Could A.I. Make Us More Human?
What if, in the end, Artificial Intelligence makes us more human? What would that even mean, and how would we begin to measure it?
The common theme today is that Artificial Intelligence (AI), along with robots, is on a relentless path to replace us all. We’ll either mostly be living in a dystopian world of tiny apartments in brutalist slums or we’ll all be sipping Singapore slings on sunny beaches. Whichever way it goes, what if, after it all, AI ends up making us more human?
It’s a perspective we don’t seem to have thought much about yet, but there are so many potential outcomes that we cannot possibly guess them all. Any assertion about the outcome is, at best, an educated guess.
For this article, let’s set aside both dystopian and utopian outcomes, along with the concept of ever creating an Artificial General Intelligence (AGI), a form of AI in which machines would match or exceed human intelligence.
AI techniques such as Machine Learning and Natural Language Processing are already used in healthcare to detect disease and develop pharmaceuticals. Other AI tools are being used to help us understand our planet, nature itself. Even to try to talk with whales. An attempt to talk to the external, the “other.”
What if it turns out that with LLMs (Large Language Models) like Claude, Gemini and ChatGPT, we can’t get rid of their tendency to hallucinate and make things up? What if video tools can’t stop adding extra fingers and toes? What if we can’t solve for racial and gender bias? After all, the culture is in the code and the code is in the culture. That may not be such a bad thing.
The rise of the internet and associated digital technologies has already had a significant impact on what it means to be human in an advanced society. Two decades on, we are realizing both the benefits and the dangers of social media, including the problem of dopamine addiction. These information technologies have affected geopolitics, economic models, warfare and education, music, art and literature, all the elements of culture.
On Becoming More Human Through Artificial Intelligence
What does becoming more human even mean? Does it mean we gain some new form of species-level awareness? Of ourselves, our place in nature? Our place in the universe? We don’t really have a benchmark that instantly says “this is what being human means today.” There is no Net Promoter Score equivalent, no “Net Human Score.”
Despite the fact that we still war with one another and still face societal challenges such as economic inequality, racism and other social issues, we have progressed as a species. There is less warfare (although that is at risk right now), we are living longer, there is far less poverty, and we are more educated and more aware of the world than at any point in human history. We have advanced.
Perhaps, though, we can use the idea of anthropocentrism, also known as human exceptionalism, as a sort of benchmark, one way to think about how AI could play a role in making us more human. Most importantly, perhaps, it could help us realize, both implicitly and explicitly, that we are not, in fact, exceptional at all.
Should we conclude that we just aren’t as special as we’ve long thought, the realization would have a profound effect on how we run our societies, our economic and political systems, and our very relationship with nature and the environment.
There are three pillars to anthropocentrism. The first is perceptual: how we sense the world around us with our eyes, ears, nose and so on. The second is descriptive: the characteristics we define as what it means to be us, Homo sapiens. The third, and this is key, I think, when it comes to the role of AI, is normative: the assumptions, theories and assertions that make us believe we are superior to fellow animals, and inflate our place in the cosmos.
Arguably, it is the normative aspect that causes so much kerfuffle. It’s the part that has us thinking other animals, like our pets, cannot have emotions or feel. That the endless mining of natural resources is quite fine. That listening to nature, as our ancestors so wisely did (and some cultures still do), isn’t necessary, since we’re exceptional and know best.
Slowly, in parts of society and academia, we are starting to realize that we aren’t as exceptional as we have long believed. One example: for many decades, the rule in biology was never to place human characteristics or behaviours onto other animals. To never anthropomorphize. This is changing. It’s becoming more acceptable in academia to consider that there may, in fact, be shared characteristics.
So then, what if, using AI tools like Machine Learning and Generative AI, we do end up having a chinwag with dolphins, whales and maybe even elephants? AI tools have already helped us understand that elephants have names for each other. Last year, marine biologists had a conversation with a whale.
Some new research suggests that plants, too, make noises to communicate, and of course there’s the Wood Wide Web. Using AI tools and arrays of sensors, perhaps we could develop ways to “listen in” to nature.
This, of course, raises the question of whether we will be able to comprehend what we are learning from other animals, from nature. And if we do comprehend it, and realize that the entire idea of human exceptionalism was utterly daft and wrong, how would we react?
In the past, a realization of this kind would affect one part of a society and then, over decades or centuries, spread outward until it slowly became accepted socioculturally at a global scale.
But it would be very different now, for over half the world’s population is connected to one another. Today, ideas, like viruses, spread at unprecedented speeds.
If AI systems, such as bots or AI agents, become active participants within our societies, which to some degree they already are, then they are part of our social feedback loops. How would that affect their algorithms and how they behave, and in turn, how we behave?
Perhaps, then, what it would mean to be more human comes down to how we react, to what we do with what we learn. Information becomes knowledge, and it is when information becomes knowledge that we can take action. So the question is… what actions will we take when we realize we aren’t so exceptional after all? What do you think?