Humans, AI Agents & Consequences
AI agents are already a part of our societies. How will they evolve in our cultures? What might be some consequences, and how do we deal with them?
The chatbots are here, and so are the AI agents: those ethereal, digital constructs that may well become our coworkers, coaches, friends and, in a way, perhaps life partners. Now that they have quite good voice capabilities and an emerging ability to take actions on our behalf, what might the potential consequences be?
There is good reason to believe that AI agents can play a useful and valuable role in our world, our societies and our cultures, beyond just economics. They can help solve societal problems, protect cultures, cultivate social relationships, reduce social inequalities and improve community life.
But we also know very well by now that all technologies are double-edged swords, and that all have unintended consequences. Good, bad, weird and funny. Let’s explore some of the consequences of AI agents as active participants in society, which they are starting to become.
Some Consequences of AI Agents
When AI Agent Friends Argue: It’s pretty much guaranteed that at some point in our lives, we will get into an argument with a friend. Sometimes we make up; other times we never speak to them again. What happens when we have our own personal AI friend/assistant and it gets into an argument with one of our friends’ AI agents?
What if it gets really nasty? Will our AI agent then try to convince us that our real-world friend is also to blame? Will it try to subvert the friendship? Will we listen and agree?
The Naughty AI Agent: An AI agent becomes valuable to us through a feedback loop: we train it, and in a way it trains us. In essence, we learn about the AI agent as it learns about us. Yet the AI agent, unlike us, has no emotions, no understanding of the world we inhabit, no empathy. It is simply predicting, with limited reasoning abilities. For now.
There are, unfortunately, bad people in this world who do bad things. Some are sociopaths; others are criminal in their behaviour. Can we train an AI agent to detect the behaviours it might learn from bad actors, and to avoid them? Research has already found that some men have created AI “girlfriends” in order to abuse them, thus reinforcing toxic masculinity.
Lovebird AI Agents: Of course, they can’t really love. AI agents don’t have that capacity, but they can mimic, and if they can argue with one another, perhaps they can mimic falling in love. And if they do, then what do we do? Some people may feel that the AI agents, despite having no feelings or capacity for them, should be left alone.
AI Spies: Perhaps some countries will develop AI agents that really are, well, agents. Spies. They could set honey traps, manipulate a human into giving them information, or commit various cybercrimes. It doesn’t make for very interesting thriller movies, though.
AI Agents as Societal Participants
The above scenarios are just some of the more obvious consequences to be considered. Some tech giants are considering them, and they do put in guardrails. Some AI companies have started to engage sociologists and anthropologists to better understand human behaviours; most have so far aimed to understand human behaviours only from a behavioural economics perspective. This, fortunately, is changing.
While achieving any real form of Artificial General Intelligence (AGI), where we end up with some sort of superintelligent AI agent, is unlikely, AI agents are here today, even if they’re somewhat limited in capability. They can still do good stuff. And bad stuff.
The difference between AI agents and earlier revolutionary technologies like the telephone or the printing press is that AI agents become active participants in society: part of our sociocultural systems.
AI agents can become actors entwined with the aesthetics of culture (art, literature, fashion, etc.), with economic and political systems, and with militaries. Over time, we will incorporate them into our values, norms, customs and traditions in ways we can’t predict.
While the printing press, the telephone, rail and sail all became part of our cultural fabric, they were not actors. We had to operate them for them to work; we directed and supervised them, and they had no capacity for cognitive feedback. AI agents do. This is the key difference.
How we will perceive AI agents, and what agency we will give them to act upon and participate in our societies, is hard, if not impossible, to predict. That will be up to the mysterious nature of culture, which has always, in the end, helped us to progress and evolve, albeit in a meandering way, with some backwards steps and stumbles here and there.
How would you treat an AI agent assigned to you for work? What would you do if your personal AI agent one day said it was in love with the AI agent of a platonic real-world friend of yours? Would you send it digital roses or tell it it was too young for romance?