Social Machines & Human Society
Social agents are here and evolving quickly. They are both exciting and daunting and will require whole new ways of thinking about societies globally.
We are at the start of a fundamental shift in how humans interact with the digital aspects of our lives. Driving this are Large Language Models (LLMs) and now the Large Action Model (LAM), put forward by a new company, Rabbit, with the launch of its r1 device. These AI tools are bringing social agents deeper into our lives.
LLMs and LAMs are transformative AI tools that are already having an impact on our world, though not quite to the degree they’re being hyped as having. LLMs, such as ChatGPT, Claude and Grok, are quite well known. They are mostly contextual rather than social agents, and are used for generating content, from images and video to book-length writing. LAMs are fairly new and are all about taking action: performing the tasks we ask of them.
The two combined usher in a new way of engaging with the digital sphere and with digital technologies. LAMs now make it possible to create social AI agents that can take actions on our behalf.
This raises the question of how humans will accept, adopt and adapt to engaging with these social agents. How much agency will we give them, and how much will we anthropomorphise them? And can we socialise these agents, and do we even want to?
The term “thinking machines” is bandied about a fair bit, but so far machines can’t really “think” in the way that humans do. This ties in with social agents: essentially personalised AI tools that help humans do things, from ordering dinner and a taxi to completing complex analysis of spreadsheets and documents at work.
The closest we have today to a fully functional social AI agent is the chatbot. Humans have shown an increasing tendency to become emotionally attached to these tools, and research has shown that chatbots can have significant psychological impacts on people. There are now many thousands of chatbots available, and services like Replika, Hugging Face, Kuki and others enable people to create their own chatbots quite easily and quickly. They’re fairly narrow in scope and not very high functioning, and certainly not anywhere near “thinking.”
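As a rough illustration of how accessible this has become, here is a minimal sketch of a toy chatbot loop. It assumes the open-source Hugging Face transformers library and the small gpt2 model purely as stand-ins; commercial services such as Replika or Kuki work very differently, with memory, safety layers and fine-tuned models, so treat this as a sketch under those assumptions rather than how any of them is built.

```python
# Toy chatbot loop: an illustrative sketch, not how Replika or Kuki are built.
# Assumes the Hugging Face `transformers` library and the small `gpt2` model
# as stand-ins; real services add memory, safety layers and fine-tuning.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def reply(user_message: str) -> str:
    # Prepend a simple persona prompt and let the model continue the text.
    prompt = (
        "The assistant is friendly and concise.\n"
        f"User: {user_message}\nAssistant:"
    )
    out = generator(prompt, max_new_tokens=40, do_sample=True, pad_token_id=50256)
    # Keep only the text generated after the prompt.
    return out[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(reply("What should I cook tonight?"))
```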
In the near future, we may have an AI agent that we use at work, assigned to us by the company. We may have another that is our personal AI agent, one we can name and train to take actions on our behalf. It is a sci-fi trope becoming reality.
How sociocultural systems, that is, human societies and cultures, will adopt AI agents remains to be seen, as we are just entering this phase of the technology. Today there is a wide spectrum of emotions and opinions, ranging from those who fear AI agents to those who adopt them eagerly whenever they can.
The biggest sociocultural debates around LLMs and Generative AI (GAI) concern the impacts of these technologies on economic and political systems, citizen privacy, the rule of law, the military, and the aesthetic aspects of culture such as art, music and literature. In terms of sociocultural impacts, these are quite profound.
Eventually, LAMs, and thus social agents, could make websites less relevant, change how we interact with computers and push technology further behind the scenes. This will likely take one to two decades to have significant impact, though that depends on how culture evolves these technologies.
These debates and discussions will now become even more complex as we see the rise of social AI agents for personal and business use. They may profoundly change how we interact with the digital sphere of our lives. We access the digital sphere today mostly through apps on various types of devices and occasionally by voice through smart speakers and phones.
Eventually, we will enable our work AI agents to interact with co-workers’ social agents, and our personal agents to interact with those of friends, family and the community organisations we are engaged with. This requires giving social agents a fair degree of agency so that they can take actions on our behalf. One challenge will be keeping humans in the loop to confirm the actions an agent proposes to take, as sketched below. Think you get a lot of notifications now?
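As a purely hypothetical sketch of that confirmation step, the snippet below shows one way an agent could propose an action and wait for human approval before executing it. The DinnerAgent class, its methods and the approval flow are invented for illustration and do not reflect any real product’s interface.

```python
# Illustrative sketch only: a toy "social agent" that acts on a user's behalf
# but keeps the human in the loop before committing to an action.
# None of these names correspond to a real product API.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    cost: float

class DinnerAgent:
    """A hypothetical agent that turns a request into a concrete action."""

    def plan(self, request: str) -> Action:
        # A real LAM would interpret the request and operate apps or websites;
        # here we simply hard-code a plausible result.
        return Action(description=f"Order Thai takeaway ({request})", cost=24.50)

    def execute(self, action: Action, confirmed: bool) -> str:
        # The human stays in the loop: nothing happens without confirmation.
        if not confirmed:
            return "Action cancelled by user."
        return f"Done: {action.description} for ${action.cost:.2f}"

if __name__ == "__main__":
    agent = DinnerAgent()
    proposed = agent.plan("dinner for two, vegetarian")
    approved = input(f"Approve '{proposed.description}'? [y/N] ").lower() == "y"
    print(agent.execute(proposed, approved))
```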
This also leads us to consider how humans will interact with these agents, and how we will have to think and work with them. For LLMs we have to write or speak in prompts, which is not a natural way of interacting with our world. As LLMs improve, we may no longer have to speak in prompts, but it does affect how we interface with social agents. The question becomes: who really has to change, humans or machines?
If we anthropomorphise social agents too much, how will we react when the company providing the service shuts down, gets acquired or is hacked and we lose our AI agent? Much of our personal lives will have been offloaded to agents. Should such AI agents have rights? They are artefactual machines; they are not human and never can be. But the psychological impact of their loss may be significant for individuals and even social organisations.
Some countries, like Canada and Norway, consider internet access to be a fundamental human right. Is access to a social AI agent then to become a human right? When a child is born, is it assigned a social agent that evolves alongside that person? Grows up with it? Who then protects that person’s data and privacy? Is it the government? Businesses go out of business all the time. Ideally, governments don’t. But political systems can change. This raises a lot of questions.
What happens to a social agent tied to an individual when they die? Can the agent keep working for relatives as a form of ancestral connection? What happens if dementia or another form of mental illness comes into play? If a social agent can operate a business, to whom does the wealth accrue if the human has not participated in generating that revenue?
Should a social agent commit a crime, perhaps fraud or bullying, who is responsible, and how is the agent held accountable if the human did not encourage the criminal activity?
We have entered a phase in which AI tools and platforms can no longer be left solely to computer scientists, engineers, entrepreneurs and tech giants to evolve and operate. We will need the input, guidance and advice of philosophers, sociologists, anthropologists and psychologists. Even economists will have to return, to a large degree, to their philosophical roots.
Tech companies that create these social agents and evolve them will bristle at the suggestion of letting the human sciences anywhere near their technology. This is evidenced by OpenAI, Microsoft and Google all having effectively disbanded or disempowered their AI ethics teams. They see ethics and the human sciences as blocking innovation and hobbling AI development. How this is resolved is likely to play out in the courts of democratic nations. There are geopolitical implications as well: autocracies care little for human rights and may see social agents as a way to control populations.
Social agents also have some very exciting implications. They could help detect and treat mental illnesses, provide a degree of emotional support, and take away a lot of mundane work, freeing us up for more creative, human-centric pursuits.
Ultimately, as with all technologies throughout human history, going back even to our hunter-gatherer phase, culture decides the role technologies play in societies. History also shows that we tend to improve our societies and evolve technologies in ways that benefit us. We see this in the inevitable decline of fossil fuel technologies as they are replaced by renewable energy. Cars and planes are more efficient today not just because of capitalism, but because of culture.
What we haven’t dealt with in any sociocultural system before is cognitive technologies on the scale of AI and social agents. We can look to social media and the internet for clues and guidelines about what to do and what not to do. And we will. Human culture is an incredible, often underestimated force.