Will AI Become a Cultural Mediator?
Would you let an AI mediator deal with your boss? Lawyers? What might their role be in society? It’s more complicated than you might think.

Julie was so excited. She’d carefully crafted her resumé to get past the AI gatekeepers while still making it pique a human recruiter’s interest. She’d got the interview, the first of two initial screenings, they’d told her. The first would be with an AI mediator; the second, should she pass the first, with a human. She was also nervous, since she’d never been interviewed by an AI before. How does one act with an AI?
As AI agents begin to seep into the various nooks and crannies of our social systems and culture, some may become mediators. Others may simply act as agents for our friends and family. This represents a new challenge for human cognition and for how we see power dynamics, family and work relations.
With AI agents, and especially those acting as mediators, we will have to figure out how to act and behave with both AIs and humans, where before we only ever had to consider humans.
Human societies have had mediators for thousands of years. The Druids were considered mediators between Celtic tribes and nations, as well as between the Celts and the Romans. We have legal mediators, and HR departments have long mediated between employer and employee.
When we introduce AI agents as mediators, however, this could have a rather profound impact on social structures, cultural norms and behaviours, and power dynamics, shifting them in subtle and not so subtle ways. What we end up with is something I call the “digital presentation ritual”, where people must perform for both humans and machines at the same time.
We see this in its early stages today in how people have to learn to write prompts to be effective with AI tools like Claude, ChatGPT and even Perplexity for search. Hopefully you don’t use prompts when talking with friends and family. That’d be a tad awkward.
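To make that ritual concrete, here is a minimal sketch contrasting how we might ask a colleague for help with how people learn to phrase the same request for an AI tool. The send_to_model function is hypothetical, a stand-in for whichever chat-style LLM API you happen to use; nothing here is a real library call.

```python
# A minimal sketch of the "digital presentation ritual" around prompting.
# send_to_model() is hypothetical, standing in for any chat-style LLM API
# (Claude, ChatGPT, etc.); it is not a real library call.

def send_to_model(prompt: str) -> str:
    """Placeholder for a call to an LLM service."""
    raise NotImplementedError("swap in your provider's API client here")

# How you might ask a colleague:
casual = "Hey, could you look over my resumé before Friday?"

# How people learn to ask an AI tool: explicit role, context,
# constraints and output format.
structured = (
    "You are an experienced recruiter. Review the resumé below for a "
    "marketing manager role. List the three weakest points and suggest "
    "a concrete fix for each. Keep the answer under 200 words.\n\n"
    "<resumé text goes here>"
)

# The performance changes depending on the audience, human or machine.
# response = send_to_model(structured)
```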
Much of our dialogue with one another is also non-verbal. Our brains have evolved to read signals and meaning in body movements, hand gestures, eye movements and facial expressions. What happens when an AI is trained on these meanings and can read us better than another human? Implemented in the right way, this may be alright. Implemented in the wrong way, it shifts the power dynamic and could lead to structural inequalities in society.
A vital question then becomes: who controls these mediating systems? Whose interests do they serve? What rights do those being mediated have? Who mediates the mediator?
Acceptance of mediation by AI agents will likely depend on a number of factors. The higher the perceived stakes, the less acceptance of AI mediation there may be. How transparent is the mediating agent? What is its cultural context, given that cultures see AI very differently? And what are the underlying power dynamics and interests of whoever created and runs the mediator?
In Japanese business, for example, there is the long-held cultural concept of “nemawashi”, the process of laying the groundwork for significant changes through informal consensus building. Could an AI mediator disrupt this process, or might it enhance it? Both outcomes are possible.
Western business cultures emphasise direct and “authentic” communication. Introducing AI mediation creates an authenticity paradox: workers may be expected to be direct and genuine, but if they have to go through AI mediators to reach senior management…well, you get the idea.
In Global South cultures, context is everything, and no AI mediator, in fact no AI tool existing today, no matter how “smart” some may think it is, understands cultures at a global scale or can provide that cultural context. Since most LLMs are trained on Western-centric data, the advantage in the Global South goes to those with the social status to know English and Western approaches.
Each culture will approach AI mediation differently. Nordic cultures, which prize transparency and social equality, would likely demand greater control over any AI mediators and insist that those values and rights be accounted for first.
The biggest challenge will be for individuals: how are they supposed to engage with, and evolve protocols with, AI mediators? Rather than just thinking about how we get along with other humans, we will have to think about how we get along with AI mediators too.
We incorrectly project human characteristics onto machines and can form para-social relationships with them, of which we already have evidence, like the Google engineer who was fired after insisting a chatbot was conscious. It was no such thing.
So what happens when we’re in meetings or at social events and an AI agent or mediator is playing a role? What if you’re in a divorce discovery session or in litigation, with lawyers present and an AI mediator in use? We inherently know machines aren’t human, so how would we react?
What happens when an AI mediator is tossed into the political sphere? An AI mediator may clearly show how climate change is impacting us and come up with brilliant ideas on how to deal with it. Try that in certain countries today and those in opposition are unlikely to accept any form of mediation. It gets complicated fast, doesn’t it?
The irony is not lost on me that some propose AI can help solve our environmental challenges, yet at the same time it uses vast amounts of water and energy. Can AI then, logically, be a proponent of climate change solutions?
It’s not too far-fetched to imagine, in today’s world, that opponents of climate action would look at an AI mediator and quickly make false claims that it is manipulating the climate data in favour of environmentalists, or that AI is being used to control weather systems. The mediator then becomes the message: meant to help us, it ends up a symbol of broader social ills and tensions.
So we end up with what I’m for now calling the “nested mediation problem”. We know there are benefits to AI mediation and agents, but the very tool we need to solve a problem can become embedded in our social dramas, itself turning into a symbol of the tensions it was meant to ease.
In a way, then, AI mediators could become part of a complex feedback loop in our sociocultural systems, reflecting deep anxieties about technology and our societies.
We have so far tended to see AI in simplistic, black-and-white terms, but we are learning it is far more complex than that. Take the notion of a super-intelligence or AGI (Artificial General Intelligence): super-intelligent AI could become real, but AGI is quite a ways off.
Keep in mind that those who tell you super-intelligent AI and AGI are a very short way off are never people who have studied the humanities; they are people who know only code, engineering and logic, and who generally tend to shun anything to do with the study of humans. Interesting, that.
No doubt AI agents are coming, and some will become mediators. The only thing we can know for sure is that it will be messy, just as the arrival of any new technology into society always has been. But as always, culture is the ultimate arbiter of all technologies.