Workplace Culture & AI: It Gets Complicated Fast
Bringing AI agents into a business is a more complex undertaking than adopting other AI tools. Agents will impact workplace culture and present new challenges that other AI tools don't.
Imagine, if you will, a couple of years from now, when most workplaces have assigned an AI agent to each employee and manager. One day, the CEO asks the AI agent what it would decide on an issue, and the CEO disagrees with the answer. This starts to happen a lot. But the AI agent has been given agency. A battle of wills and egos plays out. Things go badly for the company. Who gets fired: the CEO or the AI agent? Who makes that decision? The human board of directors or the board's AI agent?
This may not be that far-fetched. We have already begun to give some AI tools, like ChatGPT, a degree of agency. We are anthropomorphizing AI at a rapid pace. In an organisation where AI agents are engaged, they will learn not just from corporate data, but from their social interactions with employees, customers and suppliers. They, in effect, become social actors.
In this article I look at some of the potential implications of AI in the workplace as a cultural (digital) anthropologist. While most discussions focus on productivity alone, it is becoming increasingly important to look at the overall workplace culture and AI's place in it.
In this way, we can better mitigate risks and plan better ways to make AI agents more successful, and thus make humans, and the business, more successful. AI agents are very different from standard Information Technology (IT) because they have agency. IT systems are not neutral, but IT systems don't make decisions. Humans do. This is a critical distinction, and it blurs even further with the deployment of machines that are social.
First I will describe what I mean by an AI agent and by machines as social actors. Then I look at some potential issues to be considered. The deployment of AI agents will need input from HR professionals, senior management and employees at all levels.
What Is An AI Agent As a Social Actor?
We are all social actors. Essentially, social actors are the humans who take actions that shape our social lives. This can be at the micro level, such as parents making decisions and taking actions, or at the macro level, such as the leader of a country. Social actors make the rules and policies by which we run our societies, from rules at home to the rules of a nation or a body like the United Nations.
Rules, policies and procedures can, to varying degrees, be seen as algorithms, which are the foundation of most AI tools, such as Generative AI systems like ChatGPT or Claude, also known as LLMs (Large Language Models).
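To make the analogy concrete, here is a minimal, entirely hypothetical sketch of a workplace policy written as an algorithm. The policy and its thresholds are invented for illustration, not taken from any real company.

```python
# Hypothetical workplace policy expressed as an algorithm.
# Invented rule: approve leave up to 5 days if the employee has
# enough accrued balance; longer requests go to a manager.

def leave_decision(days_requested: int, balance_days: int) -> str:
    if days_requested > balance_days:
        return "deny"       # not enough accrued leave
    if days_requested <= 5:
        return "approve"    # routine request, handled automatically
    return "escalate"       # longer leave needs a manager's sign-off

print(leave_decision(3, 10))   # a routine three-day request
print(leave_decision(10, 15))  # a longer request, escalated
```

The difference with an LLM is that it learns such rules statistically from examples rather than having them written out explicitly, which is part of what makes its "policies" so much harder to inspect.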
These LLMs are built from training data, often with the help of microworkers who assist in the training. Then they're put into the wild (the world), where they continue to be trained by those using them. Every time you use Midjourney, ChatGPT or other such tools, you are a trainer. So these LLMs learn from us as we learn from them. It is a feedback loop.
In the case of an AI agent in the workplace, this might mean the company acquires an LLM and then trains it on its own data. We see this with chatbots today, which can be used for internal knowledge management as well as customer service.
A company may, for example, give each worker an AI agent. This agent then becomes part of the suite of productivity tools an employee uses to do their job. For the most part, businesses see AI agents and tools as simply productivity tools. But as AI agents continue to learn from employee use, and as they are given a degree of agency and trust, they become social actors. Or social machines.
LLMs, through this feedback loop, will take on aspects of the workplace culture in which they are used. If it is a toxic workplace, the AI agents are likely to gain toxic traits. If it is an empowering workplace with a good workplace culture, the AI agent gets those traits.
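The drift described above can be sketched as a toy feedback loop. This is emphatically not how an LLM actually trains; the update rule, the "tone" scale and the learning rate are all invented for illustration. It shows only the mechanism: repeated exposure pulls an agent's behaviour toward the prevailing culture, whatever that culture is.

```python
# Toy model (not a real LLM): an agent's "tone" drifts toward the
# tone of the workplace interactions it learns from.
# Tone is a number in [-1, 1]: -1 = toxic, +1 = empowering.

def update_tone(agent_tone: float, interaction_tone: float,
                learning_rate: float = 0.1) -> float:
    """Nudge the agent's tone a fraction of the way toward one interaction."""
    return agent_tone + learning_rate * (interaction_tone - agent_tone)

def simulate(agent_tone: float, interactions: list[float]) -> float:
    """Run the agent through a sequence of interactions."""
    for tone in interactions:
        agent_tone = update_tone(agent_tone, tone)
    return agent_tone

# The same neutral agent, placed in two different cultures:
toxic_result = simulate(0.0, [-0.8] * 50)    # drifts strongly negative
healthy_result = simulate(0.0, [0.8] * 50)   # drifts strongly positive
```

The point of the sketch is that nothing in the update rule is itself toxic or empowering; the agent simply converges toward whatever it is fed.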
You can start to see how AI agents, given degrees of agency, become social actors. We know that human culture enters code and AIs; it's why we have racial and gender bias in AI. We've not yet figured out how to solve that, and if an AI company says it has, it is straight-up lying.
This doesn't mean AI shouldn't be used in business. Not at all. Done right, AI agents could help drive innovation and be of immense benefit to a company. But AI agents cannot be considered simply productivity tools that are an extension of IT systems.
How AI Agents Make It Complicated. Fast.
In almost all of the books, papers and research I've read regarding the use of AI in business, the focus is quite predictable. And narrow. It's Taylorism (scientific management): productivity purely through automation. This sets up the use of AI to fail; it is more likely to hurt productivity and could lead to a loss of profits. Why?
Because AI agents, as I've described, are social actors. Every single business in the world runs on people. No people, no business. From employees and management to customers and suppliers. And humans are, well, human. We do annoying things and funny things. All workplaces have a culture, be it toxic, neutral or good. Businesses are inherently social organisms.
Consider that AI agents are trained not just on corporate data and information, but on their interactions with employees. If a CEO is a toxic leader and the management team is too, then the AI agent will, through feedback loops and learning, adopt that toxicity. This would in turn filter through the organisation. If an AI agent is responsible for customer service engagement, and maybe social media too…well, a PR crisis could well be in the making.
Employees today don't tend to stay in their jobs for decades. Even CEOs and other executive-suite managers rarely stay for extended periods. So what happens to the AI agent assigned to them when they leave? This has implications for the knowledge management that helps businesses operate. When a long-term employee leaves, the business loses tacit knowledge, which is rarely captured anywhere, and when it is, usually captured quite poorly.
So an AI agent that has gained an immense amount of knowledge over a few years, but is intimately tied to an employee, will also have developed a “personality” and will likely have characteristics of the fired, retired or moved-on employee. Characteristics that the company may not want. Can those elements be cleaned out of the AI agent so it can be assigned to a new employee? No one knows. But it's a clear risk.
What if an employee who is fired or laid off claims, through a wrongful dismissal suit, that some elements of the AI agent they used belong to them? Do they? What are the rights of the employee in this regard, and could those elements be carved out from the AI agent and perhaps moved over to someone's personal AI agent? Corporate intellectual property and secrets could be at risk. Given the opacity of how most AI tools work, extracting those characteristics may be impossible.
Most businesses struggle today with what information an employee takes with them, and what information an employee may, with no malice, upload to a Cloud service to use while working at home on their own device. Such information leaks are a common occurrence. AI agents amplify this issue.
Might companies create an AI Superagent CEO that supervises all the various other AI agents in the company? Who has governance over that AI Superagent? The CEO alone? The board of directors? Since much of the decision making of AI tools today can't be understood, being inherently opaque, how can the human CEO or board of directors ever understand how a Superagent makes a decision? This is important to governance.
Coming back to opacity in decision making: what happens when an AI agent delivers a decision, the company goes with it, and it turns out the AI agent made up facts or hallucinated? Who gets fired? How do you fire an AI agent? What is the cost to fix it?
If a corporate culture is inherently aggressive and known for tough negotiation, and an AI agent is assigned the duty of negotiating with suppliers, even one AI agent negotiating with another could end up fighting, and that could lead to massive supply chain disruption. Where is the human in the decision-making process?
How are productivity gains understood, and what are effective metrics for success? How do OKRs, Lean Six Sigma and other such systems work with AI agents tied to employees? Is there a measurable productivity gain if the employee spends increasing amounts of time managing the AI agent(s)? Where is the point of diminishing returns?
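One way to frame that last question is with a toy model: each agent adds output, but the overhead of coordinating agents grows faster than linearly. Every number here is an invented assumption, not a measurement; the sketch only shows that, under such assumptions, a point of diminishing returns exists and can be located.

```python
# Toy model (all constants invented): net productivity for an
# employee managing n AI agents. Each agent adds output, but
# coordination overhead grows roughly with the number of agent pairs.

def net_gain(n_agents: int, gain_per_agent: float = 1.0,
             overhead_per_pair: float = 0.15) -> float:
    """Output gained minus time spent coordinating the agents."""
    output = gain_per_agent * n_agents
    overhead = overhead_per_pair * n_agents * (n_agents - 1) / 2
    return output - overhead

# Under these toy assumptions, find where adding agents stops paying off.
best_n = max(range(1, 20), key=net_gain)
```

Under these particular made-up constants the optimum sits in single digits, and net gain eventually turns negative; the real-world question is what the actual gain and overhead curves look like, and nobody has measured them yet.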
This is not an exhaustive list, but when we look at AI in the workplace from a cultural perspective, beyond the narrow confines of scientific management and Taylorism, we begin to see the inherent risks for a broad swathe of industries. It will mean a whole new school of thought on the meaning of productivity and, well, scientific management.
These are issues no current Generative AI or other AI tool can understand or imagine, because they are predictive tools, not reasoning ones, and have no capacity to understand human culture.
Machines can't understand human culture because it is always in flux and evolving. Businesses don't respond to technology innovations; they respond to how humans use technologies for competitive advantage through disruption or business model evolution.
We have, to date, oversimplified the role of AI in the workplace when it comes to LLMs and Generative AI. Other AI tools, such as Machine Learning and Natural Language Processing, are being used across many industries with success. But they don't have agency like LLMs do, and they serve specific use cases, what is called narrow AI. Deploying AI agents is a whole other layer of complexity.