Global Cultures & Artificial Intelligence
All AI systems carry cultural and other biases. Thinking more globally, might AI end up bringing humans closer together?
Much of the discussion around the impacts and implications of Artificial Intelligence (AI) revolves around economics, politics and either a doomsday or a techtopia. These are highly Western-centric viewpoints. Yet AI is a technology that will play an important role on the global stage, which means engaging with all of our varied cultural and social systems.
So perhaps we should also consider AI’s impact and potential role more globally, with a diverse lens, rather than a purely Western-centric one. Especially if we are indeed moving towards a period when we may live in a multipolar world, as democracies struggle and autocracies become more entrenched.
After all, culture is in the code and the code is in the culture. Culture, too, is mutable, ever changing and evolving, and if some AI tools become social agents, able to act to varying degrees in our societies, then those tools take on a global, multicultural aspect.
Across African nations, AI has permeated the marketing profession far faster than in most Western nations. This seems to be driven by tough economic times and reduced marketing budgets, especially for creative work. African nations aren’t really imposing restrictions on the use of AI either, which has accelerated innovation, though mostly with Generative AI rather than other tools such as Machine Learning (ML) or Natural Language Processing (NLP), where many African nations lag.
The easiest AI tool to adopt in most countries is Generative AI (GAI), such as LLMs accessed through ChatGPT, Microsoft CoPilot, Claude, Midjourney and others, which currently have zero or low cost barriers to entry. Basic computer skills and internet access are pretty much all one needs. Many African youth, especially in more prosperous areas, are very technology literate.
Other AI tools, like ML and NLP, are more robust but usually require more significant resources, skills and time to implement, and their adoption will likely lag in developing nations. This sets off a dynamic in which those in developing nations can adopt and leverage some AI tools but not others. GAI is good, but it is narrow in scope. Africa lags in broader AI use, but there are plans to help it catch up, and that matters. Similar dynamics can be seen in some Latin American and Asian countries.
Then there are the cultural, gender and racial biases that become inherently coded into all AI tools, rarely through intent, but because developers are often unaware of their own biases. While some in the AI industry have acknowledged that culture is in the code, the industry doesn’t exactly have a stellar record of putting in good, or lasting, guardrails.
Do AI companies based in Western, developed nations (WEIRD countries) then have an obligation to consider the sociocultural implications of their AI tools when they are used in developing nations? We know that there are inherent biases in WEIRD nations. AI tools developed in China will likely have Chinese-centric cultural biases, just as ones developed in Silicon Valley will have mostly white, male-centric cultural biases. The same goes for India, Russia or Brazil.
African nations are adopting GAI tools because they are easily accessible and cost very little. But it’s about more than economic factors. Africa as a continent has a rapidly growing and highly youthful population. African youth are better educated than their parents, have increasing wealth and are very innovative. African nations are pushing forward to become more prosperous, and they are highly motivated. They may well find some very valuable and powerful uses for GAI.
The promotion of AI tools in WEIRD nations is often very individualistic in nature. The hype around GAI tools like ChatGPT, Claude, Midjourney et al. usually centres on how they can benefit the individual first and society second. This is the nature of more “me” centric cultures. Asian and Nordic cultures tend to be more “we” centric, with community and society being more important.
These “me” and “we” dynamics will likely shape how AI tools are developed, regulated and rolled out. The EU, where many member states are more “we” than “me” cultures, offers a different cultural reaction to and view of AI compared to the USA, where there are minimal regulations around AI.
China and Russia, among other autocratic nations, see AI tools from the perspective of challenging the global polity and of gaining strategic advantages, both economically and militarily. They are more motivated to ensure their AI tools are imbued with the cultural mindset of autocracy, so any regulations will likely be designed from this perspective.
As social agents become part of the digital sphere, people may have a personal AI agent and a workplace AI agent, and there may well be diplomatic AI agents for countries. Imagine a whole mass of AI diplomats dialoguing with one another. It makes for some interesting thinking: what if there’s a cultural spat and they start fighting? Or, what if they create a peace plan that could end global kinetic warfare?
Perhaps it is imaginable, too, that personal AI agents could play a role in reducing racism, reinforcing the fact that we have far more in common with one another than not. I’m not suggesting an instant cure for racism, although that would be rather nice. Historical evidence, the very data that personal AIs may train on, shows that humans prefer trade and relationships over conflict. Surely that’s a better bias to put in the code? My condolences to Hobbesian-minded thinkers.
Just as we individually see the world differently from one another, so do cultures and societies. In the past, we had a fair bit of time and space between us to figure out how to live with one another. This is less so now that today’s communications technologies have global reach.
And so nations, regions, states and provinces have evolved different laws, cultural norms and behaviours around social media, because each sees social media in a different way. Culture has always influenced technological evolution. Cars made in France were, for decades, often very different in the features they offered and where those features were placed in the vehicle. We still see different ways of answering the phone across cultures. All unique sociocultural systems.
To anticipate that AI tools, especially GAI tools, will be homogenous and developed in the cultural mindset of any one country or culture is to misunderstand how technologies have evolved over thousands of years in human societies. Nations are already competing to advance AI; it is a significant geopolitical technology, from weaponization to economic strategy.
One of the wonderful things about the world is that we have so many cultures and all the elements that make up a culture, from food, literature and architecture to economic and political systems. It is only natural then, that AI will evolve in a similar fashion. Culture is in the code and the code is in the culture. This may well be what makes AI, in the end, work better for humanity.