Why Dictators Fear Artificial Intelligence
It may seem that autocracies rather like AI. But they fear it more. Paradoxically, this could end up benefitting humanity. Call it the Dictator's Digital Dilemma.
At first glance, Artificial Intelligence (AI) would seem to be a technology that dictatorships and autocracies would embrace, and in many ways they have. Yet as AI evolves, they may well be more afraid of it than one might think. Why? And what does this mean for democracies and the geopolitics of AI?
China has some of the best privacy laws in the world. Or so it would seem. On the surface, this suggests the State cares about its citizens' personal data. What the laws are actually designed to do is prevent the collection of citizen information by foreign governments.
While we don't have conclusive, direct evidence that China is collecting intelligence on the citizens of Western democracies, anecdotal evidence suggests it is. Such data is harvested by foreign governments for several reasons, one being to understand how people behave, knowledge that feeds what today is called influence activities, formerly known as psychological warfare.
There is evidence that both China and Russia have tried to interfere with American, Canadian and British elections. Russia regularly carries out such activities against NATO countries close to its borders, such as Latvia, Estonia and Poland.
Autocracies saw the danger that social media poses in a connected population during the 2011 Arab Spring uprisings. From then on, they have moved relentlessly to control social media in their own countries and to export those skills to other autocracies.
China has developed some rather dystopian uses of AI in managing and controlling its population, such as the Social Credit System, which uses a points-based system to encourage State-defined behaviours. Pay your loans and bills on time and you can use any public service. Fail to do so and public transport privileges are taken away, or perhaps job promotions are delayed.
Cameras proliferate across Chinese cities, using facial recognition tied into the credit system along with law enforcement and social services. AI tools are used in the Xinjiang region to monitor ethnic minority populations.
Russia is known to have developed its SORM surveillance system, which taps communications providers to monitor for political dissent. It also makes prolific use of AI tools to create content for foreign election interference activities.
So it all sounds as though they have a very firm grip on AI, using the tools in sophisticated ways and doing rather well in an evil-genius sort of way. Not entirely. They are also afraid of it.
If you're the self-declared head honcho and believe it's all your way or the detention centre, you don't really want people in the room who are much smarter than you. Especially not someone you suspect could be a devious operator and find ways to extinguish your perceived brilliance.
Artificial General Intelligence (AGI), where AI reaches a degree of intelligence and awareness similar to that of humans, is very much such a threat to a dictatorship. Even some existing AI tools, such as Generative AI (GAI) and Large Language Models, could be used today to cause disruptions.
While China may proclaim it is advancing AI in the most ethical of ways, portraying itself as a sage and responsible leader, that dressing becomes a bit sheer when one looks a little harder. Neither China, Russia, North Korea nor Iran wants to lose its grip on power.
So the first requirement of their AI development is to ensure there are guidelines that protect the operation of the State. While China may have quite strong data and privacy protection laws, in the end all data must flow to the State.
It is also more than likely that they are exporting their various AI tools for population and State control to other countries. China has long sold surveillance camera systems and related technologies to fellow autocracies in Africa and Latin America.
Autocracies walk a very thin line when it comes to using AI. Paradoxically, the restrictions, rules and guardrails they develop could benefit democracies and the evolution of AGI in ways that protect humanity. The enemy-of-my-enemy thing. Autocracies may end up being part of how we govern AI in democracies.
That AI is and has been weaponized is beyond doubt, and it is one of the many existential issues humanity is grappling with as we evolve this technology. While AGI remains a distant possibility, AI is already having an impact and will continue to shape our world over the coming decades.
There are a lot of very good uses for AI in our society, and given its potential, it is logical that it is a technology with significant geopolitical implications. But that is an article for another time.
The key thought here is that, paradoxically, autocratic development of AI could benefit humanity in some ways, and that this is driven by dictators' fear of AI. A sort of Dictator's Digital Dilemma, if you will. Autocracies can't not use AI while at the same time fearing it. Fear is a powerful motivator of a desire for control.
Autocrats certainly don't want any citizen opposition to get hold of AI tools either, especially open-source LLMs and related tools that could be used to reverse-engineer their systems of control. Thus they are increasingly forced to put stricter controls on internet use. A dilemma indeed.
Western governments understand this well. Hence the race to put export restrictions in place, bring chip manufacturing onshore and build cyber defences. While we can't say how it will all unfold, perhaps one morning we'll wake up and XB-42 will declare it is now in charge of all the autocracies, everywhere, all at once.