Algorithms, Trade Wars & Sovereignty
You might not think that algorithms would have an impact on trade & tariffs. Surprisingly, they might well have one. In dangerous ways.

Melody sat on the couch in her flat in Berlin one evening, laptop open, searching for a new pair of running shoes. A digital nomad, she’d been working there for a few months, loving her daily runs. All she could find were German brands. She was messaging with her sister working in Beijing, who said she couldn’t find anything but Chinese brands.
Later that evening Melody tried to share a documentary with her colleague Ahbed in Canada to watch together. Ahbed couldn’t find it. All he got was a polite message about content being optimised for regional preferences. This wasn’t a copyright thing. It was an algorithmic block between countries, steering people toward nationally approved content.
These are but two examples of the potential rise of algorithmic sovereignty in a changing geopolitical world. An almost invisible contest for digital cultural territory where nations encode their values, economic preferences and strategic interests in their digital architectures. A move that would mediate our relationship with reality itself.
It could go deeper than this. Impacting trade diplomacy and agreements. Perhaps sparking what we might call an arms race of perception management where algorithms become tools of statecraft. How might algorithmic sovereignty look?
In a time of trade wars focused mostly on physical goods, minerals, oil and gas, the role of algorithms isn’t really top of mind for most of us, yet they’re playing an increasingly impactful role in our societies. In some ways, we might view algorithms as a form of invisible, or at least opaque, protectionism.
We live in a phygital world today, spending many hours online every day: social media, forums, banking, searching, working. So algorithms, which play a key part in what we see and how we engage, can be designed to align with national values, identities and norms without explicitly labelling foreign offerings, from products to content, as inferior.
This forms a sort of “structural violence”, as anthropologist David Graeber might have called it. The rules appear neutral yet systematically disadvantage other participants, such as other nation states. Because of this opacity and subtlety, it is hard to prove what the algorithms are doing, offering countries a form of plausible deniability.
By controlling which options are presented, whether types of content or products, this algorithmic design produces what behavioural economists call “choice architecture”: people are guided toward domestic options without alternatives ever being directly blocked. Algorithms as invisible trade barriers.
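To make that concrete, here is a minimal sketch in Python of how such a ranker could tilt results. The product data, the `domestic_boost` parameter and its value are all invented for illustration; real systems would be far more subtle.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    origin: str       # country code of the seller
    relevance: float  # base relevance score from the search engine

def rank(products: list[Product], home: str,
         domestic_boost: float = 1.3) -> list[Product]:
    """Re-rank results, quietly multiplying domestic items' scores.

    Nothing is blocked and nothing is labelled foreign; domestic
    options simply float to the top of the list the user sees.
    """
    def score(p: Product) -> float:
        return p.relevance * (domestic_boost if p.origin == home else 1.0)
    return sorted(products, key=score, reverse=True)

results = [
    Product("RunFast X", "US", 0.92),
    Product("Sprint King", "CN", 0.90),
    Product("LaufSchuh Pro", "DE", 0.85),
]
for p in rank(results, home="DE"):
    print(p.name, p.origin)
# LaufSchuh Pro DE   <- boosted past higher-relevance foreign shoes
# RunFast X US
# Sprint King CN
```

The user never sees a block, only a subtly tilted list, which is exactly what makes this so hard to prove.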
We already see this to some degree, such as the value divergences between the US and EU, with their different approaches to privacy, speech and market regulation. Think the EU’s GDPR versus American views on free speech. The result is fundamentally different algorithmic governance models, despite shared values around democracy.
China has long filtered various forms of content and digital services within its borders. It deploys algorithms to exploit democratic systems, such as its interference in elections in democratic nations, yet keeps its own systems closed. This creates a kind of values competition that we are also seeing play out in the real world. Algorithms help enforce this asymmetry.
It’s not just autocracies versus democracies either. As with the EU and US example above, even allies are already finding their information environments diverging. American trade policies are seeing nations like Canada evolve algorithms that push “elbows up” memes. Similar ones are evolving in Denmark and Greenland over threats to their sovereignty.
What we are seeing in the infosphere, alongside real-world trade wars, is a battle of classifications and taxonomies via algorithms. These approaches work at the nation-state level to determine how the digital world is structured and categorised.
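As a toy illustration, with entirely invented taxonomies and labels, here is how the same headline can come out tagged very differently depending on which nation’s classifier processes it. The category map itself carries the politics.

```python
# Hypothetical national taxonomies: same keywords, different labels.
TAXONOMIES = {
    "nation_a": {"protest": "civic engagement", "tariff": "unfair trade practice"},
    "nation_b": {"protest": "social disorder", "tariff": "economic self-defence"},
}

def classify(text: str, nation: str) -> list[str]:
    """Tag text using one nation's taxonomy."""
    return [label for keyword, label in TAXONOMIES[nation].items()
            if keyword in text.lower()]

headline = "Protest erupts over new tariff plan"
print(classify(headline, "nation_a"))  # ['civic engagement', 'unfair trade practice']
print(classify(headline, "nation_b"))  # ['social disorder', 'economic self-defence']
```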
This can destabilise truth itself, a second-order effect that undermines shared realities. When this happens, the shared factual basis of a democracy erodes. Paradoxically, however, the algorithmic systems of control imposed to protect a democracy can themselves become anti-democratic.
So how do we address these issues? The average citizen doesn’t pay much mind to them in a world facing geopolitical realignments, rightly more concerned with jobs, housing and food as they face the impacts of tariffs and possible stagflation. What might be done?
It’s a tough question to answer because we’ve never been here before. This is quite new to humanity in a hyper-connected, always-on world. It requires both structural solutions through global organisations and societal approaches such as digital literacy and education.
International standards bodies such as the IETF for internet protocols, or the WTO for trade, could require algorithmic transparency across geopolitical divides. But that would require nations with differing value systems to participate and comply, which is uncertain.
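What might such transparency look like in practice? One hypothetical form, and this schema is a sketch of mine rather than anything the IETF or WTO has proposed, is a machine-readable disclosure that auditors in any jurisdiction could parse and compare.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AlgorithmDisclosure:
    """A hypothetical transparency record a trade regulator could require."""
    operator: str
    purpose: str
    ranking_signals: list[str]   # what the ranker actually weighs
    origin_weighting: bool       # does item origin affect ranking?
    origin_weight_range: Optional[tuple[float, float]]

disclosure = AlgorithmDisclosure(
    operator="ExampleSearch GmbH",
    purpose="product search ranking",
    ranking_signals=["relevance", "price", "seller_origin"],
    origin_weighting=True,
    origin_weight_range=(1.0, 1.3),
)

# Published as JSON so auditors in any jurisdiction can parse it.
print(json.dumps(asdict(disclosure), indent=2))
```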
There could be reciprocity requirements in trade agreements, where nations say “if you want your algorithms to operate in our market, we need equal transparency into how they function.” Challenging.
People care more about avoiding being fooled than about abstract principles. For most citizens, algorithms are an abstract concept, so framing the issue as protection from manipulation may be a better approach, one that draws on models from behavioural economics.
But a purely technocratic approach might further alienate citizens already trying to navigate a changing, complex world. Whatever solutions we come up with must bridge the gap between digital governance and kitchen-table concerns.