The Complexity of Social Media Moderation
Moderating social media platforms is complex and expensive. Have we been doing it wrong? Is there a better approach? Possibly.

Jagmeet makes a short video about Bhangra dance moves, in which he makes a hand gesture a few times. The social media platform’s automated system flags it for moderation. Janil, sitting in a sea of cubicles at a contracted moderation firm, sees the video and, based on her training, interprets it as inappropriate and has the post deleted because it seems to be a negative political statement. It isn’t. But this is part of the complexity of social media moderation in a hyper-connected world.
For years, most social media platforms’ approach to moderation has been based on a sort of binary code of universal rules and broad brushstrokes. But what if the way platforms have been approaching moderation needs to be viewed through a different lens?
This is becoming an even bigger concern as platforms like X and Facebook pull back on moderation and take approaches like “community notes”, abrogating their responsibilities and passing them on to users. Community-based approaches can prove useful and do work in some ways, but not as the only solution.
And that lens is a multicultural one. Some aspects of moderation are fairly straightforward and easy to make a determination on: obvious spam and scams, extreme violence and sexual acts. Terrible material that, unfortunately, thousands of human moderators have to see every day, often leaving them with PTSD or other mental health issues.
Moderating social media platforms has always been a challenge, filled with complexities. A significant aspect of those challenges is the speed of cultural transmission and the nature of different platforms. Twitter (now X) and similar platforms like BlueSky are a fast, constant stream of content. Others, like Reddit or Facebook, are slower; a post can take days or even weeks to filter through the network.
Beyond the obviously bad content, much moderation requires a degree of cultural understanding of the context in which content was created. A simple dance move made by teens in America may be interpreted as inappropriate by a moderator based in the Philippines or India. And vice versa.
Most moderation for social media platforms today is done by sub-contracted companies, not by the platforms themselves. Sometimes the moderation is then sub-subcontracted again, usually to countries where labour is cheap. This has been heavily reported on, including the challenges it creates.
So far, moderation has not proven to be a problem that AI can solve either. Algorithms only go so far and are often imbued with the unconscious biases of their creators. These tools can and do help, but only to a certain degree. The final say most often rests with a human.
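To make that concrete, here is a minimal, hypothetical sketch (in Python, with made-up thresholds and field names, not any real platform’s system) of the pattern described above: automated tools score content, but anything ambiguous is routed to a human moderator rather than being actioned automatically.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real platforms tune these very differently.
AUTO_REMOVE_THRESHOLD = 0.98   # only the most clear-cut violations are actioned automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # anything ambiguous goes to a person

@dataclass
class Post:
    post_id: str
    text: str
    classifier_score: float  # hypothetical score from an automated classifier, 0.0 to 1.0

def triage(post: Post) -> str:
    """Machines handle the obvious; humans get the final say on everything ambiguous."""
    if post.classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"       # e.g. known spam signatures or previously hashed abuse material
    if post.classifier_score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human"   # an ambiguous case: a human moderator decides
    return "leave_up"              # below threshold: no action taken

# Example: a dance video the classifier is unsure about lands in a human queue.
print(triage(Post("p1", "a dance video with a hand gesture", 0.72)))  # -> queue_for_human
```

The point of the sketch is the routing, not the numbers: the algorithm narrows the funnel, but the judgment call stays human.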
The degree of training provided to human moderators varies widely. A platform that contracts out moderation often has in-house moderators too, but they are small teams compared to the sub-contracted moderation companies. The in-house teams often develop the universal rules and guidelines, which are then subject to re-interpretation by other cultures. And this is where we run into the challenges and complexities of platform moderation.
We all view the world around us from the perspective of our own culture: its biases, values, norms and customs. That’s just part of who we are. And this is why we need to consider cultural contexts in social media moderation.
Beyond the cultural aspects, of course, are the added challenges of the speed of different types of platforms, the scale of the platform itself and its reach across multiple cultures. Moderation is an expensive undertaking for any platform.
Most of the major social media platforms are based in Silicon Valley, and their approach to solving such challenges is most often a coder’s approach, seen through a binary lens and grounded in the belief that technology solves all problems. That is rarely the case, and when we’re talking about global reach and multiple cultures, it’s even less of an ideal approach. Such challenges are nuanced and complex, and need human-based approaches layered in with technology.
This is a difficult balance. Yet some platforms are taking a more balanced approach. While not perfect, Reddit has invested more heavily in a human-centric approach to moderation mixed with technology. So has Pinterest, probably the best-moderated platform. Medium, too, has taken this approach, and it’s proving effective.
But moderation will, to some degree, remain a game of whack-a-mole as cultures are never static and change generationally. Getting to a truly better place will likely take many more years, but overall we are making progress. On some platforms at least.
If more platforms not only take a more human-centric approach, finding the balance between technology and humans, but also bring in cultural awareness and context, then moderation can, in my view as a digital anthropologist, improve even further.
This means not viewing moderation through a singular lens of humans all being the same around the world. Our species is made up of a wonderful tapestry of different views and approaches to how we see and navigate the world through the stories we tell. Culture.
Instead of having one culture make decisions on content from another culture, platforms could implement an approach we might call “cultural translation networks”, where flagged content is reviewed by both the originating and receiving cultures. That might mean adding a review layer, or a mediator who understands both cultures.
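As a thought experiment only, here is a small sketch of how that routing could look. Everything here is hypothetical: the locale codes, the reviewer pools and the mediator are stand-ins for trained human moderators, not a description of any existing system.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class FlaggedItem:
    item_id: str
    origin_culture: str      # culture/locale of the creator, e.g. "pa-IN"
    flagged_in_culture: str  # culture/locale where it was reported, e.g. "en-US"

# A reviewer pool is represented here as a function returning "keep" or "remove".
# In practice these would be queues of human moderators trained in that culture.
ReviewerPool = Callable[[FlaggedItem], str]

def cultural_translation_review(
    item: FlaggedItem,
    pools: Dict[str, ReviewerPool],
    mediator: ReviewerPool,
) -> str:
    """Review flagged content in both the originating and receiving cultures;
    disagreements escalate to a mediator familiar with both contexts."""
    origin_verdict = pools[item.origin_culture](item)
    receiving_verdict = pools[item.flagged_in_culture](item)
    if origin_verdict == receiving_verdict:
        return origin_verdict
    return mediator(item)  # e.g. a bicultural reviewer with extra context

# Stubbed example echoing the opening anecdote:
pools = {
    "pa-IN": lambda item: "keep",    # origin-culture reviewers recognise the Bhangra gesture
    "en-US": lambda item: "remove",  # receiving-culture reviewers read it as a political statement
}
mediator = lambda item: "keep"       # the mediator understands both readings
print(cultural_translation_review(FlaggedItem("v42", "pa-IN", "en-US"), pools, mediator))  # -> keep
```

The design choice here is that disagreement itself is the signal: content the two cultures read differently is exactly the content that needs a mediator.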
Platforms that invest in better cultural understanding and improve their moderation processes are the ones that will see improved brand reputation and overall higher engagement, which should lead to more stable revenues. Those that don’t will see a loss of audience and lower revenues.
People are more aware of click-bait and misinformation tactics through growing cultural awareness. We’re not anywhere near fully aware, but that awareness is growing. This is evidenced by people moving away from platforms that have become, or are becoming, more toxic.
Platform moderation has always been complex, but adding the cultural lens, taking a more human-centric approach and putting humans first and technology second is, perhaps, an approach that will work better.
Proper moderation, in my view, requires publicly setting clear rules and boundaries. When platforms keep their rules and boundaries secret, or constantly shift them, users grow more and more frustrated over time, and some react by pushing the boundaries further.
Clear-cut lines, and no arbiters of what is 'true' and 'false', would improve things dramatically, I believe.