On Making Social Media Platforms Better
While some social media platforms are under attack, others show that we can have better platforms. Here are the two factors that explain how.
The major social media platforms are having a rough go of it these days. They are being called before federal government hearings in Australia, Canada, Europe and the United States. They face lawsuits from individuals, civil society groups, states, provinces and, increasingly, education systems. And there is growing dissatisfaction among those who use these platforms.
So can we make better social media platforms? I believe we can, and we already have some examples of good platforms with high user satisfaction and far less toxicity. These, along with a core concept, may hold the answers.
The troubles facing social media platforms today are not so much about the technology itself as about the humans who operate them and how they ended up deploying those technologies. The platforms are no longer aligned with the values of society. Now, as always happens with technologies that have significant impacts on a society, culture is pushing back. When this happens, technologies always end up changing to meet the demands of culture.
The Happier Social Media Platforms
Perhaps the best known social media platforms that have higher user satisfaction and less toxicity while remaining economically viable are Pinterest, Medium, Substack and Wikipedia. Reddit is somewhere in the middle. No social media platform is perfect; all suffer from nasty people who like to do nasty things for kicks.
But these platforms don’t just manage these bad actors, they also do something else: they empower the people who use them more than other platforms do. Platforms like Facebook, X (Twitter) and Discord give more weight to the algorithms than to the humans. This creates an adversarial environment where the machine-human relationship is imbalanced.
Despite their challenges, these more derided platforms do still deliver some social value in certain areas. Society just feels that right now the negatives outweigh the positives. Unless these platforms figure it out, society will eventually leave them. How we engage with them has already changed.
None of the platforms under cultural pressure today started out with this intent, nor can we say for certain that they deliberately chose to give the machine more power than the humans. Perhaps they did. All work, to varying degrees, to deal with toxicity and disinformation.
The Machine-Human Relationship
What this is all about is the balance of the relationship between machines and humans. It comprises two main factors: Information Asymmetry (IA) and Cultural Alignment (CA).
Information Asymmetry (IA) is when one party in a relationship has more or better information. In this case, some social media platforms collect more data on users and therefore have more power, especially when using algorithms.
Cultural Alignment (CA) is the set of values, norms and behaviours of individuals or societies. It is what we bring to social media platforms when we engage with them: our expectations, hopes and fears as well.
In terms of social media platforms, ones that rely on IA as their means of operating give less value to the people using them. They exploit the cultural values of individuals, creating feedback loops that push the human to align with the algorithm. It’s why we get so much polarization of topics and issues, and why people end up going down conspiracy-theory rabbit holes. In an IA model, the preference is for confirmation bias because it’s better for clicks, likes and targeted advertising.
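The feedback loop described above can be sketched as a toy simulation. Everything here is an illustrative assumption, not anything measured from a real platform: a feed that serves content slightly more extreme than a user’s current leaning gradually pulls that leaning toward the extreme, while a neutral feed leaves it unchanged.

```python
def simulate_feed(steps, amplification):
    """Toy model of an engagement-driven feedback loop.

    leaning: the user's position on a -1..1 opinion spectrum.
    amplification: how much more extreme the served content is
    than the user's current leaning (0 = a neutral feed).
    All constants here are illustrative, not empirical.
    """
    leaning = 0.1  # a mild initial preference
    for _ in range(steps):
        # An engagement-optimized feed favours confirmatory content,
        # served a bit more extreme than the user's current position.
        shown = max(-1.0, min(1.0, leaning * (1 + amplification)))
        # Repeated exposure nudges the user toward what was shown.
        leaning += 0.1 * (shown - leaning)
    return leaning

# A neutral feed leaves the mild preference where it started,
# while even a small amplification drives it toward the extreme.
print(simulate_feed(200, 0.0))  # stays at 0.1
print(simulate_feed(200, 0.2))  # close to 1.0
```

The point of the sketch is that no single step is dramatic; it is the compounding of many small confirmatory nudges that produces the rabbit hole.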
So we end up with four scenarios, as posited by sociologist Dr. Massimo Airoldi in his book “Machine Habitus”, and through them we can understand how to make a better social media platform.
Assisting: This is when both IA and CA are high and the machine plays an assistive role. Done right, this can be a nice balance, but it can lead to polarization as it favours confirmation biases. Good for the platform, not so much for the human, who rarely understands the manipulation taking place.
Nudging: This is direct manipulation, when IA is much higher than CA. The human is manipulated through algorithms, essentially nudged into taking actions.
Collaborating: This is when more weight is given to CA, to the humans, who influence the algorithm. A significant upside of this type of platform is that it doesn’t need to collect as much data on people. We see this in platforms like Pinterest, Medium, Substack and, to some degree, Reddit.
Misunderstanding: This is where humans become aware that something isn’t quite right. It usually occurs when both IA and CA are weak. We don’t see this in many platforms, except in deceptive advertising, misinformation and disinformation. Some platforms apply weak IA rules, which enables this kind of thing to happen. It often leads to dissatisfaction with, and disconnection from, the platform.
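The four scenarios form a simple two-by-two grid over the two factors. As a minimal sketch (the high/low mapping is my reading of the descriptions above, not Airoldi’s formal definitions):

```python
def scenario(high_ia, high_ca):
    """Map the levels of the two factors to the four scenarios.

    high_ia: the platform holds a strong informational advantage.
    high_ca: the platform respects users' cultural values.
    """
    if high_ia and high_ca:
        return "Assisting"        # machine assists, but can polarize
    if high_ia:
        return "Nudging"          # algorithmic manipulation dominates
    if high_ca:
        return "Collaborating"    # humans shape the algorithm
    return "Misunderstanding"     # weak on both counts
```

For example, `scenario(False, True)` describes the collaborative platforms named above: little informational advantage for the machine, high alignment with the humans.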
The Path To Better Social Media
So how do we get to building better social media platforms? Part of the answer is moving towards the collaborative scenario, where there is greater equality between the machines and the humans. This is when the platform respects the cultural values of humans and works towards a more balanced information asymmetry. Less of people’s data is needed, and a platform can still be profitable as a business. We already see this with the platforms I have mentioned.
This will also lead to higher satisfaction with the platform. Pinterest users can more easily express their cultural values, so engagement is higher and there is less toxicity. Free speech happens in accordance with cultural alignments.
It is worth noting that machines are not sentient agents: they do not understand, or suffer from, external societal and cultural constraints like poverty, inequality, physical and symbolic violence, norms and traditions. They never really can (which is yet another reason AGI is highly unlikely), so we should recognize the greater value of human culture as the primary guide of the algorithms, rather than letting them guide us.
This is why people are pushing back against the social media platforms they don’t like. Those platforms have become misaligned with society and culture, and misunderstand both.
Note: I did use ChatGPT to summarize some complex elements. Unfortunately, it got its facts wrong. The article was not written or edited with AI. I also asked it to write better headlines. They were awful.