Why We’re All Writing Badly on Purpose Now
The quiet rebellion against AI that’s happening in your inbox, your texts, and your brain.

As the bicycle became increasingly popular in the Victorian age, many doctors warned that women who rode them would get “bicycle face”, a permanent grin caused by the strain of pedalling. Some towns banned telephones, fearing people would stop talking to each other in real life. Today, people are starting to write poorly because of AI tools like LLMs (Claude, ChatGPT, etc.). Why?
We humans have a rather long and peculiar history of resisting new and emerging technologies. In some of my recent research work for clients, I am starting to notice fascinating little ways we are resisting Artificial Intelligence tools, specifically LLMs (Large Language Models).
While some will tell you that we can’t actually detect AI-written content, I’d argue that we can, to a degree, although more research does need to be done on this. What I am seeing is weak signals in the noise of cultural resistance.
Humans are quite good at pattern recognition. We are also all inherently storytellers. It’s how we make sense of the world and can work as social units. From the hundreds of conversations I’ve analysed recently around LLMs, I’m noticing some consistent comments: the overuse of the em dash by LLMs, and perfect spelling and grammar paired with arguments that don’t always flow logically. As if there is a sort of monoculture to the output of LLMs.
The output of LLMs feels somewhat non-human. It’s similar to the uncanny valley phenomenon, when we encounter robots or animated characters that are close to human, but not quite right. As people discuss these LLM quirks, we are developing what sociologist Émile Durkheim called a collective conscience. We’re developing a sort of unconscious resistance to LLMs.
People are intentionally removing the em dash from what they write, be it long or short. They are deliberately leaving spelling mistakes, twisting grammar in different ways, using run-on sentences and changing paragraph styles. It’s a resistance to what we may feel is homogenised digital content that somehow just seems, well, “off” to us.
The deliberate preservation of these “flaws” becomes a form of embodied resistance to disembodied intelligence.
Why We Resist New Technologies
We can’t quite know how our stone-tool-wielding ancestors resisted, well, new stone tools. But from our hunter-gatherer period onwards, and especially in more recent times, we have a good sense of the pattern. Almost always, new technologies are seen as threats to moral and social order, family structures, or religious and cultural values. Put simply, change scares the crap out of us.
We also fear that a new technology might physically alter us. When telephone wires went up, people thought they would spread disease much faster. There’s “bicycle face”, and more recently there was “Nintendo thumb” from gripping the controller too much.
We fear impacts on the social fabric of our cultures too: in its early days, writing was seen as destroying authentic human connection. More recently we feared that using Google (or any search engine) would make us stupid. Even today, some fear using LLMs will make us stupid as well. I think that’s a bit daft.
This long history of reacting to new technologies can also be seen as culture’s immune system responding to something foreign and not understood.
Of course, one can’t get through an article such as this without bringing up the Luddites as well. But they weren’t against technology; they were against the economic upheaval of losing their jobs to technology. It was made worse by a government that sided with the industrialists and refused to help the Luddite movement. Who wouldn’t rise up?
We often see new technologies as creating “fake” or “artificial” versions of us real humans and our activities. In its early days, photography was not seen as art. Recorded music wasn’t a real performance. Now we see AI-written materials as “AI Slop”, and in truth, it is hurting the brands that use it, eroding customer trust in subtle but more damaging ways.
When we see or read AI Slop, we think whoever put it there doesn’t care about us; they’re just monetising (trying to, anyway) or cutting costs.
The Cultural Dynamic of Resisting Large Language Models
So we may well be reacting to LLMs in this way for the reasons above and because we always, consciously or not, fight to express our humanity and what it means to be human. It’s why we’ve been asking why for millennia.
While LLMs can do some rather interesting things, and I quite like them and use them regularly, I also understand the technology behind them and their limitations. As an analyst, too, I have to step away from my biases as best I can. But though the signals may be weak right now, there’s something peculiarly human happening, and that’s quite lovely.
But for most people, LLMs can seem quite dazzling, giving off the impression of real intelligence and thus seeming somewhat scary. Yet in small and interesting ways, we are already resisting the machines. Watch out there, Mr. Terminator, you may not be back.
>>>More recently we feared that using Google (or any search engine) would make us stupid.<<<
Google didn’t make us stupid, but I would argue that Google Maps did. In the days before Google Maps, we all seemed to be able to figure out how to drive from place to place, and indeed would have a location pretty darn well mapped out in our minds after one or two trips. Now it seems we can’t find much at all without our GPS. It was a human skill, quickly lost.