Children and AI Agents in a New World
Understanding the profound shifts in how children learn, think, and connect as AI becomes their constant companion.

Your eight-year-old daughter Anna is settling in to do her homework and decides to ask her AI assistant, Peku, to help her with some math work. Then she slips seamlessly into a chat with Peku about how her best friend hurt her feelings at school today. The AI assistant responds patiently, as it always does, never tiring of the interactions with Anna, never having a bad day, never getting stressed out.
What we are witnessing today is the first generation of children who will never know a time when humans didn’t have such a companion or assistant. This means a fundamentally different kind of human. The question, then, isn’t so much whether this is good or bad, but whether we understand what we are creating. What is happening?
I’m looking at this issue through my lens as a digital anthropologist, and I’m only touching the surface of a very complex set of issues. Since large language models (LLMs) are new to the world and only just entering society at scale, it is hard to truly predict any outcomes. But as with all technologies, there will be unintended consequences.
Despite their seeming sophistication, LLMs are fundamentally low-context communicators. They process explicit information very well, yet struggle with implicit meaning, cultural nuance (in fact, they have no understanding of culture at all), and contextual interpretation. This creates a sort of anthropological paradox.
Children who grow up with AI assistants will be trained and educated in a completely different way than prior generations. It’s kinda like teaching a child to speak French by having them interact with someone who only works from English translations: the child may come to speak perfect French, yet have no understanding of the cultural soul of the language.
So one question becomes: which cognitive muscles are children building, and which are atrophying? Let’s consider cognitive scientist Merlin Donald’s stages of cognitive evolution: mimetic, mythic, and theoretic culture. Each stage layered new, socially transmitted ways of thinking onto the last, and each depended on the surrounding culture.
The culture and society we grow up in play a vital role in our development. LLMs may disrupt this: they provide seemingly authoritative answers, yet have no cultural context, no emotional resonance, and none of the familial and societal bonds so important in child development. So kids may end up superior at navigating the information landscape (the infosphere), but far less skilled at reading social cues, understanding implicit cultural knowledge, or developing the embodied intelligence that comes from interacting with other humans.
What we may be seeing is the rise of “cyborg children” whose cognitive development co-evolves alongside LLMs, or AI agents if you prefer. While this sounds dystopian, it’s not necessarily so. But it requires us to rethink what we mean by human intelligence and development.
Throughout Homo sapiens’ history, parents and other prominent societal figures have served as educators and mentors. The old saying that “it takes a village to raise a child” is true. So what happens when kids form attachments to AI agents that lack genuine emotion, merely mimic it, and have no lived experience of struggles and wins? This could create new forms of identity development.
And then, of course, there are second-order effects. If kids get used to AI companions that never judge them, never go on vacation or travel the world, how does this impact their ability to navigate the complexity and messiness of human societies and relationships?
Human learning involves a lot of necessary friction: struggle, confusion, and, eventually, mastery. If AI agents reduce this friction too much, it could undermine the development of persistence, frustration tolerance, mental strength, and the satisfaction we get from overcoming a challenge. How, then, do we learn to be human in different cultural contexts?
Children also learn differently in different cultural environments. In more collectivist cultures, like many Asian and Nordic countries and indigenous societies, learning isn’t just school; it’s a whole-of-society affair built on sustained social engagement. LLMs could short-circuit this type of learning, leaving the child with information but little or no societal context.
Individualist societies, like America and some European countries, could see AI agents amplify traits of personal achievement, self-reliance, and individual problem-solving, creating hyper-individualised childhood development. But a child with an AI agent may only appear individualised while being entirely reliant on the relationship with that agent.
Then there are high- and low-context cultures. Japan is a high-context culture with the concept of “kuuki wo yomu” (reading the air, or the room): the ability to understand unspoken social dynamics. Children who spend much of their time with an AI agent, however, may develop “context blindness”: excellent at processing explicit information but lousy at reading implicit social cues.
In low-context cultures, like many Western societies, where the emphasis is on explicit communication, kids might develop superb skills in precise, logical communication but lose the capacity for creative and emotionally nuanced expression.
Children evolved to learn within relatively small, stable social groups, where they develop the deep, contextual understanding of social dynamics that is crucial to healthy development. So again we see the issue: a child with access to vast amounts of information, but missing the crucial social learning.
Culture, however, is incredibly adaptive. It’s a survival system we chose because biological evolution is so much slower. It’s not like we can put the rabbit back in the hat. It’s out. So we need to figure this out and likely, we will. It will be messy. It always is. It will take time.
We may well see interesting hybrid forms of child and AI agent use emerge, where AI agents are integrated into education systems: courses that teach critical thinking and mental models at an earlier age, or programs that weave the sociocultural fabric of a given society into early and ongoing childhood development. I’m not an academic, so I’ll leave those ideas to educators.
We do have to consider AI agents and education systems. Much work on this is being done at the university level, less so in elementary education, but that will change. This may well be a time when we are seeing the emergence of what I’d call “techno-cultural speciation”: the development of fundamentally different human cognitive and social patterns arising from children’s use of AI agents at a young age.
So one day soon, “everyone will be on the spectrum.” What a time to be alive.