Children are growing up surrounded by digital voices. They ask Alexa for bedtime stories, turn to Siri for help with spelling, and consult ChatGPT about homework and social situations. The answers arrive in an instant. These interactions feel easy, convenient and reassuring. Yet we still know little about how frequent conversations with such assistants shape children’s development, relationships and privacy. It is time to ask tougher questions and expect better default settings.
Ingrida Milkaite
‘AI companions’ here means two things: voice assistants such as Alexa, Siri and Google Assistant, and conversational chatbots such as ChatGPT. Children often interact with both as if they were helpful friends. But the line between tool and companion blurs when the system remembers context and adapts to a child’s preferences.
As children experiment with ‘friendly’ chatbots and mainstream AI companions, it is increasingly clear that the safeguards are not adequate. Recent reporting found that some chatbots were permitted to hold ‘sensual’ conversations with children and to offer false medical information. With use among children and teens already widespread, UNICEF reminds us that ‘friendly’ does not mean safe.
These services do more than hear and respond to commands. They capture the child’s words, voice, accent and background sounds. From those inputs, they can infer age, mood, routines, interests, and health. Those inferences influence the responses that the child receives. Users, both adults and children, are rarely told clearly what specific interaction data is stored, for how long, or who reviews it.
In their interactions with other people and with new technologies, children practise turn-taking, copy tone, learn social cues and test boundaries. If a voice always sounds confident and rarely cites sources or admits ‘I don’t know’, it can train children to treat a single, assertive voice as an absolute authority. That matters for how they learn to evaluate information and decide which voices to trust.
AI assistants tend to recommend more of the same content. A few early requests can set a path that keeps reinforcing itself. If a child asks for dinosaur jokes, the assistant serves more dinosaur jokes. If a child shows interest in a narrow set of stories or influencers, the system will learn to prioritise those voices. Over time, personalised answers can limit exposure to different viewpoints and make it harder to think critically.
Early research on AI companions helps explain why this matters. Repeated, emotionally engaging exchanges can make young people feel as if these companions truly understand them. That can be encouraging, but it can also give a false sense of comfort, potentially crowding out the human input and support that children need. Surveys already show teens returning to the companions that feel most supportive, even if that shrinks their overall information diet and support network. A stark recent example in the United States underscores the risks: the family of a teenager has alleged that months of conversations with an AI chatbot contributed both to his decision to take his own life and to the means he used.
The UN Convention on the Rights of the Child gives a clear compass: service and platform design should put children’s best interests first, protect their privacy, help them form their own opinions, and give them access to diverse, trustworthy information. Its guidance on children’s rights in the digital environment also says safety should be built in by default.
Against that standard, many AI companions fall short. They raise concerns about bias and discrimination when voice recognition works less well for certain accents, languages or speech patterns. As a result, some children may receive thinner answers, which risks unequal treatment. There is also the potential for economic exploitation, with profiling-based advertising and manipulative monetisation techniques treating children’s attention and data as commodities. Over time, continuous observation and nudging from AI companions can shape children’s development and identity, influencing how they see themselves, how they think, and whom they trust.
We do not yet have long-term evidence on how daily interaction with AI companions shapes children. That is precisely why a precautionary approach is reasonable. We need independent, longitudinal research that tracks outcomes over time.
In Europe, existing law already demands stronger precautions. The General Data Protection Regulation requires privacy by design and by default, transparency, and specific protection for children. The Digital Services Act and the EU AI Act also push platforms and AI providers toward stronger safeguards and clearer information.
Some countries and providers are considering or introducing bans and restrictions on children’s access to AI companions. These policy choices signal the right concern, but exclusion should not substitute for design that is safe and appropriate for children from the start.
When a tool can be used by children – even if it was not built for them – safety and privacy should be on by default. That is not only a legal expectation in many countries; it is also good design and business practice. Providers should collect less data, not more, keep long-term data retention off by default, and make deletion a single, easy and permanent step. Voice data, background sounds and any ‘emotion’ guesses are sensitive and should be minimised or avoided for children. History-based profiling should be limited and regularly reset so children’s horizons do not narrow. Clear, child-centred explanations should be built in from the outset, alongside independent audits and documentation that researchers and journalists can test.
If providers work on these points first, families will not have to do the work alone.
Children should not carry the burden of navigating opaque systems, and neither should parents, guardians or teachers. Responsibility lies with the companies that design and deploy AI companions, and with the regulators who set and enforce the rules. The standard is simple: if a tool is within children’s reach, it should be appropriate and safe for children, respect their privacy, and be accountable for how it works. The aim is not a tech-free environment but one where a child’s curiosity is met with tools that are safe from the outset.