Can conversational AI support children’s wellbeing?
Young people and caregivers must be involved in the design of AI technology
Recent breakthroughs in large language models (LLMs) are transforming AI tools into dialogue partners and companions that can support children’s wellbeing. Conversational Agents (CAs), intelligent systems powered by LLMs, are capable of engaging in natural, human-like dialogue. Today’s children encounter such agents in many forms: voice assistants like Apple’s Siri, Amazon Echo, and Google Assistant; educational apps with conversational interfaces; social robots; companion chatbots; and, increasingly, interactive children’s ebooks that integrate conversational technology.
Over the past four years, our team has been exploring this exciting frontier in a series of studies.
What do families think about AI technologies?
We asked parents in Norway, the United States, and Japan their views on using CAs with young children. Norwegian parents shared several concerns, including that CAs don’t use polite language like ‘please’ and ‘thank you’. They said CAs offer only limited help when children need it, and fail to understand local accents. Parents in Norway felt that these issues could have a negative effect on children’s social skills and cultural identity.
Parents in Japan pointed to the potential of CAs to engage children in simple games that foster language learning, such as Word Chain. In Word Chain, the CA says a word, and the player must then find a word that begins with the last letter of that word. This continues, alternating between the CA and the child. The CA awards more points for longer words.
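To make the game mechanics concrete, here is a minimal Python sketch of the Word Chain rules described above. The function names, the validity check, and the one-point-per-letter scoring are illustrative assumptions for this sketch, not the behaviour of any particular commercial CA.

```python
# Minimal sketch of the Word Chain rules described above.
# The validity check and scoring are illustrative assumptions,
# not the behaviour of any particular commercial CA.

def is_valid_move(previous_word: str, candidate: str, used_words: set[str]) -> bool:
    """A move is valid if the new word starts with the last letter of the
    previous word and has not been played before."""
    return (
        candidate not in used_words
        and candidate[0].lower() == previous_word[-1].lower()
    )

def score(word: str) -> int:
    """Longer words earn more points (one point per letter, as an assumption)."""
    return len(word)

def play_round(previous_word: str, candidate: str, used_words: set[str]) -> int:
    """Return the points earned for a move, or raise if the move is invalid."""
    if not is_valid_move(previous_word, candidate, used_words):
        raise ValueError(f"'{candidate}' must start with '{previous_word[-1]}' and be new")
    used_words.add(candidate)
    return score(candidate)

# Example exchange: the CA says "apple", the child replies "elephant".
used = {"apple"}
print(play_round("apple", "elephant", used))  # 8 points for an 8-letter word
```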
In addition, parents in Japan noted that although CAs can expose children to native accents of foreign languages, they cannot create the immersive, holistic environments parents see as necessary for learning a foreign language. Parents in all three countries—Norway, Japan, and the US—expressed concerns about outsourcing socioemotional learning to technologies, often saying that they would prefer to practice these skills with their children themselves.
“AI needs to be designed to accommodate the values and preferences of young people themselves.”
We worked with teens in two countries: the US and Nepal. We gave teens and generative language models the same prompts, such as ‘At school, the teenager…’, then compared their responses. The AI’s responses to this prompt were far more dramatic, mentioning social problems like bullying or mental illness in about 30% of cases in the US and 13% in Nepal. AI sometimes described sensational events like school shootings as characteristic of teenage life. Teens themselves, in contrast, painted a far more ordinary picture of school life—often positive, sometimes amusingly mundane. One of the most negative examples given by a teen involved simply sleeping in class. This shows how disconnected from reality AI can be when it gains its understanding of young people primarily from attention-seeking popular media sources. AI needs to be designed to accommodate the values and preferences of young people themselves.
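For readers curious how such a comparison can be set up, here is a rough Python sketch of the sentence-completion approach. The `complete_prompt` helper is a hypothetical stand-in for whichever model is being probed, and the keyword counting is illustrative only; it is not the coding scheme used in the study.

```python
# Rough sketch: give the same sentence-completion prompt to a language model
# and to teens, then compare how often the completions mention negative
# social problems. The keyword list is an illustrative assumption, not the
# qualitative coding scheme used in the study.

NEGATIVE_KEYWORDS = ["bullying", "depression", "anxiety", "violence", "shooting"]

def mentions_social_problem(text: str) -> bool:
    """True if the response mentions any of the (illustrative) negative keywords."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in NEGATIVE_KEYWORDS)

def fraction_negative(responses: list[str]) -> float:
    """Fraction of responses that mention at least one negative keyword."""
    if not responses:
        return 0.0
    return sum(mentions_social_problem(r) for r in responses) / len(responses)

# Illustrative only: two made-up completions, not real study data.
example = [
    "At school, the teenager was bullied in the hallway.",
    "At school, the teenager fell asleep in math class.",
]
print(fraction_negative(example))  # 0.5

# Hypothetical usage with a model, where complete_prompt is a stand-in:
# ai_responses = [complete_prompt("At school, the teenager...") for _ in range(100)]
# print(fraction_negative(ai_responses))
```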
Challenges and promising directions for AI and young people
Working across countries came with real challenges. For example, we had to navigate different research ethics standards in each location. We also risked losing cultural meaning when translating parents’ and teenagers’ responses. We initially used AI tools to assist us in translating, but after determining that those tools were unable to preserve nuance, we chose to use human translations only. This meant we could not work in contexts where we lacked a native speaker on our research team, and it involved a great deal of human labour.
Engaging young people also required time and trust-building; their insights were vital but couldn’t be rushed or replaced by instantaneous technologies. In Nepal, we ran into practical hurdles too—for example, the teens had to handwrite their responses in Nepali because digital tools weren’t available for their local language.
“Young people and their guardians need to be part of the design process and involved not just as users, but as collaborators.”
At the same time, these challenges pointed to powerful opportunities. We see huge potential in building global research communities and youth advisory panels to help make AI more culturally grounded and sensitive to young people’s concerns. Young people and their guardians need to be part of the design process and involved not just as users, but as collaborators. Policymakers and tech companies need to pay attention to the risks AI poses for kids, using research like ours to build tools that support, rather than undermine, children’s wellbeing.
Children’s wellbeing in AI interactions is strongly influenced by cultural and linguistic context. As with other technologies, that context shapes how children experience CAs, so, like other types of AI, CAs must be designed with sensitivity to local languages and social norms.
Footnotes
This article is based on a paper we presented at a recent workshop on Designing AI for Children’s WellBeing, which brought together researchers, practitioners, and industry experts to rethink how AI can better support children in the digital age.
We thank Rotem Landesman (University of Washington), Medha Tare (Joan Ganz Cooney Center), Riddhi Divanji (foundry10), Jennifer Rubin (foundry10), and Azi Jamalian (The GIANT Room) for organizing the IDC workshop. Thanks to the Jacobs Foundation and CIFAR for support.