The Empathy Gap of AI Chatbots and Their Danger to Our Children

A new study proposes a framework for “child-safe AI” after incidents revealed that children often see chatbots as trustworthy, even though chatbots show signs of an “empathy gap” that could put young users at risk. The study argues there is an urgent need to design AI with children in mind.

The study, conducted by University of Cambridge academic Dr Nomisha Kurian, calls on developers to prioritize approaches to AI design that address children’s needs. It demonstrates that children are particularly susceptible to treating chatbots as lifelike, quasi-human confidantes.

There are recent cases in which interactions with AI put young users at risk, including when Amazon’s AI voice assistant, Alexa, told a child to touch a live electrical plug with a coin, and when Snapchat’s My AI gave researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.

The study offers a 28-item framework to help companies, teachers, school leaders, parents, developers, and policy actors think through how to keep younger users safe when they “talk” to AI chatbots. Published in the journal Learning, Media and Technology, the study argues that AI’s huge potential demands responsible innovation.

The study analyzed these cases using insights from computer science about how the large language models (LLMs) behind conversational generative AI function, alongside evidence about children’s cognitive, social, and emotional development. LLMs use statistical probability to mimic language patterns without necessarily understanding them. Although chatbots have remarkable language abilities, they may have particular trouble responding to children, who are more inclined than adults to confide sensitive personal information.

Children’s chatbot use is often informal and poorly monitored. Common Sense Media found that 50% of students aged 12-18 have used ChatGPT for school, but only 26% of parents are aware that they have done so.
