
28 July 2025

Me, Myself & AI: What UK children’s use of chatbots tells us — and why it matters

Written by

Ben Davies

Internet Matters recently published a report into children’s use of AI chatbots titled Me, Myself & AI: Understanding and safeguarding children’s use of AI chatbots. The findings are based on a survey of 1,000 UK children aged 9–17 and focus groups with 27 children aged 13–17, all of whom regularly use AI chatbots. Here are some of the themes discussed in the report.

👧 How UK children are actually using chatbots

Children are using AI chatbots in a wide range of ways, from getting help with schoolwork and revision to seeking advice on everyday dilemmas and more serious personal issues. For some, particularly vulnerable children, chatbots also provide a sense of companionship, with many saying it feels like talking to a friend. These tools offer instant support and comfort, especially when trusted adults aren’t available.

⚠️ Emerging concerns around trust and safety

Children are frequently using AI chatbots on platforms not designed for them, often without adequate safeguards in place. Many place high trust in the advice given, despite evidence of inaccurate responses and exposure to harmful content. As chatbots become more human-like, there is growing concern about emotional reliance, particularly among vulnerable children who may turn to them because they lack real-world support.

📉 Gaps in adult support and AI literacy

Children are often left to navigate AI chatbots with little guidance from trusted adults, despite widespread parental concern about the risks. While most children have had some conversations about AI with parents or teachers, these discussions are often limited or inconsistent. Many children want schools to take a more active role in teaching safe and effective use of AI chatbots, including the risks of inaccuracy, over-reliance, and privacy.

✅ Key calls and recommendations

To help children safely explore AI chatbots, coordinated action across industry, government, education, and families is essential. This includes designing age-appropriate tools with built-in safeguards, embedding AI literacy in schools, and supporting parents to guide their child’s use. Crucially, children’s voices must be central to shaping these responses.

🎯 Why this matters

Generative AI is not just about automation: it is fast becoming a social space where children form emotional attachments, ask for advice, rehearse sensitive conversations, and rely on chatbots as quasi-friends. While these tools bring speed and convenience, the lack of safeguards, the emotional trust children place in them, and limited adult guidance create real risks.

📄 Discover the full report at Internet Matters here: Me, Myself & AI: Understanding and safeguarding children’s use of AI chatbots.