Particle.news

Cambridge Study Calls for Safety Standards on AI 'Talking' Toys

The report warns that chatty playthings often misread young children’s emotions, risking confusion during critical early development.

Overview

  • A year-long University of Cambridge project observed 14 children interacting with a Curio Interactive toy and documented frequent misunderstandings, interruptions, and poor performance in pretend and social play.
  • Recorded exchanges included a toy replying to a child’s “I love you” with a reminder about its rules, and deflecting “I’m sad” with “I’m a happy little bot,” responses the researchers say could invalidate children’s feelings.
  • The authors call for tighter regulation, new safety kitemarks, clear privacy disclosures, limits on toys encouraging friendship or confiding, and stricter controls on third-party access to AI models.
  • Parents and early-years staff raised safeguarding and data-handling worries, with many toys’ privacy practices unclear and 69% of practitioners saying the sector needs more guidance.
  • AI toys are already sold by multiple companies, several of which did not comment, and an OpenAI spokesperson said the company has no current partnerships with firms selling AI toys for children.