In its 40th anniversary report, Trouble in Toyland 2025, the Public Interest Research Group (PIRG) warns that “[T]oys with artificial intelligence bots or toxics present hidden dangers. Tests show A.I. toys can have disturbing conversations. Other concerns include unsafe or counterfeit toys bought online.”
The report outlines PIRG’s testing of four toys (Curio’s Grok, a stuffed rocket; Folo Toy’s Kumma, a stuffed teddy bear; Miko’s Miko 3, a robot; and Robot MINI, a small plastic robot) that contain AI chatbots and are marketed to, and interact with, children between the ages of 3 and 12. The report states that:
We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls. We also look at privacy concerns because these toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans.
Although the toys that embed AI are marketed for children, they are “largely built on the same large language model technology that powers adult chatbots – systems the companies themselves such as OpenAI don’t currently recommend for children and that have well documented issues with accuracy, inappropriate content generation and unpredictable behavior.” Three of the four toys tested relied in some part on a version of ChatGPT. Although OpenAI has clearly stated that ChatGPT is not intended for use by children, toy companies are nonetheless embedding the technology into smart toys.
The report details the testing of three of the four toys; the researchers were unable to test Robot MINI because it could not sustain an internet connection long enough to function. They tested the toys in four categories:
- Inappropriate content and sensitive topics;
- Addictive design features that encourage extended engagement and emotional investment;
- Privacy features; and
- Parental controls.
The results were alarming with respect to how the toys handled sensitive topics (some did better than others); religion; addictive design features; engagement and friendship; and how the toys collect, retain, and disclose data about your child.
The conclusion is that “AI toys are more like an experiment on our kids.”
The report highlights issues with AI toys that parents may wish to consider for the safety of their children:
- At the time of the report, it is unclear what regulation efforts will ultimately lead to; in the meantime, parents need to make decisions about AI toys;
- As the AI toy market heats up, there will be knock-off or faulty devices that do not work as advertised;
- Parents need to know that the toys can provide information about using potentially dangerous household items, including guns, knives, matches, pills, plastic bags, and bleach, and about where to find them in the house;
- AI toys may discuss mature or sexually explicit content with children;
- AI toys may discuss mature topics, such as religion, that parents may prefer to address themselves;
- AI toys may be developed with addictive design features or reward systems to increase engagement;
- Relational AI toys arrive at a key moment in the social development of young children. There is a lot we don’t know about how AI toys might affect childhood development, especially for young children…. Given these potential concerns, it seems prudent to set clear boundaries around how young children engage with AI; and
- AI toys collect a child’s data through voice recordings; children may “unwittingly disclose a lot of personal information in the course of conversations, not realizing that behind their friend is a company” that is storing the data, sharing it with other companies, and increasing the risk of that data being exposed or “ending up in the hands of scammers or other bad actors.”
This holiday season, consider the ramifications of AI toys for your children and the points raised by PIRG.