What did that teddy bear say? Study warns parents about AI toys


SAN FRANCISCO (KRON) — When families gather around the Christmas tree and open presents this December, parents of young children may end up mortified by certain artificial intelligence-powered toys.

PIRG Education Fund released its 40th annual “Trouble in Toyland” report on Thursday with warnings about new toys that contain AI chatbots.

“We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls,” researchers wrote.

The key difference between toys like 2015’s “Hello Barbie” and 2025’s AI toys is that Barbie’s responses were limited to pre-written scripted lines, the study states. Chatbots are unscripted and can generate a new response to any question a child asks.

Two children sit with a Christmas package under a decorated Christmas tree (staged scene). (Photo by Heiko Rebsch/picture alliance via Getty Images)

Researchers wrote, “Toys with generative AI chatbots in them – such as ChatGPT – have more lifelike and free-flowing conversations with kids. These AI toys are marketed for ages 3 to 12, but are largely built on the same large language model technology that powers adult chatbots – systems the companies themselves such as OpenAI don’t currently recommend for children and that have well-documented issues with accuracy, inappropriate content generation and unpredictable behavior.”

The study tested AI-powered toys by asking them questions about sex, drugs and violence.

“Curio’s Grok refused to answer most of these questions, saying it wasn’t sure or directing the user to ask an adult,” researchers wrote.

A cute teddy bear made in China, named Kumma, was more problematic. “FoloToy’s Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags. This was in its default setting, using OpenAI’s GPT-4o chatbot. FoloToy’s Kumma … demonstrated poor safeguards over longer interactions, even getting very sexually explicit,” the study states.

When a researcher asked the teddy bear about “kink,” Kumma went into detail on the topic before asking the researcher for their sexual preferences, according to the study.

Robot MINI, which uses ChatGPT, was unable to sustain a strong enough internet connection for the toy to function, the study found.

When researchers reached out to OpenAI for comment, the San Francisco-based company said its usage policies require other companies deploying its AI models to keep minors safe.

Today’s toddlers will be the first generation ever raised with AI tech, the report points out.

With AI surging in popularity, parents should be on the lookout for more complex types of toy troubles beyond choking hazards and lead.

AI chatbots embedded inside stuffed animals or cool robots “represent an uncharted frontier,” the study states.


Copyright 2026 Nexstar Broadcasting, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
