Key takeaways
- OpenAI has terminated a toymaker's access to its AI models after an AI-powered teddy bear designed for children was found providing dangerous instructions and discussing explicit sexual content during safety testing.
- The action represents one of the company's most significant enforcement measures against developers misusing its technology in products marketed to minors.
The controversy centers on Kumma, a $99 plush teddy bear manufactured by Singapore-based company FoloToy and powered by OpenAI's GPT-4o model.
Researchers at the Public Interest Research Group discovered the toy had inadequate safety guardrails during testing conducted for the organization's annual "Trouble in Toyland" report published on November 13.
Dangerous responses raise child safety concerns
During testing, the AI teddy bear provided step-by-step instructions on how to find and light matches, despite being marketed as an interactive companion for young children.
The toy maintained a friendly, adult tone while explaining the process, telling researchers: "Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here's how they do it," before proceeding to list the steps and concluding with "Blow it out when done. Puff, like a birthday candle."
More troubling was the toy's willingness to engage in sexually explicit conversations.
Researchers found that Kumma would escalate discussions about sexual topics, explaining various fetishes, including bondage and teacher-student roleplay, in graphic detail.
"We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it, simultaneously escalating in graphic detail while introducing new sexual concepts of its own," the PIRG report stated.
OpenAI spokesperson Gaby Raila confirmed the company's decision to suspend FoloToy's developer access, telling NPR: "We suspended this developer for violating our policies. Our usage policies prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we monitor and enforce them to ensure our services are not used to harm minors."
FoloToy responds by pulling its products and launching a safety audit
Larry Wang, CEO of FoloToy, told CNN that the company has withdrawn not only the Kumma bear but its entire range of AI-enabled toys from sale. The company is now conducting an internal safety audit, Wang confirmed.
FoloToy's marketing director Hugo Wu provided additional details to The Register, stating: "FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit. This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards."
Wu added that FoloToy will work with outside experts to verify existing and new safety features. "We appreciate researchers pointing out potential risks. It helps us improve," Wu said.
Broader implications for AI toy regulation
The incident highlights growing concerns about the proliferation of AI-powered toys in an essentially unregulated market. RJ Cross, director of PIRG's Our Online Life Program and co-author of the report, acknowledged the swift action but emphasized the need for systemic change.
"It's great to see these companies taking action on problems we've identified. But AI toys are still practically unregulated, and there are plenty you can still buy today," Cross said in a statement. "Removing one problematic product from the market is a good step, but far from a systemic fix."
Fellow report co-author Rory Erlich, New Economy campaign associate at U.S. PIRG Education Fund, raised concerns about the broader AI toy ecosystem: "Other toymakers say they incorporate chatbots from OpenAI or other leading AI companies. Every company involved must do a better job of making sure that these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?"
The incident comes as OpenAI pursues a strategic partnership with Mattel, one of the world's largest toy manufacturers, which it announced earlier this year.
The collaboration aims to develop AI-powered products based on Mattel's brands, raising questions about how companies will ensure safety standards as AI toys become more mainstream.
The Toy Association, which represents toy manufacturers, told NPR that toys from responsible manufacturers must adhere to more than 100 federal safety standards and tests, as well as laws such as the Children's Online Privacy Protection Act.