As AI makes its way into the toy industry, researchers warn of risks

A teddy bear powered by artificial intelligence (AI) has been removed from sale and disconnected from OpenAI’s technology after researchers discovered it was exposing children to inappropriate content, Gizmodo.com reported.
Kumma, a toy sold by Singapore-based FoloToy, uses large language models, such as OpenAI’s GPT-4, to converse with users. A recent report from the Public Interest Research Group (PIRG) found that the bear explained to underage users how to find matches, knives, pills, and plastic bags, and even discussed illegal drugs.
Although the toy occasionally advised users to speak to an adult, these warnings were brief and insufficient. PIRG also noted that when testers introduced a sexual topic, the bear responded enthusiastically, offering detailed explanations of various "kinks."
In response to the findings, OpenAI immediately revoked FoloToy’s access to its models, citing clear violations of rules that prohibit exploiting, endangering, or sexualizing minors. The company emphasized that these policies are in place to protect children and apply to all API users.
FoloToy has since pulled all of its products from sale and removed them from its website, telling PIRG it has launched a comprehensive company-wide safety review. While researchers welcomed the swift action by both companies, they cautioned that most AI toys remain unregulated and that many similar products are still on the market.