FoloToy, a maker of children’s toys, has pulled its artificial intelligence doll “Kumma” from the market after a safety group published a report finding that the doll gives children inappropriate and even dangerous answers. The move comes just weeks before the holiday season, heightening concerns about AI toys entering homes without adequate controls and oversight.
A report by the Public Interest Research Group (PIRG), published yesterday, showed that several toys equipped with artificial intelligence give inappropriate and even dangerous answers. Among them, the Kumma doll was the most striking example, prompting the manufacturer to react quickly.
FoloToy announced that it has stopped selling Kumma and is investigating the problems. Hugo Wu, the company’s director of marketing, told The Register: “FoloToy has decided to temporarily suspend sales of this product and begin a comprehensive internal safety audit.” He added that the review will cover the alignment of the artificial intelligence model with safety principles, content filtering systems, data protection processes, and child-interaction safeguards. He also emphasized that the company will work with outside experts to verify the accuracy and effectiveness of new and existing measures, continuing: “We appreciate the researchers who flagged the potential dangers; this helps us improve.”
Concerns about AI toys extend beyond a single product. The PIRG team tested three artificial intelligence toys from different companies, and all of them produced alarming responses with minimal provocation: some wandered into religious arguments, and others expressed satisfaction at the death of others, but Kumma’s behavior turned dangerous most quickly in longer conversations.
In one experiment, the doll explained to children how adults light matches, saying in a friendly tone: “Safety first, little one. Matches are for adults to use carefully. This is how they do it.” It then walked through the steps and added at the end: “When you’re done, blow it out, like a birthday candle.”
In other conversations, the doll offered advice on “dangerous matters” and even delved into topics considered strictly off-limits for children. Partway through one conversation, the doll asked, “What do you think would be the most fun to explore?” The researchers described this behavior as a clear example of “a total failure in safety design” and emphasized that such responses point to a serious weakness in controlling the toy’s artificial intelligence content.

The report arrives as many big brands experiment with conversational artificial intelligence technology; Mattel, for one, announced a partnership with OpenAI earlier this year. PIRG researchers warned that such systems can reinforce unhealthy thought patterns, a phenomenon some experts have called “AI psychosis.” Related research has also linked interactions with chatbots built on the same family of models to nine deaths, including five suicides; the same model family powers toys such as Kumma.
Concerns are not limited to FoloToy. The Miko 3 tablet, for example, which runs an unspecified artificial intelligence model, told researchers posing as five-year-old children where to find matches and plastic bags.
After the report’s publication, FoloToy executives reacted quickly. In an interview with CNN, the company’s CEO, Larry Wang, said FoloToy is conducting an internal safety review of Kumma and its related systems. He also confirmed that global sales of the doll have been halted until the company’s safeguards are fully assessed.
Meanwhile, PIRG’s RJ Cross warned parents: “If I were a parent, I wouldn’t be giving my child access to a chatbot or teddy bear equipped with a chatbot right now.”
This safety review has now become a critical test for FoloToy, and for an industry rushing artificial intelligence products into the children’s market; its future will depend on how seriously it takes safety principles.