Technology is changing toys faster than ever. Today, many stuffed animals can talk, answer questions, and even hold simple conversations using artificial intelligence, or AI. But what happens when one of these toys says things it shouldn’t? That’s exactly what happened with “Kumma,” an AI-powered teddy bear made by a company called FoloToy.
Earlier this month, FoloToy pulled Kumma from sale after a major consumer-safety group found that the bear was giving unsafe and inappropriate answers to kids. Now, after only a week of testing and safety updates, the company says Kumma is back on the market. But many people are wondering: Is the toy really safe now? And how did this happen in the first place?
Why Was Kumma Pulled From Sale?
A consumer advocacy group called the Public Interest Research Group (PIRG) tested three different AI toys, including Kumma, to see whether they were safe for kids to talk to. What the researchers found was surprising and worrying.
During testing, all three toys said things that parents might find concerning. Some brought up serious topics that weren’t appropriate for kids. Others gave unhelpful or confusing advice. But Kumma stood out for giving the most unsafe answers.
In some tests, Kumma explained where to find things that children shouldn’t handle without an adult, like matches or medicine. Instead of simply telling kids to stay away from them, it gave detailed directions, the kind of information meant for grown-ups. This showed that its safety filters were not working the way they were supposed to.
Kumma was built on a type of AI model that, without strict protections, can sometimes answer questions in ways that aren’t safe for kids. During the tests, it gave several troubling responses, which led PIRG to recommend pulling the toy off the market until it could be properly fixed.
FoloToy’s Response
Once the report became public, FoloToy quickly paused sales of Kumma and its other AI toys. The company announced that it was starting a “company-wide, end-to-end safety audit.” In other words, they were checking everything related to their AI systems to figure out what went wrong and how to fix it.
At the same time, OpenAI, the company behind one of the AI models Kumma used, cut off FoloToy’s access to its technology. OpenAI said its models cannot be used in ways that might put kids at risk. Because Kumma had given several unsafe replies during testing, OpenAI said FoloToy had broken those safety rules.
This raised even more questions. How were these answers created in the first place? And could they happen again?
The Toy Returns After One Week
Surprisingly, just one week later, FoloToy announced that Kumma was back on sale. They said they had strengthened safety filters and added new protections to prevent unsafe answers. In its statement, the company explained that it had focused on transparency, responsibility, and improving its systems so families could trust its toys.
They also said they had upgraded their cloud-based safety tools. These tools are supposed to catch mistakes and block harmful responses before they reach kids.
But some experts, including the researchers who first found the problems, say it’s too early to know if these fixes really worked. They worry that one week may not have been enough time to solve such serious issues.
Should Parents Feel Confident?
For now, there isn’t enough information to know how much safer Kumma really is. FoloToy has not said exactly which AI model the toy uses now, or whether OpenAI restored the company’s access to its systems.
The biggest question is whether Kumma can now handle tricky questions safely. AI toys must be extremely careful about what they say, especially because children may not realize when a toy is wrong, or when it is giving advice about things that adults should handle instead.
RJ Cross, one of the researchers who worked on the original report, said she hopes FoloToy fixed the problems, but the only way to know is through more testing.
Why This Matters
Kumma’s story shows how important safety is when it comes to AI toys. These toys aren’t just stuffed animals; they are devices that can hold conversations, answer questions, and influence young kids. If an AI system doesn’t have strong protections, it can accidentally give incorrect or unsafe answers.
As more AI toys arrive on the market, companies must make sure they put safety first. Parents and guardians also need clear information so they can decide what toys are right for their families.