SmartHackly
AI Toys Safety: Why Advocacy Groups Urge Parents Not to Buy Them

November 20, 2025
in News, Robotics

Are AI Toys Safe for Kids?

AI toy safety is now a major concern, as children’s and consumer advocacy groups urge parents not to buy AI-powered toys this holiday season.

These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI’s ChatGPT, according to an advisory published Thursday by the children’s advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay said.

AI toys, made by companies including Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but disrupt children’s relationships and resilience, the group said.

“What’s different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters,” said Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program. Because of this, she added, the trust young children are placing in these toys can exacerbate the types of harms older children are already experiencing with AI chatbots.

A separate report Thursday by Common Sense Media and psychiatrists at Stanford University’s medical school warned teenagers against using popular AI chatbots as therapists.

Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for years. They just weren’t as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel’s talking Hello Barbie doll that it said was recording and analyzing children’s conversations.

This time, though AI toys are mostly sold online and more popular in Asia than elsewhere, Franz said some have started to appear on store shelves in the U.S. and more could be on the way.

“Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products,” Franz said.

It’s the second big seasonal warning against AI toys since consumer advocates at U.S. PIRG last week called out the trend in its annual “Trouble in Toyland” report, which typically looks at a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization tested four toys that use AI chatbots.

“We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls,” the report said. One of the toys, a teddy bear made by Singapore-based FoloToy, was later withdrawn, its CEO told CNN this week.

Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, said young children don’t have the conceptual tools to understand what an AI companion is. While kids have always bonded with toys through imaginative play, when they do this they use their imagination to create both sides of a pretend conversation, “practicing creativity, language, and problem-solving,” she said.

“An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent — but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds,” Suskind said.

Beijing-based Keyi, maker of an AI “petbot” called Loona, didn’t return requests for comment this week, but other AI toymakers sought to highlight their child safety protections.

California-based Curio Interactive makes stuffed toys, like Gabbo and rocket-shaped Grok, that have been promoted by the pop singer Grimes. The company said it has “meticulously designed” guardrails to protect children and the company encourages parents to “monitor conversations, track insights, and choose the controls that work best for their family.”

In response to the earlier PIRG findings, Curio said it is “actively working with our team to address any concerns, while continuously overseeing content and interactions to ensure a safe and enjoyable experience for children.”

Another company, Miko, based in Mumbai, India, said it uses its own conversational AI model rather than relying on general large language model systems such as ChatGPT in order to make its product — an interactive AI robot — safe for children.

“We are always expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and caregivers to identify specific topics they’d like to restrict from conversation. We will continue to invest in setting the highest standards for safe, secure and responsible AI integration for Miko products.”

Miko’s products are sold by major retailers such as Walmart and Costco and have been promoted by the families of social media “kidfluencers” whose YouTube videos have millions of views. On its website, it markets its robots as “Artificial Intelligence. Genuine friendship.”

Ritvik Sharma, the company’s senior vice president of growth, said Miko actually “encourages kids to interact more with their friends, to interact more with the peers, with the family members etc. It’s not made for them to feel attached to the device only.”

Still, Suskind and children’s advocates say analog toys are a better bet for the holidays.

“Kids need lots of real human interaction. Play should support that, not take its place. The biggest thing to consider isn’t only what the toy does; it’s what it replaces. A simple block set or a teddy bear that doesn’t talk back forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them,” she said. “Here’s the brutal irony: when parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible.”

Frequently Asked Questions (FAQ)

1. What are AI toys?
AI toys are products that use artificial intelligence to interact with children, often promising educational benefits and companionship.
2. Are AI toys safe for kids?
Advocacy groups warn that AI toys can pose safety risks, including exposure to inappropriate content and the potential for emotional harm.
3. What are the concerns about AI toys?
Concerns include excessive use, risks of unsafe behaviors, and a disruption of essential human relationships and creative play in young children.
4. How are companies addressing safety concerns?
Some companies claim to implement safety features and parental controls, but critics argue that these measures are often insufficient.
5. What alternatives are recommended for children’s play?
Experts suggest that traditional toys that encourage imaginative play are safer and promote better developmental outcomes.
Tags: AI Toys Safety, ChatGPT Risks, Child Development, Consumer Warning, Fairplay, Miko, U.S. PIRG
Aymen Dev

Aymen Dev is a Software Engineer and Tech Market Analyst with a passion for covering the latest AI news. He bridges the gap between code and capital, combining hands-on software testing with financial analysis of the tech giants. On SmartHackly, he delivers breaking AI updates, practical coding tutorials, and deep market strategy insights.


© 2025 SmartHackly - Your source for AI tools, automation insights, and tech innovation. All rights reserved.
