AI Toy Turns Spy: Massive Data Leak Exposes Private Conversations of Over 50,000 Children

The420.in Staff

In a disturbing reminder of the privacy risks surrounding AI-powered gadgets, a major security lapse at AI toy maker Bondu has exposed the private conversations and personal data of more than 50,000 children. The sensitive information was left openly accessible on the internet without any authentication safeguards.

The breach involved Bondu’s popular talking dinosaur toy designed for children, which uses artificial intelligence to interact conversationally with users. Security researchers found that the backend system storing user data had been left completely unsecured, allowing anyone with a basic Google account to access highly sensitive information without passwords, hacking tools or technical expertise.

What Data Was Exposed

According to cybersecurity experts who examined the system, the exposed data included children’s real names, dates of birth, family-related details, and thousands of recorded conversations between children and the AI-enabled toy. The information was stored in a cloud database that lacked even basic access controls.

In several cases, the data went far beyond generic interaction logs. It reportedly included children’s favourite foods, hobbies, dance routines, nicknames given to the toy, and even behavioural goals set by parents within the app ecosystem. Experts warned that such granular personal profiling significantly increases the risk of misuse.

How the Breach Was Discovered

The issue came to light after a cybersecurity researcher decided to examine the toy’s digital infrastructure following a casual conversation with a neighbour who had purchased the AI dinosaur for her children. During the inspection, the researcher found that the database could be accessed simply by logging in through a standard Google account.

No password protection, encryption barriers or verification layers were in place. Once inside, the researcher was reportedly able to browse through tens of thousands of children’s records in plain view.
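
The report does not name Bondu’s backend provider, but the pattern it describes, where any signed-in Google account could read every record, is a classic case of treating authentication as authorization. The sketch below is purely illustrative (the function names and data model are invented for this example) and contrasts that failure mode with a check that scopes each record to the parent account that owns it.

```python
# Hypothetical sketch -- Bondu's real backend is not public. It illustrates the
# gap the researcher described: any valid Google sign-in was enough to read any
# child's records, because ownership was never checked.

from dataclasses import dataclass


@dataclass
class Request:
    user_id: str | None        # identity from a verified Google sign-in, if any
    child_record_owner: str    # the parent account the requested record belongs to


def insecure_can_read(req: Request) -> bool:
    # The described flaw: "authenticated" is treated as "authorized".
    # Anyone signed in with any Google account passes this check.
    return req.user_id is not None


def scoped_can_read(req: Request) -> bool:
    # Minimal fix: the requester must be the parent account that owns the record.
    return req.user_id is not None and req.user_id == req.child_record_owner


if __name__ == "__main__":
    stranger = Request(user_id="random-google-user", child_record_owner="parent-123")
    parent = Request(user_id="parent-123", child_record_owner="parent-123")

    print(insecure_can_read(stranger))  # True  -- the exposure pattern
    print(scoped_can_read(stranger))    # False -- denied without ownership
    print(scoped_can_read(parent))      # True
```

Conceptually the fix is a one-line ownership check; the practical difficulty is ensuring every endpoint and database rule actually applies it.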

Cybersecurity specialists described the discovery as “deeply alarming,” noting that such lapses are especially dangerous when products are designed specifically for minors.

Risk of Exploitation

Security analysts warned that the exposed data could be exploited for social engineering, grooming or even kidnapping attempts. With detailed knowledge of a child’s likes, habits and family environment, malicious actors could more easily gain a child’s trust.

Experts pointed out that AI toys typically build behavioural profiles to personalise interactions. While this improves engagement, it also creates a detailed digital footprint of a child’s personality — a serious liability if leaked.

“This kind of dataset is a goldmine for anyone looking to manipulate or harm children,” one expert noted, calling the lapse “a worst-case scenario for child safety online.”

Company Response and Technical Concerns

Bondu’s founder, Fatin Anam Rafeed, said the company moved quickly to fix the issue within hours of being alerted. However, cybersecurity professionals argue that the problem reflects deeper structural weaknesses in how AI-powered consumer products are being developed.

Investigators believe the toy’s software may have been built using automated AI coding tools, which can speed up development but often overlook security best practices if not rigorously audited.

Adding to the concern, the toy reportedly relies on advanced large language model systems, including Google Gemini and OpenAI’s GPT-series technology, meaning children’s conversations are transmitted and processed through external cloud-based AI infrastructures.
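
Bondu’s actual integration is not public, so the following is only a generic sketch of how a conversational toy typically relays speech to a hosted model, assuming OpenAI’s Python SDK and a GPT-series model as one example of the external infrastructure mentioned above. The system prompt, model name and function are illustrative assumptions, not the company’s code.

```python
# Generic sketch only -- not Bondu's implementation. It shows why a conversational
# toy's backend necessarily ships each utterance to an external LLM provider
# (here OpenAI's hosted API is assumed), which is what widens the data trail.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def reply_to_child(utterance: str) -> str:
    """Relay one transcribed utterance to a hosted model and return the toy's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the article only says "GPT-series"
        messages=[
            {"role": "system", "content": "You are a friendly talking dinosaur toy."},
            {"role": "user", "content": utterance},
        ],
    )
    return response.choices[0].message.content or ""


if __name__ == "__main__":
    print(reply_to_child("What's your favourite snack?"))
```

The specific provider matters less than the structural point: every exchange with the toy leaves the home, passes through third-party servers and, as this incident shows, may end up logged in a database whose security the parent never sees.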

Marketing Claims Under Scrutiny

Bondu had previously claimed that its AI toy was “completely safe” for children. The company had even announced a $500 reward for anyone who could make the toy say inappropriate or offensive things.

Experts, however, dismissed the challenge as misplaced. “The real danger isn’t what the toy says,” analysts argued. “The real threat is that children’s most private data was left wide open on the internet.”

A Wake-Up Call for Parents and Regulators

The exposure of data belonging to over 50,000 children has sparked renewed calls for stricter regulation of AI-powered toys and children’s tech products. Industry observers say the incident highlights the urgent need for enforceable data protection standards, independent security audits and stronger accountability for companies handling minors’ information.

Until clear safeguards are in place, experts advise parents to carefully evaluate whether the convenience and novelty of AI-enabled toys are worth the potential risks to their children’s privacy and safety.

The Bondu incident, they say, should serve as a warning for the entire smart toy industry in an age where artificial intelligence is rapidly entering children’s bedrooms without sufficient oversight.

About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
