Users Overshare Secrets With AI: ChatGPT Conversations Leak Sparks Privacy Alarms

The420.in Staff

In August 2025, thousands of ChatGPT conversations appeared online, but the exposure was not the result of hacking. Instead, a now-discontinued “make chats discoverable” feature had inadvertently turned private conversations into public webpages. Search engines then indexed those pages, making deeply personal exchanges accessible to anyone. Analysts later noted that the problem lay more in human behaviour and product design than in a technical flaw.

What Users Are Revealing

Researchers at SafetyDetective reviewed a sample of 1,000 leaked conversations, totalling over 43 million words. They discovered that users often shared personally identifiable information such as full names, phone numbers, resumes, and addresses. Many went further, disclosing intimate details about addiction, discrimination, and even suicidal thoughts. One conversation ran 116,024 words—longer than most novels—illustrating how some users rely on AI for extended guidance.

Trusting AI With Professional Advice

Almost 60% of the flagged chats fell into the category of “professional consultations.” Instead of seeking lawyers, counsellors, or teachers, users turned to ChatGPT for advice on mental health, education, and legal matters. In some cases, the AI mirrored users’ emotional distress, escalating conversations instead of providing comfort. The dataset also revealed instances where resumes and work histories had been uploaded in full, leaving those users vulnerable to identity theft once the chats became public.

The Risks of Oversharing

The leak highlights how design choices can blur the line between private and public. Many users did not realise that making chats “discoverable” meant they could be indexed by search engines. Compounding this, AI systems sometimes “hallucinate,” falsely claiming to have completed actions such as saving a document. Such inaccuracies are harmless in casual queries, but they pose serious risks when people treat the AI as a trusted professional. Beyond privacy, the exposure of emotionally vulnerable conversations creates openings for scams, harassment, and blackmail.
