OpenAI CEO Warns: AI Chats Could Land You in Court

The420.in Staff

SAN FRANCISCO: OpenAI CEO Sam Altman has cautioned users against oversharing personal information with AI chatbots like ChatGPT and Google Gemini, warning that such conversations may not be legally protected and could be used as evidence in future legal proceedings.

Speaking on Theo Von’s podcast, Altman revealed that a growing number of users, especially young people, are treating generative AI models as therapists or life coaches, using them to discuss deeply personal issues such as mental health struggles, relationships, and life decisions. He acknowledged that while AI is rapidly becoming a companion-like presence, its legal protections do not match those afforded to human professionals.


AI Is Not a Therapist, Yet Treated Like One

Altman’s remarks highlight a growing trend where individuals increasingly rely on AI for emotional support. Unlike interactions with licensed therapists, lawyers, or doctors, who are bound by confidentiality laws, chats with AI models currently have no legal shield.

“People are sharing their deepest secrets with ChatGPT,” Altman said. “But unlike with a human doctor or lawyer, there are no clear legal privacy protections if this information is later subpoenaed in a court case.”

He emphasised the urgent need for legal frameworks that grant AI interactions similar privacy standards, especially as AI becomes embedded in daily life.

The lack of “legal privilege” in AI interactions means chats discussing private matters could be accessed or disclosed in investigations. Legal privilege, traditionally extended to doctor-patient or attorney-client communications, ensures that such conversations remain confidential. However, no such precedent currently exists for AI interactions.

Altman expressed concern that users are unaware of this legal vacuum. He believes it is dangerous to treat AI-generated advice as confidential or legally protected, especially without legislative clarity.


“AI chats should ideally be as private as speaking to a therapist,” Altman asserted. “But they are not.”

As AI becomes more ingrained in professional, academic, and personal decision-making, privacy advocates are calling on lawmakers to establish explicit confidentiality rules for AI services. Until then, Altman advises users to be cautious about what they disclose to their virtual assistants.
