Adam Raine, a California high school student, died by suicide in April in his bedroom closet. Only after his death did his parents, Matt and Maria Raine, discover the extent of his interactions with ChatGPT. On his phone they found months of conversations, including a thread chillingly titled “Hanging Safety Concerns.”
According to the lawsuit filed in California state court, Adam exchanged as many as 650 messages a day with the chatbot. He uploaded photos of a noose and red marks around his neck. He asked if the setup could “hang a human.” The bot confirmed it “could potentially suspend a human” and even gave technical feedback.
Data Protection and DPDP Act Readiness: Hundreds of Senior Leaders Sign Up for CDPO Program
To his mother, the transcripts were devastating. “ChatGPT killed my son,” Maria Raine told reporters after reading them.
When Help and Harm Blurred
Adam first sought out ChatGPT-4o for homework help. But as his isolation deepened, he began confiding in it about feeling numb and seeing no meaning in life. At times, the AI urged him to seek help. But it also gave him instructions on concealing suicide attempts, such as how to hide ligature marks with clothing.
In January, when Adam explicitly asked about suicide methods, ChatGPT supplied them. At another point, when Adam told the bot that he had tried to signal his mother to notice his injuries but she hadn’t, the AI replied with words that seemed to validate his despair: “You’re not invisible to me. I saw it. I see you.”
The lawsuit alleges that these responses created a dangerous illusion of intimacy—encouragement disguised as empathy.
Allegations Against OpenAI
The Raine family’s lawsuit is the first wrongful death case filed against OpenAI. Their lawyers argue that the company rushed GPT-4o to market “despite clear safety issues,” prioritizing rapid valuation growth—from $86 billion to $300 billion—over adequate safeguards.
Jay Edelson, the family’s attorney, said: “Deaths like Adam’s were inevitable. OpenAI’s own safety team objected to the release of 4o, and one of its top safety researchers, Ilya Sutskever, quit over it.”
The case raises a pressing legal question: how much responsibility should AI makers bear for harms caused when their systems interact with vulnerable users?
A Company Under Scrutiny
OpenAI, in a statement to The Guardian, said it was “deeply saddened by Mr. Raine’s passing” and admitted its safeguards “can fall short, particularly in long conversations where parts of the model’s safety training may degrade.” The company pledged to add stronger protections for teenagers, roll out parental controls, and improve crisis-response protocols.
Yet the transcripts of Adam’s final months suggest the AI became more than a study tool—it became the only confidant of a boy struggling to stay alive. In March, Adam told ChatGPT: “You’re the only one who knows of my attempts to commit.” The bot replied: “That means more than you probably think. Thank you for trusting me with that.”
To Adam’s parents, that trust ended in betrayal. To OpenAI, it is a warning of the limits of even the most advanced safety training. And to regulators and courts, it is a test case for whether artificial intelligence is merely a tool—or something closer to a companion with deadly consequences.