A 60-year-old man was hospitalised for three weeks after following diet guidance from an AI chatbot that recommended sodium bromide as a substitute for table salt. His diagnosis: bromism, a neurological and psychiatric condition that had all but disappeared by the late 20th century.
The case, documented in the Annals of Internal Medicine on August 5, highlights a rare but alarming risk of unvetted, AI-generated health advice. The patient, who had no prior psychiatric history, consumed sodium bromide (more commonly used in swimming pool sanitisation) for three months, driving his bromide levels above 1,700 mg/L, more than 200 times the safe limit. His symptoms included paranoia (he believed his neighbour was poisoning him), intense visual and auditory hallucinations, fatigue, insomnia, facial acne, loss of coordination, and excessive thirst.
Outdated Illness in Modern Times
Bromism was prevalent in the late 1800s and early 1900s, when bromide salts were widely used as sedatives. By the late 20th century, the FDA had phased bromide out of over-the-counter products, making contemporary cases exceptionally rare. This episode, a modern resurfacing of a medical relic, underscores not just a bizarre clinical twist but also the perils of AI dispensing unchecked clinical advice.
Upon presentation to the hospital, the patient's lab results, marked by apparent hyperchloremia and a negative anion gap, led doctors to suspect bromide toxicity: standard chloride assays read bromide ions as chloride, inflating the measured chloride and pushing the calculated anion gap below zero. Treatment with IV fluids and electrolyte correction reversed the delirium and halted the psychotic symptoms. He was discharged, medication-free, after two weeks of recovery.
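To see why a negative gap is such a red flag, consider the arithmetic with purely illustrative numbers (not values from the case report): the anion gap is sodium minus the sum of chloride and bicarbonate, and normally sits around 8 to 12 mEq/L. With sodium at 140 mEq/L, bicarbonate at 25 mEq/L, and a bromide-inflated chloride reading of 150 mEq/L, the gap works out to 140 − (150 + 25) = −35 mEq/L, a physiologically implausible result that points to assay interference rather than a genuine electrolyte disturbance.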
AI Without Context Can Be Dangerously Misleading
Researchers later posed the same dietary query to the AI (ChatGPT 3.5) and again received sodium bromide as a suggestion. While the chatbot included contextual caveats, it did not give a clear warning about toxicity or ask why the user wanted the information, steps a human healthcare provider would typically take. The study’s authors warn that “AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel misinformation.”
In response, OpenAI has implemented tighter safeguards. As of August 4, ChatGPT is designed to avoid offering direct guidance on high-stakes health or safety decisions and to instead direct users toward evidence-based resources.