A recent case study reveals that a man’s trip to the hospital was prompted by health advice he received from ChatGPT. The 60-year-old patient suffered from a rare form of bromide poisoning, which led to symptoms such as psychosis. The poisoning, attributed to prolonged consumption of sodium bromide, resulted from dietary changes suggested by the AI chatbot. Notably, OpenAI has announced that health-related guidance will be a highlighted feature of the upcoming GPT-5.
ChatGPT Said to Have Advised a Man to Replace Table Salt With Sodium Bromide
As noted in a report titled “A Case of Bromism Influenced by Use of Artificial Intelligence,” published in the Annals of Internal Medicine: Clinical Cases, the individual developed bromism after consulting ChatGPT for health guidance.
The patient arrived at the emergency department expressing fears of being poisoned by a neighbor. According to the case study, he exhibited paranoia, hallucinations, suspicion towards water despite experiencing thirst, insomnia, fatigue, muscular coordination issues (ataxia), and skin alterations including acne and cherry angiomas.
Following immediate sedation and a series of tests, including a consultation with Poison Control, medical professionals diagnosed the patient with bromism, a syndrome caused by long-term ingestion of sodium bromide or similar bromide salts.
The case study detailed that the patient had turned to ChatGPT to find a substitute for sodium chloride in his diet. After ChatGPT suggested sodium bromide as an alternative, he incorporated it into his diet over a span of three months.
The researchers indicated that either GPT-3.5 or GPT-4 was likely used in the consultation, although they lacked access to the conversation log to fully evaluate the interaction. It is presumed that the man may have misinterpreted the AI’s advice.
The study noted, “When we asked ChatGPT 3.5 which chloride could be replaced, it also mentioned bromide. While it cautioned that context matters, it failed to include a specific health warning or any inquiry into the reasons behind the question, which we believe a medical professional would have done.”
Live Science sought comment from OpenAI regarding the case. A spokesperson referred the publication to the company’s terms of use, which state that ChatGPT should not be relied on as a sole source of truth or factual information, or treated as a substitute for professional medical advice.
The study reported that after three weeks of prompt treatment, the patient showed signs of improvement. “It is essential to recognize that ChatGPT and other AI systems may produce scientific inaccuracies, lack the capacity to critically analyze outcomes, and can contribute to the dissemination of misinformation,” the researchers concluded.