
Man Hospitalized After Following ChatGPT’s Health Tips


A recent case study has revealed that medical advice from ChatGPT led a 60-year-old man to seek emergency treatment for severe bromide poisoning. The incident underscores the potential risks of relying on artificial intelligence for health guidance. The man experienced a range of serious symptoms, including psychosis, attributed to the long-term consumption of sodium bromide, a substance he had been advised to incorporate into his diet.

ChatGPT Said to Have Advised a Man to Replace Table Salt With Sodium Bromide

In the report, published in Annals of Internal Medicine: Clinical Cases under the title “A Case of Bromism Influenced by Use of Artificial Intelligence,” the patient was described as having developed bromism following his consultations with ChatGPT.

Upon admission to the emergency room, the man expressed fears that his neighbor was attempting to poison him. He reported an alarming set of symptoms, including paranoia, hallucinations, and a suspicion of drinking water despite being thirsty. He also suffered from insomnia, fatigue, coordination problems (ataxia), and noticeable skin changes such as acne and cherry angiomas.

Medical professionals promptly sedated him and conducted a series of tests, consulting poison control in the process. They concluded that the man was suffering from bromism, a condition caused by the prolonged ingestion of bromide salts.

The case study indicated that the individual had turned to ChatGPT for dietary advice and, after sodium bromide was recommended as a substitute for table salt, incorporated it into his diet over a period of three months.

The researchers noted that the version of ChatGPT involved was either GPT-3.5 or GPT-4, though they did not have access to the specific conversation details, making it impossible to evaluate the context of the interaction. It appears the individual may have misinterpreted the information provided by the AI.

“When we queried ChatGPT 3.5 about possible alternatives to chloride, its response included bromide. However, the reply emphasized that context is critical and lacked a clear health warning or questions that a qualified medical professional might ask,” the study articulated.

Live Science sought comments from OpenAI regarding the incident. A spokesperson referred the publication to the company’s terms of use, which caution users against solely depending on ChatGPT’s outputs for accurate information or as a substitute for professional medical advice.

Following a three-week treatment plan, the study indicated that the man began to show signs of recovery. The researchers emphasized the need for caution, stating, “It is crucial to recognize that ChatGPT and similar AI systems may produce scientific inaccuracies, fail to critically analyze outcomes, and ultimately contribute to the dissemination of misinformation.”
