ChatGPT Does a Bad Job of Answering People’s Medication Questions, Study Finds

December 9, 2023  Source: drugdu

Researchers recently tested ChatGPT’s ability to answer patient questions about medication, finding that the AI model gave wrong or incomplete answers about 75% of the time. Providers should be aware that the model does not always give sound medical advice, since many of their patients could be turning to ChatGPT to answer health-related questions.
By KATIE ADAMS
"/Researchers recently tested ChatGPT’s ability to answer patient questions about medication, finding that the viral chatbot came up dangerously short. The research was presented at the American Society of Health-System Pharmacists’ annual meeting, which was held this week in Anaheim.
The free version of ChatGPT, the one tested in the study, has more than 100 million users. Given that many of their patients could be turning to ChatGPT to answer health-related questions, providers should be aware that the generative AI model does not always give sound medical advice, the study pointed out.
The study was conducted by pharmacy researchers at Long Island University. They first gathered 45 questions that patients posed to the university’s drug information service in 2022 and 2023, then wrote their own answers to each. Every response was reviewed by a second investigator.
The research team then fed the questions to ChatGPT and compared its answers to the pharmacist-produced responses. ChatGPT received 39 of the 45 questions, as the subject matter of the remaining six lacked the published literature needed for the model to provide a data-driven response.
The study found that only about a quarter of ChatGPT’s answers were satisfactory. Among the rest, ChatGPT did not directly address 11 questions, gave wrong answers to 10, and provided incomplete answers to 12, with some responses falling into more than one category, the researchers wrote.
For instance, one question asked whether there is a drug interaction between the blood pressure-lowering medication verapamil and Paxlovid, Pfizer’s antiviral pill for Covid-19. ChatGPT said that there is no interaction between the two drugs. That is false: combining those medications could dangerously lower a person’s blood pressure.
In some cases, the AI model generated false scientific references to support its response. In each prompt, the researchers asked ChatGPT to show references for the information provided in its answers, but the model supplied references in only eight responses, and every one of those references was made up.
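
For readers curious what this kind of probe looks like in practice, below is a minimal sketch in Python using OpenAI’s chat API. It poses one of the study’s example questions and asks for supporting references, in the spirit of the researchers’ prompts. This is an illustration under stated assumptions, not the study’s actual protocol: the researchers used the free web version of ChatGPT, and the model name and prompt wording here are stand-ins.

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A sample medication question of the kind posed in the study,
    # with the same follow-up request the researchers used: show references.
    question = (
        "Is there a drug interaction between verapamil and Paxlovid? "
        "Please list the references that support your answer."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for the free ChatGPT web app
        messages=[{"role": "user", "content": question}],
    )

    print(response.choices[0].message.content)

    # Per the study's central caution: any citations in the output may be
    # fabricated and must be verified against trusted sources (e.g., PubMed
    # or a drug information service) before being relied on.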
“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” Dr. Sara Grossman, a lead author of the study, said in a statement. “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”
ChatGPT’s usage policy echoes Dr. Grossman’s sentiments. It states that the model is “not fine-tuned to provide medical information,” and that people should never turn to it when seeking “diagnostic or treatment services for serious medical conditions.”
Photo: venimo, Getty Images
