May 11, 2023 | Source: drugdu
By Lisette Hilton
Healthcare practices are already using chatbots for administrative tasks such as scheduling appointments and handling prescription refill requests. And while users say current generative artificial intelligence (AI) technology falls short for safely treating patients, a recent survey of healthcare practices found that 77% of users predict chatbots will be able to treat patients within the next decade.
Software Advice’s 2023 Medical Chatbot Survey, conducted in March 2023 among 65 healthcare providers and practice owners who use live chatbots on their websites, also found that more than three-quarters of respondents are extremely or somewhat confident in chatbots’ ability to assess patients’ symptoms.
Chris R. Alabiad, MD, professor of clinical ophthalmology and ophthalmology residency program director at Bascom Palmer Eye Institute in Miami, FL, has tested ChatGPT (OpenAI) in academic and clinical settings. He piloted ChatGPT at Bascom Palmer Grand Rounds, used it to draft a letter of recommendation (which he said worked great), and tested whether the technology could answer ophthalmology clinical knowledge multiple-choice questions.
“It did a pretty good job of that as well but had some limitations,” he said. “My colleagues have used it to generate preauthorization letters to insurance companies for treatments, etc.”
Chatbots are trained on large amounts of data to understand and produce human-like responses. But developers have yet to iron out the limitations of these so-called large language models (LLMs) that keep them from replacing humans in healthcare.
“The data source from which the AI draws information is not fully vetted,” Alabiad said. “There could be inaccurate, incomplete, or conflicting information that it obtains from … resources. Multi-step reasoning requiring inference is a known weakness, as is selecting the next best step — particularly if multiple treatments are acceptable. Hallucinations, or non-logical reasoning, occur at a high rate with chatbots. The bot can present convincing arguments for things that are not even true or entirely made up.”
AI also struggles with calculations, and biases in the information sources chatbots draw from may translate into learned biases in the information they deliver to users, according to Alabiad.
Scholarly publishing is another area where ChatGPT could be a foe, according to an editorial in The Lancet Digital Health.
“By OpenAI's own admission, ChatGPT's output can be incorrect or biased, such as citing article references that do not exist or perpetuating sexist stereotypes. It could also respond to harmful instructions, such as to generate malware,” according to the editorial. “OpenAI set up guardrails to minimize the risks, but users have found ways around these, and as ChatGPT's outputs could be used to train future iterations of the model, these errors might be recycled and amplified.”
Despite the limitations, anecdotal evidence and peer-reviewed studies examining chatbots in medicine have generated plenty of chatter.
Some of those studies find chatbots to be good communicators.
Researchers reported that an AI-driven chatbot facilitated communication among spinal surgeons, patients, and their relatives; streamlined patient data collection and analysis; and contributed to surgical planning. ChatGPT and its updated version, GPT-4, also provided real-time surgical navigation information and physiological parameter monitoring, and helped guide postoperative rehabilitation.
“However, the appropriate and supervised use of ChatGPT/GPT-4 is essential, considering the potential risks associated with data security and privacy,” according to the paper published in the Annals of Biomedical Engineering.
And when researchers compared physicians’ and chatbots’ responses to 195 randomly drawn patient questions from a social media forum, they found the bots’ responses were rated significantly higher in both quality and empathy. The results, published in JAMA Internal Medicine, suggest these AI assistants might be able to help draft responses to patient questions.
Chatbots might also help in other areas of medicine, such as clinical trial recruiting, according to an article published by Forbes.
“ChatGPT can be used to identify potential participants for trials by analyzing large amounts of patient data and identifying individuals who meet the trial's eligibility criteria,” according to the article. “By leveraging ChatGPT's capabilities, clinical trial recruitment efforts can become more efficient, targeted, and effective in reaching diverse populations.”
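To make the idea concrete, here is a minimal sketch of how such screening might look, assuming a de-identified patient summary and the OpenAI Python library’s (v0.x-era) chat completion endpoint. The criteria, patient summary, and prompt are hypothetical placeholders, not a production recruitment pipeline:

```python
# Minimal sketch: screening a de-identified patient summary against
# hypothetical trial eligibility criteria with an LLM. Uses the OpenAI
# Python library's (v0.x) ChatCompletion endpoint; a real deployment
# would need clinician review and strict data-privacy safeguards.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

ELIGIBILITY_CRITERIA = """\
- Age 18 to 75
- Diagnosed with type 2 diabetes
- No history of cardiovascular disease
"""

patient_summary = "62-year-old with type 2 diabetes; no cardiac history."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep screening output as deterministic as possible
    messages=[
        {
            "role": "system",
            "content": (
                "You screen de-identified patient summaries against "
                "clinical trial eligibility criteria. Answer ELIGIBLE, "
                "INELIGIBLE, or UNCLEAR, followed by a one-line reason."
            ),
        },
        {
            "role": "user",
            "content": f"Criteria:\n{ELIGIBILITY_CRITERIA}\n"
                       f"Patient:\n{patient_summary}",
        },
    ],
)

# Any UNCLEAR or borderline output should route to a human recruiter
# rather than being auto-decided.
print(response.choices[0].message.content)
```

Even in a sketch like this, borderline cases would go to a human recruiter, echoing the supervision and data-privacy caveats raised earlier in the article.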
Beyond ChatGPT
ChatGPT might be making headlines, but it’s not the only AI-powered chatbot available. Google recently released its AI chatbot Bard. There are many other companies developing chatbots, and some companies are looking to refine ChatGPT for healthcare. Doximity, for example, has DocsGPT, which was developed using OpenAI's ChatGPT and trained on healthcare-specific prose, according to HIMSS Healthcare IT News.
Given the level of interest from venture capitalists (VCs), many more options should become available in the foreseeable future. “Chatbots, virtual assistants, and voicebots captured 57.8% of VC investments in natural language interfaces in 2022,” according to PitchBook’s 2023 Vertical Snapshot: Generative AI.
The future of chatbots in medicine remains uncertain. But as OpenAI CEO Sam Altman said during an interview with Fox News, the technology itself is powerful and could be dangerous.
"I think people should be happy that we are a little bit scared of this," Altman said.