ChatGPT and healthcare: could the AI chatbot change the patient experience?

ChatGPT, the artificial intelligence chatbot released by OpenAI in November 2022, is known for its ability to answer questions and provide detailed information in seconds, all in a clear and conversational manner.

As its popularity grows, ChatGPT is popping up in virtually every industry, including education, real estate, content creation, and even healthcare.

Although the chatbot has the potential to change or improve certain aspects of the patient experience, experts warn that it comes with limitations and risks.

They say AI should never be used as a substitute for medical care.

Searching for medical information online is nothing new – people have been searching their symptoms on Google for years.

But with ChatGPT, people can ask health-related questions and engage in what feels like an interactive “conversation” with a seemingly omniscient source of medical information.

“ChatGPT is much more powerful than Google and certainly gives more compelling results, whether (those results are) good or bad,” Dr. Justin Norden, a digital health and AI expert and adjunct professor at Stanford University in California, told Fox News Digital in an interview.

ChatGPT has potential use cases in virtually every industry, including healthcare. (Stock)

With Internet search engines, patients get information and links, but then they decide where to click and what to read. With ChatGPT, the answers are explicitly and directly given to them, he explained.

One big caveat is that ChatGPT’s data source is the internet – and there’s a lot of misinformation on the web, as most people know. That’s why chatbot answers, no matter how convincing, should always be checked by a doctor.

Additionally, ChatGPT is only “trained” on data through September 2021, according to multiple sources. Although it can expand its knowledge over time, it is limited in its ability to surface newer information.

Dr. Daniel Khashabi, a professor of computer science at Johns Hopkins in Baltimore, Maryland, and an expert in natural language processing systems, fears that as people get used to relying on conversational chatbots, they will be exposed to an increasing amount of inaccurate information.

“There is a lot of evidence that these models are perpetuating false information that they saw during their training, regardless of their origin,” he told Fox News Digital in an interview, referring to the chatbots’ “training.”

“I think that’s a big concern in the public health field, because people are making life-changing decisions about things like medications and surgeries based on that feedback,” Khashabi added.

“I think it could create a collective danger to our society.”

It could “remove” some “non-clinical burden”

Patients could potentially use ChatGPT-based systems to do things like schedule appointments with medical providers and refill prescriptions, eliminating the need to make phone calls and endure long wait times.

“I think these kinds of administrative tasks are well suited to these tools, to help remove some of the non-clinical burden of the health system,” said Norden.

With ChatGPT, people can ask health-related questions and engage in what feels like an interactive “conversation” with a seemingly omniscient source of medical information. (Gabby Jones/Bloomberg via Getty Images)

To enable these types of features, a provider would need to integrate ChatGPT into its existing systems.
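As a rough sketch only, here is one way such an integration could work if a provider wired ChatGPT into its scheduling software through OpenAI's Python SDK; the model name, system prompt and handle_patient_message helper are illustrative assumptions, not details from the article or any specific product.

```python
# Minimal sketch of a clinic front-desk assistant built on OpenAI's chat API.
# Everything clinic-specific here (prompt, model choice, helper name) is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a front-desk assistant for a medical clinic. "
    "Help patients schedule appointments or request prescription refills. "
    "Do not give medical advice; refer clinical questions to a clinician."
)

def handle_patient_message(message: str) -> str:
    """Send one patient message to the model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever the deployment provides
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
        temperature=0.2,  # keep administrative replies conservative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(handle_patient_message("I need to refill my blood pressure medication."))
```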

According to Khashabi, these kinds of uses could be helpful if implemented the right way, but he warned that they could frustrate patients if the chatbot does not work as expected.

“If the patient asks for something and the chatbot didn’t see that condition or a particular way of phrasing it, it could break down, and that’s not good customer service,” he said.

“There should be very careful deployment of these systems to ensure they are reliable.”

Khashabi also thinks there should be a fallback mechanism so that if a chatbot realizes it’s about to fail, it immediately switches to a human instead of continuing to respond.

“These chatbots tend to ‘hallucinate’ – when they don’t know something, they keep making things up,” he warned.
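The fallback Khashabi describes can be pictured, in very simplified form, as a confidence check that hands the conversation to a person whenever the bot's answer looks shaky; the confidence score and threshold below are illustrative assumptions, since a real deployment would derive them from the model itself or a separate classifier.

```python
# Simplified sketch of a "hand off to a human" fallback for a health chatbot.
# The confidence value and cutoff are placeholders, not a production design.
from dataclasses import dataclass

HANDOFF_THRESHOLD = 0.75  # assumed cutoff, tuned per deployment

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0 (guessing) to 1.0 (well supported)

def respond_or_escalate(reply: BotReply) -> str:
    """Return the bot's answer only when confidence is high; otherwise escalate."""
    if reply.confidence >= HANDOFF_THRESHOLD:
        return reply.text
    # Low confidence: do not keep improvising ("hallucinating") -- route to a person.
    return "I'm not certain about this one. Let me connect you with a staff member."

# Example: a confident scheduling answer goes through, a shaky one is escalated.
print(respond_or_escalate(BotReply("You're booked for Tuesday at 3 p.m.", 0.92)))
print(respond_or_escalate(BotReply("Your condition is probably...", 0.40)))
```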

It can share information about the uses of a drug

Although ChatGPT states that it does not have the ability to write prescriptions or offer medical treatments to patients, it does offer detailed drug information.

Patients can use the chatbot, for example, to learn more about a drug’s intended uses, side effects, drug interactions, and proper storage.

ChatGPT doesn’t have the ability to make prescriptions or offer medical treatments, but it could potentially be a useful resource for getting information about medications. (Stock)

When asked if a patient should take a certain medication, the chatbot replied that it was not qualified to make medical recommendations.

Instead, it said people should contact a licensed healthcare provider.

It can provide details about mental health issues

Experts agree that ChatGPT should not be considered a substitute for a therapist. It’s an AI model, so it lacks the empathy and nuance that a human doctor would provide.

However, given the current shortage of mental health providers and the sometimes long wait times for appointments, it can be tempting for people to use AI as a means of interim support.

“With the shortage of providers in the midst of a mental health crisis, especially among young adults, there is an incredible need,” said Stanford University’s Norden. “But on the other hand, these tools are untested.”

He added: “We don’t know exactly how they will interact, and we’ve already started to see cases of people interacting with these chatbots for long periods of time and getting strange results that we can’t explain.”

Patients could potentially use ChatGPT-based systems to do things like schedule appointments with medical providers and renew prescriptions. (Stock)

When asked if it could provide mental health support, ChatGPT provided a disclaimer that it cannot replace the role of a licensed mental health professional.

However, it said it can provide information on mental health issues, coping strategies, self-care practices and resources for professional help.

OpenAI “prohibits” the use of ChatGPT for medical purposes

OpenAI, the company that created ChatGPT, warns in its usage policies that the AI chatbot should not be used for medical purposes.

Specifically, company policy states that ChatGPT should not be used to “tell someone that they have or do not have a certain medical condition, or to provide instructions on how to cure or treat a health problem.”

The policy also says that OpenAI’s models “are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.”

It also states that “OpenAI’s platforms should not be used to triage or manage life-threatening issues that require immediate attention.”

In scenarios where providers use ChatGPT for healthcare applications, OpenAI requires them to “provide a disclaimer to users informing them that AI is being used and of its potential limitations.”

Like the technology itself, ChatGPT’s role in healthcare is set to continue to evolve.

While some think it has exciting potential, others think the risks need to be weighed carefully.

As Dr. Tinglong Dai, a Johns Hopkins professor and renowned healthcare analytics expert, told Fox News Digital, “The benefits will almost certainly outweigh the risks if the medical community is actively involved in the development effort.”
