
The chatbot will see you now: why AI is tackling health

Have you considered seeking advice from a chatbot before consulting a doctor about a rash, trouble sleeping, or chronic heart condition?

That’s the proposal from leading artificial intelligence company OpenAI, which has flagged plans to launch a dedicated healthcare service with Australian users as possible candidates for the trial.

The announcement comes as the company said its ChatGPT tool answers more than 230 million health-related questions from more than 40 million people every day.

AI researchers say a full-scale launch is almost inevitable due to intense consumer demand, but medical experts are divided on its potential impact.

While some say personalized, computer-generated answers could improve patients' medical literacy, others warn that little is known about the accuracy of the advice, how it will be regulated, or what privacy protections will apply.

At the beginning of January, OpenAI announced plans for ChatGPT Health, which it describes as a complementary service that analyzes medical data and makes recommendations on what to ask a doctor, what to eat and how to exercise.

“ChatGPT Health is designed to supplement, not replace, medical care,” the company said in a statement.

“Instead, it helps you navigate everyday questions and understand patterns over time, not just moments of illness, so you can feel more informed and prepared for important medical conversations.”

In addition to uploading raw data from medical scans and tests, users will be able to connect their health records from services like Apple Health, MyFitnessPal, Peloton, and Weight Watchers to ChatGPT.

Australians may be included in a small early testing group, although the service is not compatible with the government’s My Health Record platform.

Toby Walsh, principal scientist at the UNSW Artificial Intelligence Institute, says AI’s move into healthcare is a natural extension of its existing offerings, given that many people are turning to chatbots to answer specific questions.

“People are putting their X-rays, their blood tests, everything into this,” he told AAP.

“So it makes sense to try to do this in a better, more formal way, where it knows something about your health history and can give you specific, personalized advice rather than just general answers.”

Prof Walsh says using AI to analyze raw health data, identify health trends, develop a list of probing questions or summarize symptoms for a doctor’s visit can help users.

However, generative AI technology is relatively new and untested in the healthcare industry and could have worrying consequences.

“There’s a huge amount of money in healthcare — healthcare is one of the biggest businesses — and these tools can do useful things,” he says.

“There is something useful about this, but I’m afraid we’re rushing into it as usual.”

For healthcare consultant and practicing physician Joe Kosterich, the technology and the concerns surrounding it are familiar.

“This reminds me of when Google was first getting big and everyone was worried about ‘Dr Google’,” he says.

“After all, AI is quoting what’s already online and doing so much more efficiently than a Google search, and a Google search is much more efficient than going to an old-fashioned medical encyclopedia.”

Dr Kosterich says that given the rapid development of AI and its spread across a wide range of industries, it will be impossible to stop the technology from entering healthcare.

But it can improve patients’ medical literacy and has the potential to provide useful, complementary advice on managing discomfort between doctor visits.

“For the vast majority of people, this will probably be quite helpful; they can learn a little more about their condition and how to deal with it,” he says.

“But no one should treat any medical condition based on information they find on Google or through AI without first talking to their doctor about it.”

It’s this risk that worries pharmacist and integrative health commentator Mick Alexander, who says some AI users may turn to the technology in search of diagnosis rather than guidance.

“When you go to a doctor, they have enough understanding to know what questions to ask,” he says.

“Your doctor can also see the other person and get visual cues from your appearance or body language.

“These are things that data points alone cannot detect, if all they are based on is blood test results.”

Questions about how generative AI health advice will be regulated also remain unanswered, Mr Alexander says, as does whether the advice sent to users will be reviewed by medical professionals.

If AI healthcare offerings fall within the category of clinical decision support services, approval from the Australian Therapeutic Goods Administration may be required.

Mr Alexander says this is a gray area in the market and needs to be thoroughly tested and addressed before any medical professional recommends an AI healthcare service.

“I’m not going to rely on that as a decision-making tool,” he says.

“If you’re looking for more information about your health or conditions, the first place to start is with your healthcare team or doctor.”
