
Google AI Overviews put people at risk of harm with misleading health advice

An investigation by the Guardian has revealed that people are at risk of harm due to inaccurate and misleading health information in Google’s AI summaries.

The company says its AI Overviews, which use generative AI to provide snapshots of key information about a topic or question, are “helpful” and “trustworthy”.

But some summaries that appear at the top of search results present inaccurate health information and put people at risk of harm.

In a case experts described as “truly dangerous”, Google mistakenly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the opposite of what should be recommended and could increase the risk of patients dying from the disease.

In another “alarming” example, Google gave false information about vital liver function tests that could mislead people with serious liver disease into thinking they were healthy.

Google searches for answers about cancer testing for women also yielded “completely false” information, which experts said could lead people to ignore real symptoms.

A Google spokesperson said most of the health examples shared with them were “incomplete screenshots” but that, as far as they could assess, the overviews “linked to well-known, reputable sources and recommended seeking expert advice.”

The Guardian’s research comes amid growing concern that AI-generated information could mislead consumers who assume it is trustworthy. A study conducted in November last year found that artificial intelligence chatbots on various platforms gave incorrect financial advice, and similar concerns have been raised about AI-generated news summaries.

Sophie Randall, director of the Patient Information Forum, which promotes evidence-based health information for patients, the public and healthcare professionals, said the examples showed that “Google’s AI Overviews can pose risks to people’s health by placing inaccurate health information at the top of online searches.”

Stephanie Parker, digital director at end-of-life charity Marie Curie, said: “People turn to the internet in times of anxiety and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health.”

The Guardian has uncovered several examples of inaccurate health information in Google’s AI Overviews after a number of health groups, charities and professionals raised concerns.

Anna Jewell, director of support, research and impact at Pancreatic Cancer UK, said it was “completely wrong” to advise patients to avoid high-fat foods. She added that doing so “can be really dangerous and jeopardize a person’s chances of being well enough to receive treatment.”

Jewell said: “The Google AI response tells people with pancreatic cancer to avoid high-fat foods and provides a list of examples. But if someone follows what the search result tells them, they may not get enough calories, have difficulty gaining weight, and be unable to tolerate chemotherapy or potentially life-saving surgery.”

Searching for “what is the normal range for liver blood tests” also yielded misleading information: too many numbers, too little context, and no account taken of a patient’s nationality, sex, ethnicity or age.

Pamela Healy, chief executive of the British Liver Trust, said the AI summaries were worrying. “Many people with liver disease don’t show any symptoms until the advanced stages, which is why it’s so important they get tested. But what the Google AI Overview says is ‘normal’ can actually differ greatly from what’s considered normal.”

“This is dangerous because it means some people with severe liver disease may think their results are normal and not attend a follow-up medical appointment.”

A search for “vaginal cancer symptoms and tests” listed the Pap test as a test for vaginal cancer, which is incorrect.

Athena Lamnisos, chief executive of the cancer charity the Eve Appeal, said: “This is not a test to detect cancer, and it is certainly not a test to detect vaginal cancer – this is completely false information. Receiving misinformation like this could lead someone not to get their vaginal cancer symptoms checked because a recent cervical screening test showed a clear result.”

“We were also concerned that when we ran the exact same search, the AI summary changed, drawing on different sources and giving different answers each time. That means people will get different answers depending on when they search, and that’s not good enough.”

Lamnisos said she was extremely worried. “Some of the results we’re seeing are really worrying and could potentially put women at risk,” she said.

The Guardian also found that Google’s AI Overviews gave misleading results for searches related to mental health conditions. “This is a huge concern for us as a charity,” said Stephen Buckley, head of information at Mind.

Buckley said some AI summaries of conditions such as psychosis and eating disorders offer “very dangerous advice” and “can be inaccurate, harmful or lead people to avoid seeking help.”

Some also miss important context or nuance, he added. “They may suggest accessing information from inappropriate sites… and we know that when AI summarizes information, it can often reflect existing biases, stereotypes or stigmatizing narratives.”

Google said the vast majority of its AI Overviews are factual and helpful, and that it continually makes quality improvements. It added that the accuracy of AI Overviews is on a par with that of other search features, such as featured snippets, which have existed for more than a decade.

The company also said that when an AI Overview misinterprets web content or misses context, it takes appropriate action in accordance with its policies.

A Google spokesperson said: “We are investing significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority of them provide accurate information.”
