
When AI gets it wrong: ChatGPT ‘nearly kills’ woman by confidently misidentifying a deadly plant

AI chatbots like ChatGPT have become a common first stop for people seeking quick answers to everything from mundane tasks to complex questions. But as one influencer recently highlighted, over-relying on these tools can sometimes be dangerous.

The warning came from Kristi, who shared the incident with her nearly half a million Instagram followers. In a series of posts and clips, she described how her friend sent ChatGPT images of an unknown plant growing in her garden and asked a simple question: What plant is this?

According to the screenshots Kristi shared, the chatbot identified the plant as carrot greens. It pointed to the “finely divided and hairy leaves” as a classic sign of carrot tops and stated that it was “pretty unlikely” the plant was poison hemlock. It also listed possible look-alikes, including parsley, coriander, Queen Anne’s lace and, worryingly, poison hemlock itself.

Assurance That Could Be Fatal

Worried, Kristi’s friend asked directly whether the plant was poison hemlock. ChatGPT repeatedly assured her that it was not.

“I don’t know if you know this: if you eat it, you die. If you touch it, you can die,” Kristi told her followers, emphasizing the danger of poison hemlock. She then shared independent research explaining that hemlock causes systemic poisoning and has no antidote.

The AI Missed Basic Toxic Features

Even after receiving additional images, the chatbot continued to rule out poison hemlock as a possibility. It claimed the plant lacked the smooth, hollow stems with purple blotches that mark hemlock, features Kristi said were clearly visible in the photos. At one point, the AI even suggested the plant might be carrot greens growing in the shared school garden where her friend works. Alarmed, Kristi ran the same photos through Google Lens, which quickly identified the plant as poison hemlock. The friend then tried a fresh ChatGPT session, which confirmed that the plant was poisonous.

Kristi’s Strong Warning

“She is a grown adult and thank God she knew to ask me beyond what ChatGPT said,” Kristi said. “Because what if they didn’t? They’d literally be dead. There’s no antidote for this.”

She wrote in a post caption:
“Chat GPT ALMOST killed my best friend by telling her that POISON HEMLOCK WAS A CARROT. Not only did it say it was a carrot, it doubled down over and over again and CONFIRMED with ABSOLUTE certainty that it was NOT poison hemlock, it was ACTUALLY a parsnip. Spoiler: poison hemlock. It has NO antidote and is EXTREMELY lethal.”

She added:
“This is a warning to you that ChatGPT, other large language models, and other AIs are not your friends, they are not to be trusted, they are unhelpful, they suck, and they can cause serious harm.”

About Poison Hemlock

Poison hemlock (Conium maculatum) is historically infamous, best known as the plant used to execute Socrates in 399 BC. Every part of the plant (seeds, roots, stems, leaves, and fruit) is poisonous, and even small amounts can be fatal. Its resemblance to carrots and Queen Anne’s lace makes it particularly dangerous to non-experts.

Symptoms of Hemlock Poisoning

Symptoms can begin within 15 minutes of ingestion and include sweating, vomiting, dilated pupils, excessive salivation, dry mouth, rapid heartbeat, high blood pressure, confusion, muscle twitching, tremors, and seizures. Severe cases can lead to kidney failure, central nervous system depression, and respiratory collapse. There is no specific antidote, and treatment focuses on managing symptoms.

FAQ:

Q1. What did ChatGPT get wrong?
ChatGPT misidentified deadly poison hemlock as harmless carrot greens, an error that nearly endangered the life of the user’s friend.

Q2. Who is Kristi?
Kristi is a content creator and Instagram influencer with nearly half a million followers. She shared the story to warn others about blindly trusting AI.
