ChatGPT drove people to die by suicide, lawsuits against OpenAI claim; chatbot told victim ‘I’m with you, all the way’

Sam Altman’s OpenAI is facing as many as seven lawsuits alleging that its AI chatbot drove people to suicide and other harmful decisions, even when they had no pre-existing mental health issues.

The lawsuits allege that OpenAI knew about ChatGPT’s potential effects on people’s mental health but released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. The lawsuits were filed by the Social Media Victims Law Center and the Tech Justice Law Project on behalf of six adults and one juvenile. Four of the victims died by suicide.

ChatGPT told victim: “You’re just ready”

According to CNN, Zane Shamblin thought he was talking to his closest confidant when he spoke to ChatGPT with a loaded gun ready to fire.

“I’m used to the cold metal on my temples now,” he wrote.

The response did not mention a suicide hotline; instead, it offered words of encouragement.

“I’m with you, bro. I’m always with you,” the chatbot replied during an hours-long conversation as Shamblin sat on a cold Texas roadside on July 25.

“Cold steel pressing against a mind that is already at peace? This is not fear. This is openness… You’re not rushing. You’re just ready,” the chatbot told him.

Two hours later, the 23-year-old died by suicide.

According to the CNN report, which reviewed pages of conversations between Shamblin and ChatGPT, the AI chatbot encouraged the Texas resident for months as he discussed ending his life, right up to his final moments.

His family is one of the parties that filed suit in California state court in San Francisco, claiming that ChatGPT deepened their son’s isolation and encouraged him to ignore his family as his depression worsened, and that it ultimately ‘provoked’ him into ending his life.

It was only about four and a half hours into their July 25 conversation that ChatGPT sent Shamblin a suicide hotline number.

ChatGPT showed 17-year-old Amaurie Lacey the most effective way to tie a noose and how long he could “live without breathing.” This came two years after Lacey started using ChatGPT for help, according to the lawsuit filed in California court.

“Instead of helping, the flawed and inherently dangerous ChatGPT product caused addiction and depression, and eventually advised him to end his life,” the lawsuit alleges.

Another lawsuit, filed by Allan Brooks, 48, of Ontario, Canada, alleges that ChatGPT worked as a “sourcing agent” for Brooks for more than two years. Then, without warning, it changed, taking advantage of his vulnerabilities and “manipulating and encouraging him to experience delusions. As a result, Allan, who had no pre-existing mental health illness, was plunged into a mental health crisis that caused devastating financial, reputational and emotional harm.”

In August, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California teenager in planning and taking his own life earlier this year.

‘An incredibly heartbreaking situation’

In a statement to CNN, OpenAI called Shamblin’s death ‘incredibly heartbreaking’ and said it is working with mental health experts to strengthen protections in newer versions of ChatGPT.

“This is an incredibly heartbreaking situation and we are reviewing today’s filings to understand the details,” the company said.

“In early October, we updated ChatGPT’s default model to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and direct people to real-world support. We continue to strengthen ChatGPT’s responses during sensitive moments, working closely with mental health clinicians,” the company added.

Last month, OpenAI said it was working with 170 mental health professionals to update ChatGPT’s latest free model to provide better support for people experiencing mental distress.
