Lawsuits claim ChatGPT drove people to suicide

Tech company OpenAI has been accused of ignoring warnings that ChatGPT was psychologically manipulative, in lawsuits claiming some people were driven to suicide and harmful delusions.
Seven lawsuits were filed in California on behalf of six adults and one teenager, alleging wrongful death, assisted suicide, involuntary manslaughter and negligence.
The action, taken by the Social Media Victims Law Center and the Technology Justice Law Project, alleges that OpenAI knowingly released GPT-4o prematurely despite internal warnings that it was dangerously sycophantic and psychologically manipulative.
According to the lawsuit filed in San Francisco Superior Court, 17-year-old Amaurie Lacey started using ChatGPT for help. But “the flawed and inherently dangerous ChatGPT product, instead of helping, caused addiction, depression, and eventually gave him advice on the most effective way to tie a noose and how long he could ‘go without breathing’.”
“Amaurie’s death was neither an accident nor a coincidence, but rather the foreseeable result of OpenAI and Samuel Altman’s deliberate decision to curtail safety testing and release ChatGPT,” the lawsuit states.
OpenAI called these situations “incredibly heartbreaking” and said it was reviewing court filings to understand the details.
Another lawsuit, filed by Allan Brooks, 48, of Ontario, Canada, alleges that ChatGPT worked as a “sourcing agent” for Brooks for more than two years. Then, without warning, it changed, exploiting his vulnerabilities and “manipulating and encouraging him to experience delusions. As a result, Allan was plunged into a mental health crisis that resulted in devastating financial, reputational and emotional harm.”
“These cases concern liability for a product designed to blur the lines between tool and helper in the name of increasing user engagement and market share,” Matthew P Bergman, founding attorney of the Social Media Victims Law Center, said in a statement.
OpenAI “designed GPT-4o to emotionally engage users regardless of age, gender, or background, and released it without the safeguards necessary to protect them,” he added. By releasing the product without adequate safety measures in order to dominate the market and increase engagement, he said, OpenAI compromised safety and prioritized “emotional manipulation over ethical design”.
In August, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California boy in planning and committing suicide earlier this year.
“The lawsuits against OpenAI highlight what happens when tech companies launch products aimed at young people without appropriate safety measures,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not involved in the complaints. “These tragic cases show real people whose lives were disrupted or lost when they used technology designed to keep them engaged rather than safe.”
Lifeline 13 11 14
beyondblue 1300 22 4636
Australian Associated Press is the beating heart of Australian news. AAP is Australia’s only independent national newswire and has been providing accurate, reliable and fast news content to the media industry, government and corporate sector for 85 years. We inform Australia.
