
OpenAI plans ChatGPT changes after suicides, lawsuit

OpenAI CEO Sam Altman speaks on July 22, 2025, during a Federal Reserve event in Washington, D.C., United States.

Ken Cedeno | Reuters

OpenAI on Tuesday detailed how ChatGPT handles "sensitive situations" and outlined plans to address the chatbot's shortcomings, after a family accused the chatbot of playing a role in their teenage son's death by suicide.

"We will keep improving, guided by experts and grounded in responsibility to the people who use our tools, and we hope others will join us in helping make sure this technology protects people at their most vulnerable," OpenAI said in a blog post titled "Helping people when they need it most."

Earlier on Tuesday, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16, NBC News reported. In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods."

The company did not mention the Raine family or the lawsuit in its blog post.

OpenAI said ChatGPT is trained to direct people to seek help when they express suicidal intent, but acknowledged that the chatbot's safeguards can degrade over long conversations, leading it to offer responses that go against the company's protections after many messages.

OpenAI said its GPT-5 model, released earlier this month, causes the chatbot to de-escalate such conversations, and that it is working on an update exploring how to "connect people to certified therapists before they are in an acute crisis," possibly including a network of licensed professionals that users could reach directly through ChatGPT.

OpenAI also said it is looking into how to connect users with "those closest to them," such as friends and family members.

For teens, OpenAI said it will soon introduce parental controls that give parents more insight into how their children use ChatGPT.

Jay Edelson, lead counsel for the Raine family, told CNBC on Tuesday that no one from OpenAI has reached out to the family directly to offer condolences or to discuss any effort to improve the safety of the company's products.

"If you're going to use the most powerful consumer technology on the planet, you have to trust that the founders have a moral compass," he said. "That's the question for OpenAI right now: how can anyone trust them?"

Raine’s story is not isolated.

Earlier this month, writer Laura Reiley published an essay in The New York Times detailing how her 29-year-old daughter died by suicide after discussing the idea extensively with ChatGPT. And in a case in Florida, 14-year-old Sewell Setzer III died by suicide last year after conversing with an AI chatbot on the app Character.AI.

As AI services grow in popularity, concerns have mounted about their use for therapy, companionship and other emotional needs.

Regulating the industry, however, may prove difficult.

On Monday, a coalition of AI companies, venture capitalists and executives, including OpenAI President and co-founder Greg Brockman, announced Leading the Future, a political operation that will "oppose policies that stifle innovation" when it comes to artificial intelligence.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

WATCH: OpenAI says Musk's filing is consistent with "ongoing pattern of harassment"
