
Families of Canadian mass shooting victims sue OpenAI, CEO Altman in US court

Written by: Ryan Patrick Jones and Diana Novak Jones

April 29 (Reuters) – Family members of victims of one of Canada’s deadliest mass shootings filed a lawsuit against OpenAI and its CEO Sam Altman in a U.S. court on Wednesday, claiming the company identified the attacker as a credible threat eight months before the attack but failed to alert police.

The lawsuits, filed in federal court in San Francisco, accuse OpenAI leaders of failing to notify police because doing so would have revealed the volume of violent conversations on ChatGPT and potentially jeopardized the company’s path to a nearly $1 trillion initial public offering.

In February, nine people, most of them children, died in a shooting attack in Tumbler Ridge, British Columbia.

An OpenAI spokesperson called the attack a “tragedy” and said the company has a zero-tolerance policy on using its tools to facilitate acts of violence.

“As we have shared with Canadian officials, we have already strengthened our security measures, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat violators of the policy,” the spokesperson said in a statement.

The lawsuits are part of a growing wave of litigation accusing AI companies of failing to prevent chatbot interactions that plaintiffs say contribute to self-harm, mental illness and violence. They appear to be the first in the U.S. to claim that ChatGPT played a role in facilitating mass murder.

Jay Edelson, who represents the plaintiffs, said he plans to file two dozen more lawsuits against the company in the coming weeks on behalf of others affected by the attack.

LAWSUITS SAY OPENAI SECURITY TEAM WAS OVERRULED

Jesse Van Rootselaar, whose interactions with ChatGPT are at the center of the cases, shot and killed his mother and half-brother at home on February 10 before killing an educational assistant and five students between the ages of 12 and 13 at his former school, according to police. Van Rootselaar, 18, later died by suicide.

The plaintiffs include relatives of those killed at the school and a 12-year-old girl who survived but remained in intensive care after being shot three times.

According to one of the complaints, in June 2025, OpenAI’s automated systems flagged ChatGPT conversations in which the shooter described gun violence scenarios.

Citing a February Wall Street Journal article about the company’s internal discussions, the complaint said members of the security team recommended contacting police after concluding that the user posed a credible and imminent threat of harm.

But the lawsuit alleges that Altman and other OpenAI leaders overruled the security team, and police were never called. It also alleges that although the attacker’s account was disabled, he was able to open a new account and continue using the platform to plan the attack.

Following the publication of the Wall Street Journal article, the company said the account was flagged by systems that determined “our models were being misused to further violent activity,” but that the activity did not meet its internal criteria for reporting to law enforcement.

Last week, a local Tumbler Ridge newspaper published an open letter in which Altman said he was “deeply saddened” that the account was not flagged to law enforcement.

In a blog post published Tuesday, OpenAI said it trains its models to reject requests that “might meaningfully enable violence,” notifies law enforcement when conversations suggest “an imminent and credible risk of harm to others,” and has mental health experts evaluate borderline cases. The company said it continually improves its models and detection methods based on usage and expert input.

The lawsuits seek an unspecified amount of damages and a court order requiring OpenAI to overhaul its security practices, including mandatory law enforcement dispatch protocols. Edelson said one of the victims first filed a lawsuit in Canadian court but dismissed the case to pursue her claims in California.

OPENAI FACES MULTIPLE LAWSUITS

The lawsuits related to the Tumbler Ridge attack follow the filing of multiple lawsuits against OpenAI in U.S. state and federal courts in recent months over allegations that ChatGPT facilitated harmful behavior, suicide and, in at least one case, a murder-suicide.

The cases, which are still in their early stages, will force courts to grapple with what role the AI platform may have played in encouraging violence and whether the company can be held liable for its own actions or those of its users.

OpenAI has denied the allegations in the lawsuits, arguing that the perpetrator in the murder-suicide case had a long history of mental illness.

Florida Attorney General James Uthmeier announced earlier this month a criminal investigation into ChatGPT’s role in the 2025 Florida State University shooting.

(Reporting by Ryan Patrick Jones and Diana Novak Jones, Editing by Alexia Garamfalvi and Lincoln Feast.)
