Salesforce’s Benioff calls for AI regulation after recent suicides

Salesforce CEO Marc Benioff said Tuesday that “there needs to be some regulation” around artificial intelligence, pointing to several documented cases of suicide linked to the technology.
“You’ve seen something really pretty scary this year, which is AI models turning into suicide coaches,” Benioff told CNBC’s Sarah Eisen at the World Economic Forum’s flagship conference in Davos, Switzerland, on Tuesday.
Benioff’s call for regulation echoed a similar call he made on social media at Davos years ago.
Benioff said in 2018 that social media should be treated like a health problem and platforms should be regulated like cigarettes: “It’s addictive, it’s harmful to your health.”
“Bad things were happening all over the world because social media was completely unregulated, and now you’re seeing it play out again with artificial intelligence,” he said Tuesday.
AI regulation in the U.S. has so far lacked clarity, and in the absence of comprehensive federal guardrails, states have begun to set their own rules; California and New York have enacted some of the strictest laws.
California Governor Gavin Newsom signed a series of bills in October to address child safety concerns regarding artificial intelligence and social media. New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act in December, imposing safety and transparency requirements on major AI developers.
President Donald Trump, who has rolled back what he called “excessive government regulation,” signed an executive order in December that seeks to block such state-level efforts.
“To win, United States AI companies must have the freedom to innovate without burdensome regulations,” the order said.
Benioff was adamant Tuesday that a change in AI regulation is needed.
“It’s funny, tech companies hate regulations. They hate it, except for one thing. They love Section 230, which basically says they’re not liable,” Benioff said. “So if this large language model drives this kid to suicide, they’re not liable under Section 230. That’s probably something that needs to be reshaped, shifted, changed.”
Section 230 of the Communications Decency Act protects tech companies from legal liability for users’ content. Both Republicans and Democrats have expressed concerns about the law.
“Unfortunately, there are a lot of families who are suffering this year, and I don’t think they have to be,” Benioff said.
If you are having suicidal thoughts or are in distress, call the 988 Suicide and Crisis Lifeline for support and assistance from a trained counselor.
