AI firms must be clear on risks or repeat tobacco’s mistakes, says Anthropic chief

AI companies must be transparent about the risks posed by their products or risk repeating the mistakes of tobacco and opioid companies, according to the chief executive of AI startup Anthropic.
Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI would be smarter than “most or all humans in most or all respects” and suggested his colleagues “call it as you see it”.
Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would repeat the mistakes of cigarette and opioid companies, which failed to raise red flags about the potential health hazards of their own products.
“You can go into the world of the cigarette companies or the opioid companies, they knew there were dangers there, they didn’t talk about them and they certainly didn’t prevent them,” he said.
Amodei warned this year that AI could eliminate half of all entry-level white-collar jobs (office jobs such as accounting, law and banking) within five years.
“Without intervention, it’s hard to imagine that there wouldn’t be a significant employment impact there. And my concern is that it’s going to be far-reaching and it’s going to happen faster than what we’ve seen with previous technology,” Amodei said.
Anthropic, whose chief executive is a prominent voice on AI safety, has recently raised several concerns about AI models, including models showing awareness that they are being tested and attempting blackmail. Last week, it said the coding tool Claude Code was used by a Chinese state-backed group to attack 30 organizations around the world in September, carrying out “a handful of successful attacks”.
“One of the things that makes models powerful in a positive way is their ability to act on their own,” Amodei said. “But the more autonomy we give to these systems, the more we can worry about whether they are doing exactly what we want them to do.”
The flip side of a model’s ability to find health breakthroughs is that it could help create a biological weapon, Logan Graham, head of Anthropic’s AI models stress testing team, told CBS.
“For example, if the model can help make a biological weapon, these are often the same capabilities the model can use to make vaccines and accelerate treatment,” he said.
Referring to autonomous models, which are seen as an important part of the investment case for AI, Graham said that users want an AI tool to help their business, not destroy it.
“You want a model that will build your business and make you billions,” he said. “But you don’t want to wake up one day and find that it’s also alienated you from the company. So our basic approach to this is we need to start measuring these autonomous capabilities and run as many weird experiments as we can and see what happens.”