‘Too powerful for the public’: inside Anthropic’s bid to win the AI publicity war

This week, the AI company Anthropic said it had created an AI model so powerful that it felt it would be irresponsible to release it to the public.

The US treasury secretary, Scott Bessent, summoned the heads of major banks to discuss the Mythos model. The Reform UK MP Danny Kruger wrote to the government demanding it engage with the AI firm Anthropic, warning that the new frontier model, Claude Mythos, “could present devastating cybersecurity risks to the UK”. X went berserk.

Others, including the renowned AI critic Gary Marcus, were more skeptical: “Dario [Amodei] has far more technical skills than Sam [Altman], but he seems to have graduated from the same school of hype and hyperbole,” he said, referring to the CEOs of Anthropic and rival OpenAI.

It’s unclear whether Anthropic is building the machine god. What’s more obvious is that the San Francisco-based startup, widely seen as the “responsible” AI company, excels at marketing.

In recent months, Anthropic has been featured in a 10,000-word profile in the New Yorker, two articles in the Wall Street Journal and on the front cover of Time magazine, which features Amodei’s face in movie poster style above the Pentagon and U.S. defense secretary Pete Hegseth.

Amodei and Anthropic co-founder Jack Clark appeared on two separate New York Times podcasts in February and discussed questions like whether their machines are conscious and whether they could soon “upend the economy.” The company’s “resident philosopher” spoke to the WSJ about whether Claude, a commercial product used to trade cryptocurrency and identify missile targets, has a “sense of self.”

One AI critic said Dario Amodei ‘has far more technical skills’ than OpenAI CEO Sam Altman, ‘but he seems to have graduated from the same school of hype and hyperbole.’ Photo: Denis Balibouse/Reuters

This all comes amid a tussle between Anthropic and the US defense department; although Anthropic created the AI tool the Pentagon used to strike Iran, it has still managed to look far better than OpenAI, which offered to help the US military do the same – perhaps with fewer guardrails.

Anthropic media executive Danielle Ghiglieri was triumphant on LinkedIn. “I’m endlessly proud to work at Anthropic,” she said of the company’s Time cover, tagging the journalists involved in a post about the “mad dash” to make the story a reality.

Watching a CBS 60 Minutes episode featuring Amodei was “pretty enjoyable … one of those pinch-me moments”, she said. “It wasn’t just the platform that made this meaningful. It was seeing the story we wanted to tell actually happen.”

Of the New Yorker profile by the journalist Gideon Lewis-Kraus, she wrote: “I’d be lying if I said I wasn’t nervous during our first face-to-face meeting… Working with someone of Gideon’s caliber means being pushed to express ideas you’re still forming and being comfortable with that discomfort.”

(“I’m sure they all say the same thing about you,” my editor said.)

Other tech PR professionals have noticed.

“They are clearly having a moment right now, but companies developing world-changing technology deserve equal scrutiny,” one of them said. “Last week they accidentally leaked their own source code, then this week they claimed to manage cyber threats with a powerful new model that only they control. Any other major tech firm would be a laughing stock.”

Anthropic accidentally released some of Claude’s internal source code at the beginning of April. “No sensitive customer data or personally identifiable information was included or disclosed,” the company said.

What does all this tell us about Anthropic’s undoubtedly powerful Mythos?

Dr Heidy Khlaaf, chief artificial intelligence scientist at the AI Now Institute, said the model’s capabilities are “unproven”. “Publishing a marketing post using deliberately vague language that obscures evidence raises the question of whether they are trying to raise more investment without scrutiny,” she said.

“Mythos is a real breakthrough, and Anthropic was right to take it seriously,” said offensive cybersecurity expert Jameison O’Reilly. But he said some of Anthropic’s claims, such as finding thousands of “zero-day vulnerabilities” in major operating systems, matter less for real-world cybersecurity than they might sound.

A zero-day vulnerability is a flaw in software or hardware that is unknown to its developers – so called because they have had “zero days” to fix it before it can be exploited.

“We have spent over 10 years gaining authorized access to hundreds of organizations – banks, governments, critical infrastructure, global corporations,” O’Reilly said. “In those 10 years, across hundreds of engagements, there have been very few times that we needed a zero-day vulnerability to achieve our goal.”

Protesters in San Francisco marched on the offices of Anthropic and OpenAI last month, urging the AI companies to stop development. Photo: Manuel Orbegozo/Reuters

Other reasons may have contributed to Anthropic’s decision not to publish Mythos.

The company has limited resources and appears to be struggling to provide enough computing capacity for all of its subscribers to use its models. It has introduced usage limits on the wildly popular Claude, and it was recently rumored that users will need to purchase extra capacity on top of their subscriptions to run third-party tools such as OpenClaw. It may simply not have the infrastructure to support the launch of an extravagant new creation.

Like OpenAI, Anthropic is in a race to raise billions of dollars and capture a — still poorly defined — market of people who might rely on chatbots as friends, romantic partners, or highly personalized assistants, and companies that might use them to replace human workers.

But the differences between these products are marginal and impressionistic, resting mostly on hard-to-measure qualities like “sense of self” and “soul” – or rather, what passes for these in an AI agent. The battle is for hearts and minds.

“Mythos is a strategic announcement that shows they are open for business,” Khlaaf said, adding that Anthropic’s refusal to release the model prevents independent experts from evaluating the company’s claims.

She suggested that “we may be seeing the same bait-and-switch playbook used by OpenAI, where security is a public relations tool to gain public trust before profits are prioritized”, adding: “Anthropic’s promotion has managed to hide this shift better than its competitors’.”
