
Pentagon Threatens to End Anthropic Work in Feud Over AI Terms

(Bloomberg) — The Pentagon has warned Anthropic PBC that it will terminate the company’s military contracts on Friday if the artificial intelligence startup does not meet government requirements for use of its technology, according to people familiar with the matter.

At a meeting between Chief Executive Officer Dario Amodei and Defense Secretary Pete Hegseth on Tuesday, US officials threatened to declare Anthropic a supply chain risk or invoke the Defense Production Act to use its AI software even if the company did not comply, sources said.

The ultimatum marks an escalation in the growing dispute between the Department of Defense and the AI startup over the company’s insistence on guardrails for the use of its Claude AI tool. If carried out, the Pentagon’s threat would jeopardize the $200 million worth of work Anthropic has agreed to do for the military.

At the meeting, Amodei laid out Anthropic’s conditions: that the US military refrain from using its products to autonomously target enemy combatants or conduct mass surveillance of US citizens, according to one of the people. The person emphasized that neither scenario has actually arisen in the military’s use of Anthropic’s tools.

“We have continued to have good faith discussions about our usage policy to ensure that Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” Anthropic said in a statement after the meeting.

Those who described the discussions did so on condition of anonymity because they were confidential. Axios previously reported the outcome of the meeting.

Anthropic, now valued at about $380 billion according to its latest round of funding, became the first AI company cleared within the US government to process classified material, and its Claude Gov tool has quickly become a preferred choice among Pentagon officials who appreciate its ease of use. It faces increasing competition from national security rivals OpenAI, Google’s DeepMind and Elon Musk’s xAI.

A US official said the Pentagon grew concerned that Anthropic was not supportive of US objectives after hearing that the company had raised questions about how its artificial intelligence was used during the special forces operation that captured Venezuelan President Nicolas Maduro in early January. Anthropic disputed the Pentagon’s characterization that it had raised questions about the Maduro raid.

“Anthropic has not discussed the use of Claude for specific operations with the War Department,” the company said Monday through a spokesperson, referring to the Trump administration’s preferred name for the Department of Defense. “Furthermore, we have not discussed this issue or raised concerns with any industry partners, beyond routine discussions on strictly technical matters.”

Anthropic positions itself as a company focused on the responsible use of artificial intelligence and on preventing the technology’s most devastating harms. It created Claude Gov specifically for US national security work and aims to serve government clients within its ethical boundaries.

The conflict erupted just weeks after the Pentagon released a new strategy on artificial intelligence that called for making the military an “AI-first” force by increasing experimentation with frontier models and reducing bureaucratic hurdles. Specifically, the approach encouraged the Department of Defense to select models “free from usage policy restrictions that might limit legitimate military applications.”

