Pentagon vs Anthropic: Defense Department formally notifies AI startup it is deemed supply chain risk
The Pentagon has formally notified Anthropic PBC that it has determined the company and its products pose a risk to the US supply chain, according to a senior defense official, escalating a dispute over limits on how the startup’s AI can be used.
“DOW has formally notified Anthropic leaders, effective immediately, that the company and its products are considered a supply chain risk,” the official told Bloomberg News on Thursday, using the Department of War acronym that Defense Secretary Pete Hegseth now prefers for the Defense Department.
While the defense official said the determination takes effect immediately, Anthropic’s Claude AI tools remain in active use by the US military in operations against Iran, according to a person familiar with the matter. In his warning to the company last Friday, Hegseth outlined a six-month transition period for shifting artificial intelligence work to other providers.
Spokespeople for Anthropic and the Pentagon had no comment on the matter. The defense official did not say when or how the Pentagon notified the company.
Anthropic had previously vowed to challenge the Pentagon’s supply chain risk determination in court.
The Pentagon’s finding threatens to disrupt both the company and the military, which relies heavily on Anthropic’s software. Until recently, Anthropic provided the only AI system that could run in the Pentagon’s secret cloud. The Claude Gov tool has become a preferred option among defense personnel due to its ease of use.
“It’s a good capability,” and removing it “will be painful for everyone involved,” said Lauren Kahn, senior research analyst at Georgetown University’s Center for Security and Emerging Technology.
Anthropic Chief Executive Dario Amodei had been negotiating for weeks with Emil Michael, the undersecretary of defense for research and engineering, to draft a contract governing the Pentagon’s access to Anthropic technology.
But talks broke down last week after the startup demanded assurances that its artificial intelligence would not be used for mass surveillance of Americans or the deployment of autonomous weapons. Hegseth then announced in a post on X on Friday that Anthropic poses a supply chain risk, a designation more typically applied to companies tied to US adversaries.
It was not immediately clear what authority the Pentagon used to classify the company as a supply chain threat. Anthropic said in a statement last week, responding to Hegseth’s social media post, that it expects the move to eventually be made under Section 3252 of the law governing the US armed forces.
“From the beginning, this was about a fundamental principle: that the military should be able to use technology for all lawful purposes,” the defense official said Thursday. “The Army will not allow a vendor to insert itself into the chain of command, restricting the lawful use of a critical capability and putting our warfighters at risk.”
The move comes as the US military relies on Claude in its Iran campaign, and as American forces turn to a range of artificial intelligence tools to rapidly process massive amounts of data for their operations.
The Maven Smart System, made by Palantir Technologies Inc. and widely used by military operators in the Middle East, counts Anthropic’s Claude among the large language models installed in the system, according to people familiar with the matter. They say Claude performs well and has become central both to US operations against Iran and to accelerating Maven’s AI efforts.
Anthropic, currently valued at $380 billion, is on track for an annualized revenue run rate of nearly $20 billion based on current performance, more than double its rate from late last year. But the Pentagon dispute has clouded the company’s outlook.
It remains to be seen how the Pentagon’s announcement will affect sales to corporate customers, long Anthropic’s core business. Meanwhile, the company is gaining traction among everyday users: its main app recently topped Apple Inc.’s download charts, a sign of growing consumer adoption.
Key Takeaways
- The Pentagon’s designation of Anthropic as a supply chain risk could disrupt military operations that rely on Claude AI.
- Anthropic’s plan to challenge the designation highlights ongoing tensions between tech firms and government demands on how their products are used.
- The dispute underscores the growing role of artificial intelligence in modern warfare and the complexity of military contracting.