Anthropic vs Pentagon: Dario Amodei-led AI firm vows legal action after Pete Hegseth’s ‘supply chain risk’ designation

AI startup Anthropic said on Friday that it would challenge in court the Trump administration’s decision to declare the company a supply chain risk.
The announcement came just hours after Defense Secretary Pete Hegseth issued the order in a statement on X, following a directive from President Donald Trump instructing every federal agency to remove Anthropic and its Claude models from its pipeline.
“This action comes after months of negotiations that remained deadlocked over two exceptions we requested for the legal use of our AI model Claude: mass domestic surveillance of Americans and fully autonomous weapons,” Anthropic said in a statement shortly after Hegseth’s announcement.
Anthropic noted that it has not yet received official communication from the Department of Defense or the White House regarding the status of the negotiations.
Anthropic had said it wanted narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. But the two sides were locked in a stalemate until Amodei refused to comply with the Trump administration’s request in a statement Thursday, prompting directives from the President and Hegseth.
The Pentagon had previously declared Anthropic a supply chain risk after President Donald Trump ordered US government agencies to stop using the artificial intelligence giant’s products.
In a statement Friday, Anthropic said it tried to negotiate in “good faith” with the Pentagon and supports the legal use of artificial intelligence, except for two cases it flagged.
“To our knowledge, these exceptions have not impacted a single government mission to date,” Anthropic said.
Describing the supply chain risk designation as an “unprecedented action”, the company noted that the label has historically been applied to foreign firms; this is the first time an American company has been designated this way.
Why did Anthropic disagree with the Pentagon?
In its statement, Anthropic once again explained the reason for its conflict with the Pentagon.
“We do not believe today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians,” Anthropic emphasized.
The company also stated that it believes mass domestic surveillance of Americans “constitutes a violation of fundamental rights.”
“We believe this designation would be both legally invalid and set a dangerous precedent for any American company negotiating with the government,” Anthropic said.
Maintaining its previous stance, the AI startup said that no amount of “intimidation or punishment” from the War Department would change its position on the issue.
“We will challenge any definition of supply chain risk in court.”
What does it mean for customers?
Anthropic detailed the consequences of the Department of Defense’s action on its existing customers, including those under contract with the federal government.
“The Secretary does not have the legal authority to make this designation. Legally, a supply chain risk designation under 10 USC 3252 can only cover the use of Claude as part of War Department contracts; it cannot affect how contractors use Claude to serve other customers,” the company said.
In practice, this means there will be no impact on the use of any of Anthropic’s products, including the API and claude.ai, by individual customers or those with a commercial contract with Anthropic.
“If you are a War Department contractor, this designation (if formally accepted) will affect your use of Claude for War Department contract work only. Your use for any other purpose will not be affected,” the company said.
The Pentagon wants to use Anthropic’s Claude chatbot for any purpose within legal limits, without any usage restrictions imposed by Anthropic. The firm has insisted that Claude not be used for mass surveillance of Americans or in fully autonomous weapons operations.
