Anthropic vs Pentagon: Court grants injunction, questions govt’s move to ‘cripple’ AI firm
A federal judge in San Francisco has issued a preliminary injunction in favor of artificial intelligence company Anthropic, marking a significant early victory in the company’s legal battle with the Trump administration and the Pentagon.
The ruling temporarily halts U.S. government actions that effectively banned federal use of Anthropic’s artificial intelligence tools while the broader legal battle unfolds.
Court Blocks Pentagon Blacklisting Amid Legal Scrutiny
Judge Rita Lin issued the decision Tuesday after a contentious hearing in which lawyers for both Anthropic and the U.S. government presented arguments about the legality of designating the company as a national security risk. The measure pauses enforcement of restrictions that prohibit federal agencies from using Anthropic’s Claude models.
The case focuses not just on the government’s acquisition decisions, but also on whether it overstepped legal boundaries in its treatment of the company. A final ruling, however, is months away.
During the hearings, Lin expressed skepticism about the reasoning behind the government’s actions, suggesting that the measures appeared punitive rather than procedural.
“One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it was murder, but it appears to have been an attempt to cripple Anthropic,” Lin said.
‘Supply Chain Risk’ Label Sparked Alarm in the Industry
The dispute began after the Department of Defense officially designated Anthropic a “supply chain risk,” a classification historically reserved for foreign adversaries. Defense Secretary Pete Hegseth had previously stated that the company’s technology posed a threat to U.S. national security.
The designation has far-reaching consequences. Defense contractors, including Amazon, Microsoft and Palantir, are now required to certify that they are no longer using Anthropic’s Claude models for military-related work. The fact that Anthropic is the first U.S.-based company to be publicly subjected to such a classification has raised alarm across the technology industry.
Anthropic challenged the designation under two separate legal frameworks (10 USC § 3252 and 41 USC § 4713), necessitating parallel legal proceedings, including an appeal in Washington, DC.
Trump Directive Intensifies Conflict Over AI Governance
The legal conflict escalated after President Donald Trump ordered federal agencies to stop using Anthropic’s technology. A post on Truth Social mandated an immediate halt to this practice, along with a six-month phase-out period for organizations like the Department of Defense.
“WE will decide the fate of our country, NOT an out-of-control Radical Left AI company run by people who have no idea what the real world is,” Trump said.
The administration’s stance surprised many in Washington, especially given Anthropic’s prior integration into sensitive defense systems. The company was among the first to use AI models in classified Pentagon networks and was widely considered a reliable partner to work with defense contractors.
Contract Collapse Reveals Ethical Fault Lines in Military AI
At the heart of the dispute is the collapse of a $200 million Pentagon contract signed in July. Negotiations broke down months later over fundamental disagreements about the permissible uses of Anthropic’s technology.
The War Department reportedly sought unrestricted access to Anthropic’s AI models for all lawful applications. Anthropic, by contrast, sought clear guarantees that its systems would not be used for fully autonomous weapons or domestic mass surveillance.
This stalemate reflects a broader tension in the AI industry, where ethical concerns increasingly intersect with national security priorities.
Legal Battle Shifts Focus to U.S. Government Authority
Judge Lin emphasized that the case was not about the government’s right to choose its contractors, but about the legality of the actions it took to exclude Anthropic.
“Everyone, including Anthropic, believes that the Department [of Defense] is free to stop using Claude and look for a more permissive AI vendor,” Lin said at Tuesday’s hearing. “I don’t think that’s what this case is about. I think the question in this case is very different, which is whether the government violated the law.”
The decision shields Anthropic, for now, from actions the company claims would cause serious financial and reputational harm. But broader questions touching on executive power, AI governance, and the limits of national security claims remain unresolved.
As the case progresses, it is poised to become a decisive legal test of how governments regulate artificial intelligence companies operating at the frontier of defense technology.