Microsoft backs Anthropic in court: Top 10 key updates in Claude maker’s lawsuit against US government blacklist

The conflict between Anthropic and the US government has now reached the courts. Anthropic, the maker of Claude, had been in talks with the Pentagon for weeks about using its AI in classified environments, but the talks broke down late last month when Anthropic refused to accept two US Department of Defense (DoD) red lines.
Highlights of the U.S. v. Anthropic case:
1) Anthropic filed a dual lawsuit against the Department of Defense and the wider administration after the US government decided to label it a ‘supply chain risk’. The lawsuit claims the Trump administration’s decision to blacklist the AI startup was an attempt to punish it for its AI guardrails.
“The federal government violated the U.S. Constitution and law by retaliating against a leading AI developer for adhering to a protected viewpoint on an issue of great public importance, namely AI safety and the limitations of its AI model,” the Anthropic lawsuit said.
2) The lawsuits were filed after Defense Secretary Pete Hegseth officially labeled Anthropic a supply chain risk last week.
3) Anthropic told the judge on Tuesday that it could lose billions of dollars in revenue this year because of the Trump administration’s decision to label it a supply chain risk.
4) Appearing before U.S. District Judge Rita F. Lin at a hearing in San Francisco, Anthropic’s attorney argued that the federal government’s actions led more than 100 corporate customers to contact the company to express concerns about maintaining their contracts.
5) Anthropic’s lawyer also claimed that the US government was reaching out to its customers and pressuring them to stop doing business with the company.
“And all of this is the foreseeable consequence of the defendants’ actions and the uncertainty they create, as well as the fact that the defendants are affirmatively reaching out to our customers, pressuring them to stop working with Anthropic and switch to other AI companies,” the company said in court.
6) Microsoft showed its support for Anthropic in court. The Pentagon’s blacklisting of Anthropic “will have negative consequences for the entire technology industry and American business,” the company warned in a recent filing.
“This is not the time to put at risk the AI ecosystem that the Administration helps support,” an attorney for the company said in court.
7) Just a day earlier, 37 OpenAI and Google employees had also filed a joint brief with the court in support of Anthropic.
“The government’s designation of Anthropic as a supply chain risk was an inappropriate and arbitrary exercise of power that will have serious consequences for our industry,” the summary said.
8) Sam Altman has also previously opposed Anthropic’s designation as a supply chain risk. He said OpenAI’s own deal with the Pentagon was made as a way to defuse tensions.
9) Emil Michael, the undersecretary of defense for research and engineering, said in an interview with Bloomberg that he sees little chance of continuing negotiations with Anthropic about using AI tools for classified military work.
10) Claude is currently the only AI model the US uses in classified military use cases. AI was reportedly used by the United States both in the capture of Venezuelan President Nicolás Maduro and in the recent strikes on Iran.