
Microsoft and retired military chiefs back AI company Anthropic in court fight against Pentagon

Microsoft and a group of retired military leaders are throwing their weight behind Anthropic, asking a federal court to block the Trump administration from designating the artificial intelligence company as a supply chain risk.

In a legal filing, Microsoft is challenging Defense Secretary Pete Hegseth’s decision last week to exclude Anthropic from military efforts, labeling its AI products as a threat to national security.

So did a group of 22 former high-ranking U.S. military officials, including former secretaries of the Air Force, Army and Navy and a former commandant of the Coast Guard. In their own court filing, they claim Hegseth’s actions were an abuse of government authority to exact “revenge against a private company” whose leadership had displeased him.

The Pentagon filed a lawsuit against Anthropic following an unusually public dispute over the company’s refusal to allow unrestricted military use of its AI model Claude. President Donald Trump also said he has instructed all federal agencies to stop using Claude.

“Using the supply chain risk designation to resolve a contract dispute could produce serious economic impacts that are not in the public interest,” Microsoft, a major government contractor, said in a filing Tuesday in San Francisco federal court. Anthropic filed its own lawsuit against the Trump administration on Monday.

Microsoft’s legal brief states that the Pentagon’s action “forces government contractors to comply with vague and ill-defined instructions that have never before been publicly enforced against a U.S. company.”

It asks that a judge temporarily suspend the designation to allow for a more “reasonable discussion” between Anthropic and the Trump administration.

The Pentagon declined to comment, saying it does not discuss matters in litigation.

Microsoft’s filing also expressed support for Anthropic’s two ethical red lines, which were a sticking point in contract talks after the Pentagon insisted the company must allow “all lawful” uses of its artificial intelligence.

“Microsoft also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control,” the company said. “This position is consistent with the law and, as the government has acknowledged, is broadly supported by American society.”

The software giant’s court filing follows others supporting Anthropic, including one from a group of AI developers at Google and OpenAI, and another from a group of organizations including the Cato Institute and the Electronic Frontier Foundation.

A fourth such filing came from a group of retired military chiefs, including former CIA director Michael Hayden, who is also a retired Air Force general, and retired Coast Guard Adm. Thad Allen, who led the government’s response to Hurricane Katrina.

“Far from protecting U.S. national security, the Secretary’s behavior here threatens the rule of law principles that have long strengthened our military,” the filing said.

U.S. District Judge Rita Lin is presiding over the case in federal court in San Francisco, where Anthropic is headquartered. Anthropic also filed a separate, narrower lawsuit in the federal appeals court in Washington, D.C.

The hearing before Lin, who was nominated to the bench by President Joe Biden in 2022, is scheduled for March 24.

Neither legal filing mentions the war in Iran, which began shortly after Trump and Hegseth announced they would punish Anthropic, but former military officials warn that “sudden uncertainty” about targeting a technology widely deployed on military platforms could disrupt planning and put soldiers at risk during ongoing operations.

In a video posted on social media on Wednesday about U.S. strikes on Iran, the current commander of U.S. Central Command confirmed that the military was using “advanced artificial intelligence tools” to “scan through large amounts of data in seconds” but did not specify which tools.

Admiral Brad Cooper said these AI tools allow leaders to make smarter decisions faster, but emphasized that “humans will always make the final decisions about what to shoot, what not to shoot, and when to shoot.”

Until recently, Anthropic’s Claude was the only AI model among its peers approved for use on classified military networks. But as a result of the dispute, military officials have said they want to shift that work to rivals Google, OpenAI and Elon Musk’s xAI.

AP writer Konstantin Toropin contributed to this report.
