
Judge presses DOD on why Anthropic’s Claude was blacklisted

The Pentagon’s decision to blacklist Anthropic’s Claude AI models “appears to be an attempt to cripple” the company, U.S. District Judge Rita Lin said Tuesday.

Anthropic appeared in San Francisco federal court on Tuesday and asked Lin to temporarily pause the Pentagon’s blacklisting and President Donald Trump’s directive banning federal government agencies from using its technology.

The company said the injunction would not require the U.S. government to use its models or prevent it from switching to another AI provider.

During the hearing, Lin asked Anthropic’s lawyers and the U.S. government a series of questions about the details of the case. She said her concern was whether Anthropic “will be punished for criticizing the government’s contract position in the press.”

“Everyone, including Anthropic, agrees that the War Department is free to stop using Claude and look for a more tolerant AI vendor,” Lin said. “I don’t think that’s what this case is about. I think the question in this case is very different, which is whether the government violated the law.”

Lin said she expects to issue an order on Anthropic’s motion within the next few days.

If the injunction is granted, the artificial intelligence startup will be able to continue doing business with government contractors and federal agencies while its lawsuit against the Trump administration plays out in court. Without it, the company said in its filings, it could lose billions of dollars in business and suffer further damage to its reputation.

In early March, the Department of Defense designated Anthropic a supply chain risk, a label that means use of the company’s technology threatens U.S. national security. If the designation is allowed to stand, defense contractors such as Amazon, Microsoft and Palantir would be required to confirm that they are not using Claude in their military work.

U.S. government attorney Eric Hamilton said Tuesday that the Department of Defense is “concerned that Anthropic may take action to sabotage or disrupt IT systems in the future,” and so the company has been designated a supply chain risk.

“What happens if Anthropic installs a kill switch or functionality that changes the way it operates? This is an unacceptable risk,” Hamilton said.

Later in the hearing, Lin pressed Hamilton on when the Department of Defense considers a supply chain risk designation to be the appropriate course of action.

“But what I’m hearing from you is that all it takes is for an IT vendor to be stubborn, insist on certain terms and ask annoying questions, and that could be considered a supply chain risk because they might not be trustworthy,” Lin said. “That seems like a pretty low bar.”

Anthropic argued that there was no basis for the company to be considered a supply chain risk.

The company also said it was unfairly retaliated against by the Department of Defense for requesting that it not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use AI models for such purposes.

“This is something that has never been done before for an American company,” Anthropic’s attorney, Michael Mongan, said during the hearing. “That’s a very narrow mandate. It doesn’t apply here and it’s not a normal way to respond to the concerns raised by the other side.”

Before the dispute broke out in late February, Anthropic was one of the first AI companies to partner with many federal agencies as the government sought to quickly upgrade its systems and capabilities with the latest AI technology.

Anthropic signed a $200 million contract with the Pentagon in July, becoming the first AI lab to deploy its technology across the agency’s classified networks.

But talks stalled in September as the company negotiated Claude’s deployment to GenAI.mil, the Department of Defense’s AI platform, over how the military could use the models.

The department has insisted on unrestricted access to the company’s technology for all lawful purposes, and Hamilton said Tuesday that Anthropic went beyond the normal scope of a contractor.

“Anthropic is not just being stubborn. It’s not just refusing to accept contract terms. Instead, it’s expressing concerns about [DOD] and how [DOD] uses its technology in military missions,” Hamilton said.

After Anthropic and the Department of Defense failed to reach an agreement in February, Trump published a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology.

“WE will decide the fate of our country, NOT an out-of-control, Radical Left AI corporation run by people who have no idea what the real world is,” Trump said.

WATCH: Anthropic filed a lawsuit against the Trump administration over the Pentagon’s blacklisting


CNBC’s Jeff Kopp and Dan Mangan contributed to this story.

