
Anthropic and Pentagon head to court as AI firm seeks end to stigmatizing supply chain risk label

SAN FRANCISCO (AP) — Artificial intelligence company Anthropic is asking a federal judge Tuesday to temporarily halt the Pentagon’s “unprecedented and stigmatizing” designation of the company as a supply chain risk.

The hearing, scheduled for Tuesday in California federal court, marks a critical step in the tug-of-war between Anthropic and the Trump administration over how the company’s artificial intelligence technology could be used in warfare.

Anthropic filed the lawsuit earlier this month seeking to stop what it called an “unlawful retaliatory campaign” by the Trump administration over the company’s refusal to allow unrestricted military use of its technology.

The company is seeking an emergency order from U.S. District Judge Rita Lin that would temporarily reverse the Pentagon’s decision to designate the AI company as a “supply chain risk.” Anthropic also aims to reverse President Donald Trump’s order directing all federal employees, not just those in the military, to stop using the AI chatbot Claude.

Lin is presiding over the case in federal court in San Francisco, where Anthropic is headquartered. The AI firm also filed a separate, narrower lawsuit in the federal appeals court in Washington, D.C.

Lin sent both sides a series of questions she wanted them to answer at Tuesday’s hearing, including questions about inconsistencies between Defense Secretary Pete Hegseth’s official directive declaring Anthropic a potential threat to national security and what he posted about it on social media.

Anthropic said it was trying to restrict the use of its technology for mass surveillance of Americans and in fully autonomous weapons. Hegseth and other senior officials publicly insisted that the company must accept “all legal” uses of Claude, threatened penalties if Anthropic did not comply, and denounced the firm and its CEO, Dario Amodei, on social media.

When Amodei refused to comply, Trump announced on February 27 that he had ordered all federal employees to stop using Anthropic’s products, calling it a “radical left, woke company” that was putting troops at risk. He gave the Pentagon a longer period of six months to phase out Anthropic technology already deployed on stealth military platforms, including those used in the Iran war.

Anthropic’s lawsuit claims the government’s actions violated its First Amendment and due process rights.

“Simply put, the Executive Branch is using its powers to punish a major American corporation for the sin of expressing its views on a matter of deep public concern,” the company said in a legal filing last week.

Justice Department lawyers countered in their own court filing last week that the Trump administration’s actions targeted Anthropic’s business conduct, not free speech rights.

They argued that Anthropic’s conduct during contract negotiations caused the Pentagon to “question whether Anthropic represents a reliable partner” and whether its continued access to combat operations poses “an unacceptable risk to national security.”

“Ultimately, AI systems are extremely vulnerable to manipulation, and if Anthropic – in its sole discretion – feels that its corporate ‘red lines’ have been crossed, Anthropic may seek to disable its technology or preemptively alter the behavior of its model before or during ongoing combat operations,” the Trump administration filing said.

The Trump administration’s court filings include an undated memorandum from U.S. Undersecretary of Defense Emil Michael, the Pentagon’s chief technology officer and a former Uber executive.

But it’s unclear when Michael wrote the memo expanding on the Pentagon’s reasoning for labeling Anthropic products as risky. Lin wants more details from the Pentagon about the memo’s timing.

O’Brien reported from Providence, Rhode Island.
