Anthropic says it will not agree to the Pentagon's request to eliminate safeguards in its AI systems, despite threats to deem the company a "supply chain risk" and remove it from Defense Department systems, putting a multimillion-dollar contract at risk. The dispute stems from the AI startup's refusal to remove safeguards that prevent its technology from being used to autonomously target weapons or to conduct surveillance in the United States.

In a statement on Thursday, Anthropic CEO Dario Amodei emphasized that the company opposes the use of AI models for mass domestic surveillance. He also said "frontier AI systems are not reliable enough to operate fully autonomous weapons."

Earlier in the day, Pentagon spokesman Sean Parnell wrote on X that the department has no interest in using artificial intelligence to conduct mass surveillance on Americans, nor does it want to use artificial intelligence to develop autonomous weapons that operate without human intervention. What the department wants, he said, is "to allow the Pentagon to use Anthropic's model for all lawful purposes." Parnell said the company has until 5:01 p.m. ET on Friday to make a decision.

Anthropic, backed by Google and Amazon, has a contract with the department worth up to $200 million. More than 200 Google and OpenAI employees supported Amodei's stance in an open letter. None of the companies immediately responded to requests for comment.