
Anthropic refuses Pentagon’s demand in AI safeguards dispute

<span>STORY: Anthropic says it won’t agree to the Pentagon’s demand to remove safeguards from its AI systems.</span><span>The refusal comes despite threats to deem the company a “supply chain risk” and remove it from Defense Department systems, putting a multimillion-dollar contract at risk.</span><span>The dispute stems from the AI startup’s refusal to remove safeguards that prevent its technology from being used to autonomously target weapons and conduct surveillance in the United States.</span><span>:: File</span><span>In a statement on Thursday, Anthropic CEO Dario Amodei emphasized that the company opposes the use of AI models for mass domestic surveillance.</span><span>He also said “frontier AI systems are not reliable enough to operate fully autonomous weapons.”</span><span>Earlier in the day, Pentagon spokesman Sean Parnell said in a post on X that the department had no interest in using artificial intelligence to conduct mass surveillance on Americans…</span><span>Nor does it want to use artificial intelligence to develop autonomous weapons that operate without human intervention.</span><span>What the department wanted, he said, was “to allow the Pentagon to use Anthropic’s model for all lawful purposes.”</span><span>Parnell said the company has until 5:01 p.m. ET on Friday to make a decision.</span><span>Anthropic, backed by Google and Amazon, has a contract with the department worth up to $200 million.</span><span>More than 200 Google and OpenAI employees backed Amodei’s stance in an open letter.</span><span>None of the companies immediately responded to requests for comment.</span>
