
Pentagon ‘fed up’ with Anthropic pushback over Claude AI model use by military, may cut ties, says report

The US Department of Defense has reportedly threatened to cut ties with the artificial intelligence company Anthropic over the company’s insistence on restricting how the US military may use its AI models, according to an Axios report citing a Trump administration official.

The Pentagon is reportedly “fed up” after months of negotiations in which Anthropic resisted the US government’s push for full military use of the company’s tools “for all lawful purposes.” That would include “the most sensitive areas such as weapons development, intelligence gathering and battlefield operations,” according to the report.

Reuters, which reported on the same subject, could not confirm the development.

According to The Wall Street Journal, Anthropic’s contract with the Pentagon is worth approximately $200 million.


Why does the Pentagon want full control over the use and applications of AI models?

The official added that Anthropic’s two sticking points are fully autonomous weapons and mass surveillance of Americans. Notably, the Pentagon has contracts with Anthropic, Alphabet (Google), OpenAI, and Elon Musk’s xAI.

The source told Axios that there was “a significant amount of gray area around what does and doesn’t fall within” the categories in dispute, and that the Pentagon is not willing to negotiate with Anthropic on a case-by-case basis or risk having the Claude AI model unexpectedly disrupt operations.

On whether the department could cut ties with the company, the official said: “Everything is on the table… but if we think this is the right answer, they will have to be replaced on an orderly basis.”

In a statement to Axios, Anthropic said it was “committed to using frontier AI to support U.S. national security.” The company’s terms of use explicitly prohibit using Claude to facilitate violence, weapons development or surveillance.


Pentagon vs Anthropic: Did Claude’s reported use in Maduro’s capture raise concerns?

Last month, Reuters reported that Anthropic and the Pentagon were clashing over safeguards that would prevent the government from using the AI model for autonomous weapons targeting and domestic surveillance in the United States.

Specifically, on February 14, the WSJ reported that Anthropic’s Claude AI was used by the United States during the operation to capture former Venezuelan President Nicolás Maduro. The capability reportedly stems from Anthropic’s partnership with Palantir, whose tools are widely used by the US Department of Defense and federal law enforcement.

An Anthropic spokesperson told the WSJ: “We cannot comment on whether Claude or any other AI model is being used for any specific operations, classified or otherwise. Any use of Claude (whether in the private sector or across government) is required to comply with our Use Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”


Could the US military replace Claude with other AI players?

According to the Axios report, a quick replacement would be difficult because other models are not yet deployed on the same specialized government networks. The official said “other model companies were right behind Claude,” and the report noted that Claude was the first model introduced onto the Pentagon’s classified networks.

Additionally, ChatGPT (OpenAI), Gemini (Google) and Grok (xAI) are used in unclassified settings. The report said these companies agreed to relax the usual safeguards they apply when working with the Pentagon, and that negotiations were continuing to bring them onto classified networks. On whether they accepted the “all lawful purposes” terms, one did, while the other two “showed more flexibility than Anthropic,” the official said.

A statement from Anthropic’s spokesperson to Axios reiterated the company’s commitment to national security: “That’s why we were the first pioneering AI company to put our models on secret networks and provide customized models for national security customers.”

Key Takeaways

  • The Pentagon is reportedly upset with Anthropic’s restrictions on using AI models for military applications.
  • Anthropic’s insistence on some limits on military use, particularly on autonomous weapons, conflicts with the Pentagon’s demands.
  • Anthropic’s $200 million contract could be affected by its pushback on autonomous weapons and domestic surveillance.
