Altman says agreement reached with Pentagon to deploy OpenAI models for classified work: Did Anthropic's refusal clear the way?

OpenAI chief Sam Altman said on Saturday that the company had reached an agreement with the US Department of Defense to deploy its models on the Pentagon’s secret network.
In a post on social media, he wrote: "We are committed to serving all of humanity to the best of our ability. The world is a complex, messy and sometimes dangerous place."
Altman: ‘AI safety, distribution of benefits’ are core missions
According to Altman, the ChatGPT maker considers "AI safety and the broad distribution of its benefits" as its core mission. He added: "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems."
The language echoes the sticking points in the conflict between Claude maker Anthropic and the Pentagon, which has pushed for full military use of AI tools "for all lawful purposes," including "the most sensitive areas such as weapons development, intelligence gathering, and battlefield operations," according to an Axios report.
However, Altman stated that the principles were not compromised. "The DoW accepts these principles, they are reflected in law and policy, and we put them into our agreement," he wrote.
Sam Altman on the Pentagon deal: What does it include?
- Altman said the company will create the technical safeguards "the DoW wants, to ensure our models behave as they should."
- OpenAI will also "deploy FDEs to support our models and ensure their security," and the models "will only be deployed in cloud networks," he added.
- Apparently referring to the Pentagon’s public fight with Anthropic, he also said that OpenAI has asked the department to “offer the same terms to all AI companies, which we think everyone should be willing to accept.”
- "We expressed our strong desire to see events de-escalate, away from legal and official action and toward reasonable agreements," he said.
Trump administration pushing back on Anthropic's safety concerns?
On Friday, the Trump administration ordered all agencies to stop using Anthropic’s models and imposed penalties on the firm. US President Donald Trump, Defense Secretary Pete Hegseth and other officials publicly rebuked the company on social media and accused it of endangering national security after CEO Dario Amodei refused to back down over AI safety concerns.
Anthropic said in a statement released Friday evening that it would object to what it called an unprecedented and legally unsound action that “has never before been publicly applied to any American company,” the AP reported.
The move is not surprising: an Axios report earlier this month said the department had threatened to cut ties with Anthropic over its insistence on limits on military use of its AI models. Two sticking points for Anthropic: fully autonomous weapons and mass surveillance of Americans.
According to one source, there is "a significant gray area about what goes in and what doesn't" in the disputed categories, and the Pentagon is unwilling to negotiate each case individually or risk Anthropic's models unexpectedly blocking operations.
According to a Wall Street Journal report, Anthropic's contract with the Pentagon is worth approximately $200 million.
The Pentagon also has contracts with Alphabet (Google) and Elon Musk's xAI, and both are negotiating a move to classified networks. The action against Anthropic would likely benefit xAI's Grok and serve as a warning to Google during those negotiations, the AP report said.