Sam Altman wants to ‘de-escalate’ DoD tensions as OpenAI employees back Anthropic

Sam Altman, chief executive officer of OpenAI Inc., at the Artificial Intelligence Impact Summit on Thursday, February 19, 2026, in New Delhi, India.
Prakash Singh | Bloomberg | Getty Images
OpenAI CEO Sam Altman told employees late Thursday that he wanted the company to “try to help de-escalate tensions” between rival Anthropic and the Department of Defense.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-risk automated decisions,” Altman wrote in a memo viewed by CNBC. “These are our main red lines.”
Anthropic has until 5:01 PM ET on Friday to decide whether it will allow the Pentagon to use its AI models for all legal use cases without restrictions. The startup wants assurances that its technology won’t be used for fully autonomous weapons or domestic mass surveillance of Americans, but the Defense Department hasn’t budged on that.
Altman’s internal memo on Thursday was intended to show that OpenAI shares Anthropic’s limits. The Wall Street Journal was first to report the note.
Before Altman’s memo, OpenAI employees had begun speaking out on social media in support of Anthropic. Some 70 current employees have signed an open letter titled “We Will Not Be Divided,” which, according to its website, aims to create “common understanding and solidarity in the face of this pressure.”
“Despite all the differences I have with Anthropic, I mostly trust them as a company, and I think they really care about safety, and I’m happy they support our warfighters,” Altman said in an interview with CNBC on Friday. “I’m not sure where this is going.”
OpenAI was awarded a $200 million contract by the DOD last year, allowing the agency to begin using the startup’s models for unclassified use cases. Anthropic was the first AI lab to integrate its models into mission workflows on classified networks.
Altman said he would explore whether OpenAI can reach a deal with the Department of Defense to deploy its models in classified environments “in line with our principles.” He said the company would create technical security measures and deploy staff to “make sure things are working properly.”
“We want the agreement to cover all uses other than those that are illegal or unsuitable for cloud deployments, such as domestic surveillance and autonomous strike weapons,” Altman wrote.
Altman said that OpenAI has been holding meetings on the issue in recent days and that the company has not yet made a decision on what to do. He said there will be more meetings with OpenAI’s security teams on Friday.
“This is a situation to me where it is important that we do the right thing, not the easy thing that appears strong but is insincere,” Altman wrote. “But I recognize that this may not ‘look good’ for us in the short term and that there is a lot of nuance and context.”
— CNBC’s Kate Rooney contributed to this report.
WATCH: OpenAI completes $110 billion funding round with support from Amazon, Nvidia and SoftBank




