
Sam Altman tells OpenAI staff ‘operational decisions’ up to government

Open AI CEO Sam Altman speaks during a speaking session with SoftBank Group CEO Masayoshi Son at the “Transforming Business through Artificial Intelligence” event in Tokyo on February 3, 2025.

Tomohiro Ohsumi | Getty Images

OpenAI CEO Sam Altman told employees at a general meeting on Tuesday that the company was “unable to make operational decisions” regarding how its AI technology is used by the Department of Defense.

“Maybe you think the Iranian attack is good and the Venezuelan invasion is bad,” Altman said Tuesday, according to partial minutes of the meeting reviewed by CNBC. “You can’t weigh in on this.”

The meeting took place four days after OpenAI announced the Defense Department deal, which came just hours before the United States and Israel began launching attacks on Iran.

Altman told employees that the Defense Department respected OpenAI’s technical expertise, wanted input on where its models fit and would allow the company to build the security stack as it saw fit, according to a person familiar with the matter who asked not to be named because the meeting was private.

But Altman said the agency has made clear that operational decisions rest with Secretary Pete Hegseth. Altman has been loudly criticized, including by some OpenAI employees, since announcing the deal with the Pentagon shortly after rival Anthropic was blacklisted and labeled a “Supply Chain Risk to National Security.” President Donald Trump also called on every federal agency in the United States to “immediately stop” all use of Anthropic’s technology.

Anthropic’s artificial intelligence was reportedly used in attacks on Iran over the weekend and in the capture of Venezuelan leader Nicolas Maduro and his wife Cilia Flores, who were ousted in January.

Altman defended OpenAI’s contract in several social media posts. He acknowledged the deal “seemed opportunistic and sloppy” and that the company “shouldn’t have been in a rush to get this out on Friday.” In a post on X, he said the Defense Department “demonstrated a deep respect for security and a desire to partner to achieve the best possible outcome.”

Anthropic was the first lab to deploy its models to the Defense Department’s classified network and was trying to negotiate ongoing terms of its contract before talks collapsed. The company wanted assurances that its models would not be used for fully autonomous weapons or mass surveillance of Americans, while the Department of Defense wanted Anthropic to agree to allow the military to use the models in all legal use cases.

Last year, OpenAI was awarded a $200 million contract by the Department of Defense, allowing the agency to begin using the startup’s models in unclassified use cases. The new deal will allow the company to deploy its models on the department’s classified networks.

Elon Musk’s xAI has also agreed to deploy its models for classified use cases.

“I’m hopeful that we will have the best models and that will encourage the government to be willing to work with us, even if our security system makes them uncomfortable,” Altman said Tuesday. “But there will be at least one other actor, who I predict will be xAI, who will effectively say, ‘We’ll do whatever you want.'”

Altman and Musk, who co-founded OpenAI, are locked in a heated legal battle that is scheduled to go to court next month.

xAI did not immediately respond to a request for comment.

— CNBC’s Kate Rooney contributed to this report.

WATCH: Big Technology’s Alex Kantrowitz says OpenAI could make huge profits from Pentagon deal

