
Sam Altman defends Pentagon deal to OpenAI staff, with a warning about Elon Musk: report

OpenAI CEO Sam Altman defended the company’s decision to allow ChatGPT to be used by the US government for classified purposes, according to multiple media reports. The ChatGPT maker faced heavy criticism after announcing the deal with the Pentagon last week, just hours after US Defense Secretary Pete Hegseth said Anthropic would be classified as a supply chain risk, a designation usually reserved for rival foreign companies.

According to the Wall Street Journal, Altman said he did not regret signing the agreement with the Defense Department, but wished he had not announced the decision so quickly, which made the agreement seem “opportunistic” and “not integrated with the field.”

Altman to employees: ‘You can’t weigh in on this’

Meanwhile, Altman also told staff that OpenAI was “unable to make operational decisions” about how its technology was used by the Department of Defense, according to a report by CNBC.

“Maybe you think the Iranian attack is good and the Venezuelan invasion is bad,” Altman said. “You can’t weigh in on this.”

Altman also suggested that Anthropic’s desire to control how the Defense Department uses its artificial intelligence could be part of the tension between the Pentagon and Anthropic, according to a report from Bloomberg.

The US government reportedly used Anthropic’s AI during the capture of Venezuelan President Nicolas Maduro and recent attacks in Iran. Anthropic is also said to have raised questions about how its AI was used to capture Maduro, upsetting Defense Department officials.

OpenAI signed a $200 million deal with the Pentagon last year that allows the agency to use its models in non-classified use cases. Last week’s agreement extended this, allowing the models to be deployed on private networks.

But the Pentagon is also said to be in talks with Elon Musk’s xAI to allow its models to be used in classified use cases.

OpenAI had set three conditions for the Pentagon’s use of its AI models: that the AI not be used for mass domestic surveillance, not be used to direct autonomous weapons systems, and not be used for high-risk automated decisions.

Altman reportedly stated that xAI would not impose such conditions on the Pentagon and would do whatever the agency asked.

“I hope that we will have the best models that will encourage the government to be willing to work with us, even if our security system makes them uncomfortable,” Altman said, as quoted by CNBC. “But there will be at least one other actor, who I predict will be xAI, who will effectively say, ‘We’ll do whatever you want.’”
