OpenAI details layered protections in US defence pact

OpenAI said its agreement with the Pentagon to deploy the technology on the US defense department’s classified network includes additional security measures governing how the technology can be used.
US President Donald Trump on Friday ordered the government to stop working with Anthropic and the Pentagon said the venture would be declared a supply chain risk, dealing a major blow to the AI lab after a showdown over tech guardrails.
Anthropic said it would challenge any risk determinations in court.
Shortly after, rival OpenAI, backed by Microsoft, Amazon, SoftBank and others, announced its own deal late Friday.
“We think our agreement for stealth AI deployments, including Anthropic’s, has more barriers than previous agreements,” OpenAI said in a statement on Saturday.
The artificial intelligence firm said the contract with the defense department, which the Trump administration renamed the War Department, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, for directing autonomous weapons systems, or for any high-risk automated decision-making.
“We maintain our red lines in our agreement with a more comprehensive, multi-layered approach,” OpenAI said.
“We have full discretion over our security stack, we deploy via the cloud, authorized OpenAI personnel are involved, and we have strong contractual protections.”
Last year, the Pentagon signed deals worth US$200 million ($281 million) each with major AI labs including Anthropic, OpenAI and Google.
The Pentagon is seeking to preserve full flexibility in its defense operations, rather than be constrained by warnings from the technology’s creators against powering weapons with unreliable artificial intelligence.
OpenAI warned that any violation of its contract by the US government could trigger its termination.
“We don’t expect this to happen,” the company said.
The company also said rival Anthropic should not be labeled a “supply chain risk”, adding that it had “made our position on this matter clear to the government”.
