Why Pentagon-Anthropic clash is pivotal AI test for future of warfare

The Department of Defense’s conflict with Anthropic over the integration of AI into military operations, and the limits on its use, came to a head this week when Defense Secretary Pete Hegseth gave the AI company until 5:01 p.m. ET on Friday to give in to government demands. Anthropic hasn’t budged, at least to date, but the war between the military and industry over artificial intelligence is just beginning, and the Pentagon’s clash with private companies that control artificial intelligence is untested territory in the post-World War II period.
Anthropic on Thursday rejected a request from Defense Secretary Pete Hegseth to relax certain safeguards on its models for military use, including mass domestic surveillance or fully autonomous weapons, on the grounds that it violated company policies. CEO Dario Amodei’s decision comes after the Pentagon warned it could terminate the partnership if the company refuses to support “all lawful uses.”
“It is the Department’s prerogative to select contractors that best suit their vision,” Amodei wrote in a statement on Thursday. “But given the significant value Anthropic’s technology provides to our armed forces, we hope they will reconsider.”
This underscores the fact that private firms developing frontier AI may seek to set their own limits on how the technology is used, even in national security contexts.
In July, the Department of Defense awarded contracts worth $200 million each to four companies — Anthropic, OpenAI, Google DeepMind and Elon Musk’s xAI — to prototype frontier AI capabilities tied to U.S. national security priorities. The awards show how aggressively the Pentagon is moving to bring cutting-edge commercial AI to defense work.
The urgency is also reflected in the Pentagon’s internal planning. A January 9 memorandum outlining the military’s AI strategy calls for the United States to become an “AI-first” warfighting force and to accelerate the integration of leading commercial AI models across warfare, intelligence and enterprise operations.
“There are no winners in this,” Lauren Kahn, a senior research analyst at the Georgetown Center for Security and Emerging Technology, told CNBC in a recent interview about the standoff between the Pentagon and Anthropic. “It leaves a sour taste in everyone’s mouth.”
But the standoff marks a change: a departure from decades of defense innovation in which the government controlled technology as it was created.
“For most of the post-World War II period, the U.S. government defined the frontier of advanced technology,” said Rear Admiral Lorin Selby, former chief of naval research and former general partner at Mare Liberum, an investment firm specializing in maritime technology and infrastructure. “It set the requirements, funded basic research, and financed industry to build to government-directed specifications. From nuclear propulsion to stealth to GPS, the government was the main engine of discovery, and industry was the integrator and producer.”
Artificial intelligence is turning this model on its head, Selby said.
“Today, the commercial sector is the primary driver of frontier capability. Private capital, global competition, and the scale of commercial data are advancing AI at a pace that traditional government R&D structures cannot easily replicate. The War Department no longer defines the limit of what is technically possible in AI, it adapts to it,” he said.
United States Secretary of War Pete Hegseth speaks during a visit to Sierra Space in Louisville, Colorado, Monday, February 23, 2026.
Aaron Ontiveroz | Denver Post | Getty Images
This reversal in the balance of power over technology carries both opportunity and risk.
“We shouldn’t be in a place where private companies feel like they have leverage over the U.S. government or its Western allies because of the technological talent they provide,” said Joe Scheidler, former deputy director and special advisor to the White House and co-founder and CEO of AI start-up Helios. “Technologists should build and do this responsibly, but governments should be the organizations making the decisions.”
Anthropic and the Department of Defense did not respond to requests for comment.
Why does the military need dedicated AI?
Public-private partnerships have long supported U.S. defense innovations, from World War II industrial mobilization to modern aerospace and cybersecurity programs. But artificial intelligence is different because the most advanced talent is increasingly concentrated in commercial firms rather than government laboratories.
“What gives America an advantage is strong public-private partnerships,” Scheidler said. “You won’t find a more dynamic and innovative talent pool than that of the American entrepreneurial community. The idea of trying to replicate that level of innovation within government…is difficult.”
This concentration is precisely why governments seek partnerships, but according to Selby, dependence is also primarily due to speed. “In venture-backed firms, the innovation cycle moves over months. Traditional procurement cycles move over years. Without commercial AI providers, government would be slower, less adaptable, and much more expensive,” he said.

When critical national security tools are developed by private companies, “the main change is that the government can no longer fully control the development of the most advanced technological tools,” said Betsy Cooper, director of the Aspen Policy Academy and former counsel for the U.S. Department of Homeland Security.
Cooper said commercial AI systems are often designed for broad markets rather than military missions, which can create gaps between how companies design their technology and how governments want to deploy it.
This misalignment may become more apparent when corporate policies, reputational concerns or global customer pressures conflict with government goals, a dynamic now on display in the Anthropic conflict.
“Companies may not want to risk negative reactions from their customer base if their products are being used for highly controversial purposes, such as creating autonomous lethal weapons or carrying out killings preemptively, before any crime has been committed,” Cooper said.
The government has more long-term leverage
Despite the shift toward commercial technology, defense leaders are unlikely to give up control over mission-critical systems.
“The first thing you have to understand is, based on what I’ve seen to date, the Department of Defense is not going to give up ultimate control,” said Brad Harrison, founder of Scout Ventures, an early-stage venture capital firm that invests at the intersection of national security and critical technology innovation. “The government still wants to understand everything that goes into it, all the dependencies and risks.”
Harrison, a former U.S. Army Ranger and West Point graduate, said AI could eventually influence decisions such as how to thwart incoming threats, so “the government will be extremely careful about how they allow AI to interact with those layers of data.” “No one wants to be in charge of Skynet,” he said, referring to the fictional artificial intelligence in the “Terminator” franchise that triggers nuclear war.
Governments also have powerful tools to influence companies, including purchasing decisions, export controls and regulatory authority. “The government has a lot of leverage,” Harrison said. “If you don’t want to work with them, there are a lot of ways to make that a very difficult decision.”
But leverage flows both ways, at least for now, according to Selby. “In the short term, companies with frontier AI capabilities and proprietary models can exert significant influence. In the long term, sovereign governments retain regulatory authority, contracting power, financing scale and, if necessary, legal coercion,” he said.
The most important question, according to Selby, is “whether we create a durable public-private partnership that treats AI as core national security infrastructure rather than just another vendor relationship.”
Risks in the new military-Silicon Valley industrial complex
Experts say the question is not about whether companies or governments have staying power, but rather how the relationship evolves as AI becomes the center of national power.
“If we build cohesion and resilience in the public-private relationship, AI can strengthen national security while protecting innovation,” Selby said. “If we don’t do this, we risk a future where capacity is abundant but cohesion is fragile,” he added.
There are many new types of risks in the emerging military-Silicon Valley industrial complex. For example, relying on externally developed AI could introduce vulnerabilities if systems unexpectedly malfunction or become unusable, especially if military units become accustomed to them during operations.
“Overconfidence can be deadly,” said Shanka Jayasinha, founder of Onto AI, a company that develops artificial intelligence tools for military, healthcare, financial institutions and enterprise solutions, describing scenarios where special operations units rely on AI-enhanced mission coordination tools during deployment. If these systems fail after long-term use, “many lives could be in danger,” he said.
Vendor lock-in is another concern. As AI platforms are incorporated into workflows, they may become difficult to replace. “With the current rate of advancement in artificial intelligence, it is difficult to swap out any provider,” Jayasinha said.
But Harrison says one risk the Pentagon won’t be exposed to is being held captive by a single company. “The US government will not be beholden to any Silicon Valley company,” he said. “They will test the systems very methodically, check the data layer and move forward step by step.”
OpenAI CEO Sam Altman, who has a contentious relationship with Anthropic and Amodei, issued a statement to employees on Thursday expressing support for the “red lines” drawn by the AI rival at the center of the Pentagon conflict.
But the Pentagon made clear how it views the importance of Anthropic, or any single company, in a post on X Thursday night from Under Secretary of War for Research and Engineering Emil Michael: “It’s a shame that @DarioAmodei is a liar and has a God complex. He wants nothing more than to try to personally control the US Military and is okay with risking our nation’s security. @DeptofWar will ALWAYS follow the law, but will not bow to the whims of any for-profit tech company.”
Anthropic said that if the government “leaves” Anthropic, “we will work to ensure a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions.”
One approach likely to receive greater focus in the future is building what some technologists call “sovereign AI architectures”: systems that allow governments to leverage commercial innovation while maintaining independence from vendors.
“We talk a lot internally about this concept of sovereign intelligence and vendor independence,” Scheidler said, arguing that the U.S. ecosystem remains broad enough to prevent over-reliance on any single vendor. “New ideas are emerging daily, and we don’t have to rely on a single vendor to do it,” he said.
