
Anthropic was the Pentagon’s choice for AI. Now it’s banned and experts are worried

Dario Amodei, CEO of Anthropic, at the AI Impact Summit on Thursday, February 19, 2026, in New Delhi, India.

Ruhani Kaur | Bloomberg | Getty Images

Last August, Emil Michael, a former Uber executive and lawyer, took on the role of Pentagon technology chief, overseeing the Department of Defense's artificial intelligence portfolio. A month earlier, Anthropic had been awarded a $200 million DOD contract that expanded its work with the agency.

"I said, 'I just want to see the contracts,'" Michael said on the All-In Podcast on Friday, reflecting on his early days managing the AI portfolio. "You know, it's the old lawyer in me."

Michael's request set off a months-long review process that culminated in the Department of Defense formally banning Anthropic's technology last week, leaving the military without its hand-picked AI models for its most sensitive environments. In an extraordinary move, the department designated Anthropic a supply chain risk, a label historically applied only to foreign adversaries. Defense suppliers and contractors will need to document that they are not using the company's models in their work with the Pentagon.

Anthropic sued the Trump administration on Monday, calling the government’s actions “unprecedented and illegal” and claiming they “irreparably harmed Anthropic” and jeopardized contracts worth hundreds of millions of dollars.

The Defense Department's sudden reversal came as a shock to many officials in Washington, who viewed Anthropic's models as superior (they were the first to be deployed on the agency's classified networks) and praised the company's ability to integrate with existing defense contractors such as Palantir. The decision was all the more confusing because the Trump administration had threatened during negotiations to invoke the Defense Production Act, which could have forced Anthropic to give the military access to its technology.

"I don't know how those two things could both be true," said Mark Dalton, a retired Navy rear admiral who now directs technology and cybersecurity policy at R Street, a think tank in Washington, D.C. "Something is so necessary that you have to invoke the DPA, and yet so harmful that you give it a label reserved for foreign adversaries."

Defense experts like Dalton expressed concern about the government's decision. They argue that it not only sets a troubling precedent, but also means the administration is pushing out a major technology supplier that has become one of the fastest-growing tech startups in the U.S., one praised for its diligence on AI safety, its tough rhetoric on China and its meticulousness.

Brad Carson, a former Defense Department official who is now co-founder and president of the AI policy nonprofit Americans for Responsible Innovation, said the move was especially troubling for military personnel who had come to rely on Claude. Carson, a former intelligence officer who served in Iraq, said he met with some retired officers who told him "the warfighters weren't happy about it."

"If you're in the military, you're not that excited," said Carson, who worked in President Obama's Department of Defense until 2016, deployed to Iraq while in uniform and served two terms in Congress as a Democrat from Oklahoma. "They see Claude as the better product, the most reliable, and with the most user-friendly outputs that they can assimilate into planning."

CNBC spoke with 17 AI policy experts, former Palantir and Anthropic employees, technology analysts and researchers about Anthropic’s critical role at the Department of Defense and what’s next. Many people asked to remain anonymous because they were not authorized to speak about the matter.

Anthropic declined to comment on the story.

Anthropic CEO Dario Amodei founded the San Francisco-based company in 2021 with his sister, Daniela Amodei, and a handful of other researchers. The group had split from OpenAI before the launch of ChatGPT over concerns about that company's direction and approach to safety. They spent years carefully building Anthropic's reputation as a firm more committed to responsible AI deployment.

Anthropic launched its family of AI models known as Claude in March 2023, a few months after ChatGPT launched and quickly went viral. In the three years since introducing Claude, Anthropic has raised billions of dollars in capital on its way to a $380 billion valuation.

The company is now under huge pressure to justify that price tag and has had to commercialize its technology rapidly to keep up with OpenAI, Google and other rivals.

Partnership with AWS and Palantir

While OpenAI wowed consumers, Anthropic found rapid sales success with large businesses, including the Department of Defense. It's an area Amodei began focusing on early, recognizing the business and societal importance of working closely with the government and military and helping establish guidelines for the safe use of a technology with the potential to cause disasters, according to people familiar with the matter.

The company began building relationships and making progress with officials in Washington, and Amodei was among several AI industry executives invited to meet with then-Vice President Kamala Harris in May 2023.

Around the same time, Anthropic turned to a familiar technology partner who could help it succeed among DC technologists: Amazon Web Services.

Multiple sources said Claude was made available on AWS's Bedrock service that year, which helped the startup gain traction in the government tech community. Federal agencies could begin testing Anthropic's models because they were accessible through AWS's government-approved environment.

Amazon became one of Anthropic's biggest financial backers, investing a total of $8 billion in the startup since 2023.

As the pilot projects continued, many federal employees found that Claude produced more compelling results than models from companies like OpenAI and Meta, sources said. Claude could provide step-by-step reasoning for how it arrived at an answer or completed a task, which is crucial for federal agencies that require strong oversight and verification, the sources said.

Sources said the company's user experience was particularly well suited to government desktops, as Anthropic prioritized deployment for enterprise customers. With a powerful AI model and an intuitive user interface, Anthropic began to gain credibility with federal employees and planted the seeds of a major partnership with Palantir, a software and services provider that relies on government contracts for about 60% of its U.S. revenue.


Anthropic's government push was helped by Michael Sellitto, its head of global affairs, who worked on cybersecurity policy at Homeland Security from 2015 to 2018, and by former AWS executive Thiyagu Ramasamy, who led the company's public sector business.

In a LinkedIn post last week, Ramasamy, who joined in early 2025, lamented the Pentagon's actions.

"Today I mourn the loss of customers for whom I have deep respect," Ramasamy wrote. "They were moving at a pace I could never have imagined in my twenty years in this industry, and in one weekend it came to a halt. They fell, but they didn't stop."

Sellitto and Ramasamy did not respond to requests for comment.

In November 2024, shortly before Ramasamy's arrival, Anthropic and Palantir announced a partnership with AWS to give U.S. intelligence and defense agencies access to Claude. Some Anthropic employees were upset by the deal when it was announced, a former employee said, adding that it spawned "a lot of major Slack threads" and became a persistent point of tension within the company.

Partnering with Palantir helped Anthropic establish direct lines to the Department of Defense and accelerated its integration into top-tier classified projects. Lauren Kahn, a senior research analyst at Georgetown's Center for Security and Emerging Technology, said the partnerships were crucial in helping Anthropic become the first model company to be deployed on classified networks.

"The fact that Anthropic can play well with other companies like Palantir, AWS, Google, etc., especially Palantir, is extremely valuable," she said.

In its July 2025 release announcing the $200 million contract, Anthropic said the deal "accelerates mission impact in US defense workflows with partners like Palantir." Anthropic said its technology helps government defense and intelligence agencies "rapidly process and analyze large amounts of complex data."

A month after Anthropic won the Pentagon contract, it partnered with the U.S. General Services Administration to make its AI models available to other participating agencies for $1 a year.

However, by this point Anthropic’s relationship with the government had begun to deteriorate.

President Trump was sworn into office in January, and Amodei was not a fan, having likened the commander in chief to a "feudal warlord" in a since-deleted Facebook post, according to Fortune.

Other industry executives, including OpenAI CEO Sam Altman, Apple CEO Tim Cook and Google CEO Sundar Pichai, were photographed rubbing elbows with Trump at the White House, but Amodei was conspicuously absent. He did not attend Trump’s inauguration last year.

Amodei told employees earlier this month that the administration didn't like Anthropic because it didn't donate to Trump or offer "dictator-style praise," according to The Information.

He apologized for the tone of those remarks in a statement Thursday, writing that they came after a "difficult day for the company" and "do not reflect my careful or considered views."

Amodei has also drawn the ire of venture capitalist David Sacks, who served as the White House AI and crypto czar and accused Anthropic of promoting "woke AI," largely because of its positions on regulation.

“This strikes me as a debate about politics and personalities,” Michael Horowitz, senior fellow for technology and innovation at the Council on Foreign Relations, said in an interview. “This is disguised as a policy disagreement.”

By the time the Trump administration blacklisted Anthropic, the startup's tools had been widely adopted across government agencies. Agencies including the Department of Health and Human Services, the Treasury Department and the State Department confirmed they are transitioning away from Claude.

But this process is particularly complex within the Department of Defense, in part because the United States is conducting an active military operation in Iran. As CNBC previously reported, Anthropic’s models were used to support this operation even after it was blacklisted.

Switching from Anthropic to a new vendor would take the Defense Department time and come at a significant cost in efficiency, Jacquelyn Schneider, a Hargrove Hoover fellow at Stanford University's Hoover Institution, said in an interview.

“Right before you go to war, you’re not going to move away from technologies that are deeply embedded in your wartime processes,” Schneider said.

WATCH: Why the US Department of Defense's Anthropic blacklist is so unprecedented
