How Trump’s Anthropic ban could quickly become an existential business risk

Anthropic is experiencing significant growth, a rapid rise driven largely by enterprise demand for its AI systems. CEO Dario Amodei told CNBC in February that roughly 80% of the company’s business now comes from enterprise customers, in contrast to rival OpenAI, whose products got much of their initial momentum from consumer adoption of ChatGPT. A new $30 billion funding round values the AI developer at around $380 billion, while its annualized revenue run rate is closer to $20 billion, up from around $14 billion just a few weeks ago, according to sources.
But the AI startup’s sudden and risky battle with the Trump administration will force both its customers and investors to ask: Can this momentum continue?
Defense contractors are abandoning Anthropic’s technology following a harsh response from the Trump administration last week, which said it would designate the company as a supply chain risk, a designation previously reserved for organizations allegedly controlled by foreign governments such as China and Russia when national security or espionage concerns were raised. The move came after the Pentagon rejected Anthropic’s terms for use of its AI over security concerns. The reaction from defense contractors is no surprise. “Many of our companies are actively involved in major defense contracts and are therefore very strict in interpreting the requirements,” Alexander Harstrick, managing partner at J2 Initiatives, a firm that backs startups in the space, told CNBC.
But other tech executives say conversations about Anthropic risk are inevitable in corporate boardrooms well beyond the defense sector, if they are not happening already.
“The administration didn’t just pull Anthropic’s contracts. President Trump ordered federal agencies to phase out Anthropic’s technology, and the Pentagon reportedly implemented a ‘supply chain risk’ designation. That signal matters,” said Spencer Penn, co-founder and CEO of AI-powered sourcing platform LightSource.
‘This situation is different’
According to Penn, as enterprises rapidly adopt large language models, foundation model choices increasingly resemble infrastructure decisions rather than simple software purchases. That means companies evaluate not only technical performance but also reputational, geopolitical and customer-perception risks. “Boards care about this. Risk committees care about this. Customers absolutely care about this,” Penn said.
Anthropic did not respond to a request for comment.
Tensions between the government and Anthropic over AI safety and the military use of its technology have, if anything, boosted the company’s brand with consumers. A day after the dispute, on February 28, Anthropic’s Claude chatbot surpassed ChatGPT for the top spot in Apple’s ranking of the best free U.S. apps, with Google’s Gemini further down the chart. Meanwhile, Anthropic’s coding assistant, Claude Code, has become one of the company’s fastest-growing products, generating billions in annual revenue as developers and large companies increasingly rely on AI tools to write and review software and automate parts of daily business operations.
Anthropic has said the supply chain risk designation is not legally sound, telling its commercial customers it was “unimpressed” and that it plans to challenge the decision in court. Many legal experts agree with Anthropic that the government’s statements go well beyond its legal authority: the supply chain risk designation is meant to limit what companies can do under certain government procurement and use scenarios, not to restrict their other business activities.
Dario Amodei, CEO of Anthropic, departs a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on Tuesday, July 25, 2023, in Washington, DC.
Bloomberg | Bloomberg | Getty Images
Anthropic has received some support from the tech industry: a trade group representing the sector wrote a letter to Defense Secretary Pete Hegseth this week expressing concern about his designation of a U.S. company as a supply chain risk. But even though Anthropic’s technology has been critical in successful military operations in Iran, the government has so far shown little sign of softening its stance.
Anthropic’s assurances alone will not satisfy many companies. “When a vendor walks in the door and does good work, most teams don’t proactively look for a reason to reopen work. This is different,” Penn said.
“They closed the door. They didn’t want to do business with us,” Defense Department technology chief Emil Michael told CNBC’s Morgan Brennan this week. “I think their culture, their own constitution, which is the spirit, and their values are not really compatible. It’s a little weird that they want to do business with the War Department, like they’ve been doing for three years, but they don’t want us to do War Department stuff, so if that’s where we end up and we end up confronting that and they don’t want to do business with us, I think that’s their choice.”
Single provider risk in the race to adopt AI
Fortune 500 procurement teams act quickly when a major technology vendor faces regulatory scrutiny, said Michael Murphy, partner and global AI readiness leader at Adaptovate, a consulting firm that advises large companies on AI adoption. “Any perceived compliance risk could ripple into their own regulatory obligations,” he said, noting this could accelerate a broader shift already underway in many organizations: avoiding reliance on a single AI provider.
The government has said its fight with Anthropic and last week’s controversial award of a new contract to OpenAI are partly about addressing single-vendor concentration. “We can no longer be dependent on any one vendor, and that’s what happened before I took on this role, and that has to change,” DoD CTO Michael told CNBC.
This will now be an issue for many companies.
“Over-reliance on a single AI vendor is increasingly viewed as a risk,” Murphy said. “Many organizations are already evaluating multiple vendors simultaneously, so there is redundancy in their AI stack.”
“Mature companies understand that each vendor plays a different piece of the larger puzzle. There is power in an ecosystem, but there is also the risk of lock-in,” said Joshua Morley, global head of AI, data and analytics at Akkodis, the technology consulting arm of Adecco Group.
Ultimately, the political and legal fight could accelerate a process already underway, as corporate decision-makers diversify their AI bets across companies in the space after initial trials with a single vendor. Disney Chief Financial Officer Hugh Johnston recently told CNBC that the company’s initial work is with OpenAI, but it expects that to expand. “We’re very open about that. We’ll have a time frame where we’ll just be OpenAI, but it’ll be a relatively short time frame. We need to allow models to be played out. I’d be surprised if there weren’t multiple models going forward instead of just one,” he said on CNBC’s “Squawk Box.”
“This looks more like a short-term disruption than a structural change,” Penn said. “Companies remain committed to implementing AI capabilities, but may move toward more diverse ecosystems rather than relying on a single provider.”
The supply chain risk classification could hit hardest at contractors and subcontractors who rely on the technology, prompting companies to re-evaluate contracts, delay deployments or consider alternative AI suppliers. Penn said he expects companies with dual exposure to commercial and defense markets to quietly evaluate alternative foundation model providers if the designation proves durable. “Not because teams want to switch, but because concentration risk and compliance risk are things that serious purchasing organizations are paid to manage,” he said. “Most businesses will not make architectural changes within a few days, but they will initiate an investigation immediately. Legal will evaluate what the directive actually requires. Compliance will evaluate exposure. Security will ask about contingency plans,” he added.
For Anthropic investors such as Amazon, Microsoft, Nvidia, and sovereign wealth funds from around the world, this dispute could disrupt Anthropic’s rapid expansion. “Any aggressive government action against a technology company creates risk,” said Brad Harrison, founder of Scout Ventures, an early-stage venture capital firm that invests at the intersection of national security and critical technology innovation. “And the worst thing when you have significant momentum is a major risk that requires time and attention,” he said.
Ben Horowitz, co-founder and general partner of A16Z, an investor in Anthropic rivals OpenAI and xAI, told CNBC at a defense technology conference this week: “Just a week ago Anthropic was complaining that Chinese companies were stealing all their IP from their models. Do you think the Chinese government is constrained by DeepSeek in terms of how they can use Anthropic technology? So we’re very sympathetic to the War Department’s stance on this.”
As with many things in the current administration, policy signals can shift quickly. “A constructive conversation between President Trump and Dario Amodei could soften or further solidify the stance,” Penn said. Federal Communications Commission Chairman Brendan Carr told CNBC this week that Anthropic “made a mistake” in its dealings with the Department of Defense and “should try to correct course as best they can.”
At least for now, the unusually public nature of the dispute could accelerate risk talks. “Often these types of compliance issues move quietly through legal channels,” Penn said. “In this case, it was headline news.”