
Anthropic’s Daniela Amodei on the company’s ‘do more with less’ bet

SAN FRANCISCO — At Anthropic’s headquarters, President and co-founder Daniela Amodei keeps repeating a phrase that has become a sort of governing principle for the AI startup’s entire strategy: Do more with less.

This is a direct challenge to the prevailing mood in Silicon Valley, where the largest labs and their supporters see scale as destiny.

Companies are raising record sums, locking up chips years in advance, and pouring concrete for data centers across the American heartland in the belief that whoever builds the largest intelligence factory will win.

OpenAI has become the clearest example of this approach.

The company has made nearly $1.4 trillion in computing and infrastructure commitments as it works with partners to sustain large data center campuses and secure next-generation chips at a pace the industry has never seen before.

Anthropic’s view is that there is another way to run the race, one where disciplined spending, algorithmic efficiency, and smarter deployment can keep it at the frontier without trying to outspend the rest.

“I think what we’ve always aimed to do at Anthropic is to be thoughtful about the resources we have when operating in this space where there’s a lot of computation,” Amodei told CNBC. “Anthropic has always had much less in terms of compute and capital than our competitors have, but we’ve still had the most powerful, highest-performing models pretty consistently for much of the last few years.”

Anthropic betting efficiency could beat brute force scale in AI arms race

Daniela Amodei and her brother, Anthropic CEO Dario Amodei, a veteran of Baidu and Google, helped create the very worldview they now push back against.

Dario Amodei was among the researchers who helped popularize the scaling paradigm that drives the modern model race: the observation that increasing data, model size, and compute tends to improve a model’s capabilities in a predictable way.

This paradigm has become the financial foundation of the AI arms race.

It underwrites hyperscaler capital expenditures, legitimizes high chip valuations, and helps justify the enormous prices private markets assign to companies still spending heavily on the path to profitability.

But even as Anthropic benefits from that logic, the company is trying to prove that the next stage of the competition won’t be decided solely by who can afford the biggest pretraining runs.

Its strategy relies on higher-quality training data, post-training techniques that improve reasoning, and product choices designed to make models cheaper to run and easier to adopt at scale — the part of the AI business where the computing bill never stops.

To be clear, Anthropic is not sitting out the buildout. The company has roughly $100 billion in computing commitments and expects those requirements to keep growing if it wants to stay on top.

“The computing requirements of the future are huge,” said Daniela Amodei. “So our expectation is that yes, as we grow, we will need more compute to stay at the frontier.”

Still, the company argues that headline figures circulating across the industry are often not directly comparable, and that the industry’s collective certainty about the “right” amount to spend is less solid than it seems.

“A lot of the numbers floating around, it’s not exactly apples to apples because of how the structure of some of these deals is set up,” she said, describing an environment where players feel pressure to commit early to secure hardware years down the line.

The larger truth is that even the insiders who helped shape the scaling thesis have been surprised at how consistently it has delivered both performance gains and business growth.

Anthropic's strategy to outpace big-spending rival OpenAI

“Even as the people who pioneered this belief in scaling laws, we continued to be surprised,” Daniela Amodei said. “What I hear so often from my colleagues is that exponentiality continues until it doesn’t. And every year we thought, ‘There’s no way things can continue exponentially’ — and every year they did.”

That line captures both the optimism and the anxiety of today’s buildout.

If exponential growth continues, companies locking in electricity, chips, and sites early may seem forward-thinking. If it breaks or lags behind adoption, overcommitted players may be forced to carry years of fixed costs and long-lead-time infrastructure built for demand that will never come.

Daniela Amodei drew a distinction between the technology curve and the economic curve — an important nuance that is often blurred in public debate.

From a technological perspective, Anthropic does not see a slowdown in progress based on what the company has observed so far, she said. The harder question is how quickly businesses and consumers can integrate these capabilities into real workflows, where procurement, change management, and human friction can slow down even the best tool.

“No matter how good the technology is, it takes time to use it in a business or personal context,” she said. “The big question for me is: How quickly can businesses in particular, but also individuals, take advantage of the technology?”

That enterprise emphasis is at the heart of why Anthropic has become such a closely watched bellwether for the broader generative AI business.

The company has positioned itself as an enterprise-first model provider, with the majority of its revenue tied to other companies paying to connect Claude to workflows, products, and internal systems — usage that may prove stickier than a consumer app, where churn can rise once the novelty wears off.

Anthropic adds $50 billion to AI's growing debt pile with new US data center move

Anthropic says its revenue has grown roughly tenfold year over year for three consecutive years. And it has built an unusual distribution footprint in a market defined by fierce competition: Claude models are available on the major cloud platforms, including those of partners that develop and sell competing models.

Daniela Amodei framed that broad presence as a reflection of customer demand rather than a lack of conviction: large companies want choice among clouds, and cloud providers want to offer what their biggest customers want to buy.

In practice, this multi-cloud stance is also a way to compete without making a single infrastructure bet.

If OpenAI is building a vast footprint of dedicated campuses and captive capacity, Anthropic is trying to remain flexible: shifting where it operates based on cost, availability, and customer demand, and focusing internal energy on improving model efficiency and performance per unit of compute.

The distinction matters for another reason as 2026 begins: while both companies still operate in a private-market world where computing needs grow faster than certainty, both are being pushed toward the discipline of public-market readiness.

Neither Anthropic nor OpenAI has announced an IPO timeline, but both are making moves that look preparatory, building out finance, governance, and forecasting functions that can withstand public scrutiny.

At the same time, both are still raising new capital and making increasingly larger computing arrangements to fund the next leg of model development.

This constitutes a real test of strategy rather than rhetoric.

If the market keeps funding scale at this pace, OpenAI’s approach may remain the industry standard. If investors start demanding greater efficiency, Anthropic’s “do more with less” stance could give it an advantage.

In that sense, Anthropic’s claim is not that scaling won’t work. It is that scaling isn’t the only lever that matters, and that the winner of the next phase may be the lab that can keep advancing while spending in ways the real economy can sustain.

“Exponentiality continues until it doesn’t,” said Daniela Amodei. The question for 2026 is what happens to the AI arms race, and to the companies building it, if the industry’s favorite curve finally stops working.

WATCH: Anthropic and OpenAI rivalry goes global

Anthropic's valuation reached roughly $350 billion after its investment agreement with Microsoft and Nvidia.
