
Nvidia and Broadcom’s AI chips will go head-to-head. How they compare.

Nvidia and Broadcom are increasingly coming into direct competition as AI companies look for the most cost-effective way to train and run AI models. Analysts at UBS, meanwhile, see demand for Broadcom’s processors growing rapidly.

Nvidia has enjoyed several years as the dominant provider of AI chips due to its strength in graphics processing units (GPUs). But the growth of Google’s Tensor Processing Units (TPUs), which Broadcom helped design, has presented its biggest competitive challenge yet.

The biggest change in the market this year is the potential for TPU sales to increase to external customers. AI start-up Anthropic has placed two large orders for them totaling $21 billion, according to The Wall Street Journal, and social media company Meta Platforms is also in talks to use the processors.

“Many have turned to TPU as an interim alternative to GPU, and we believe demand has accelerated significantly,” UBS analyst Timothy Arcuri wrote in a research note this week.

Arcuri estimates that Broadcom will ship approximately 3.7 million TPUs this year, rising to more than five million in 2027. That should help lift Broadcom’s AI revenue to approximately $60 billion in 2026 and $106 billion in 2027.

By comparison, Nvidia is expected to generate about $300 billion from data center sales in fiscal 2027, which ends in January of next year, largely due to GPU sales, according to FactSet.

The average selling price of the TPUs Broadcom builds with Google is expected to be between $10,500 and $15,000, rising to around $20,000 over the next few years, according to Arcuri. Nvidia doesn’t disclose individual chip prices, but analysts generally estimate its latest Blackwell chips cost between $40,000 and $50,000 per unit.

That could make TPUs more attractive for inference, the process of generating answers or conclusions from a trained AI model. Nvidia, however, continues to hold the advantage when it comes to training AI models.

According to benchmarks cited by Arcuri, the latest Ironwood TPU performs roughly on par with Nvidia’s GB300 for inference, but delivers about half that performance in training. Anecdotally, he says a model that can be trained in 35 to 50 days on the latest Nvidia GPUs would require roughly three months of training on TPUs.

Analysts at Mizuho currently estimate that 20% to 40% of AI workloads are devoted to inference, and that this proportion will increase to between 60% and 80% in the next five years.

But Nvidia could regain ground in the inference market by using technology from AI hardware start-up Groq. Nvidia recently agreed to purchase a non-exclusive license to technology from Groq, a private company specializing in inference hardware.

Nvidia paid $20 billion for Groq’s technology, including compensation packages for many of the company’s employees who joined Nvidia, The Wall Street Journal reported.

Write to Adam Clark at adam.clark@barrons.com
