Meta and Nvidia ink massive multiyear deal for next-gen GPUs, standalone CPUs, and AI for WhatsApp

Meta Platforms has signed a comprehensive, multi-year deal with Nvidia that will see the social media company deploy “millions” of the chipmaker’s processors into AI data centers, deepening a partnership that has helped define the industry’s modern AI boom.
The deal, announced Tuesday, expands Meta’s use of Nvidia hardware beyond graphics processing units, and the company is poised to become the first major operator to launch Nvidia’s Grace central processing units as discrete chips at scale.
The partnership will also give Meta early access to Nvidia’s next-generation Vera Rubin systems as both companies race to build larger computing clusters to power advanced AI models.
Meta doubles down on Nvidia as AI spending rises
The expanded partnership comes as Meta accelerates an infrastructure push that has surprised investors with its scale. In January, the company said it could spend $135 billion on artificial intelligence in 2026. CEO Mark Zuckerberg has also pledged to invest $600 billion in data centers and the physical infrastructure needed to run them in the United States by 2028.
“We are excited to expand our partnership with Nvidia to create pioneering clusters using the Vera Rubin platform to deliver personal superintelligence to everyone on Earth,” Meta CEO Mark Zuckerberg said in a statement.
Zuckerberg has repeatedly framed Meta’s AI strategy as a bid to deliver advanced capabilities directly to consumers. He reiterated that ambition in July, announcing a long-term effort to “deliver personal superintelligence to everyone in the world.”
Financial terms were not disclosed, but analysts said the commitment would likely be very large given Meta’s projected capital expenditure.
“The deal is definitely worth tens of billions of dollars,” chip analyst Ben Bajarin of Creative Strategies told CNBC. “We expect the majority of Meta’s capex to go toward this Nvidia buildout.”
“Millions of Nvidia GPUs” and the switch from Blackwell to Rubin
Nvidia said the deal will include products from the current Blackwell generation and the upcoming Vera Rubin design, securing significant supply for Meta at a time when demand for high-end AI accelerators continues to outpace production.
Nvidia’s Blackwell GPUs have been sold out for months, and Rubin only recently entered production. With the new agreement, Meta positions itself to scale quickly while rivals struggle for capacity.
Meta currently accounts for about 9% of Nvidia’s revenue, underscoring how dependent the chipmaker’s growth has become on a small group of megabuyers building industrial-scale AI systems.
Nvidia Grace CPUs: a rare move into the heart of the server
The most notable change is Meta’s plan to deploy Nvidia’s Grace CPUs as standalone chips rather than just using them as part of tightly integrated CPU-GPU systems.
Nvidia said this will be the first large-scale deployment of Grace CPUs on their own. The move also signals a more direct challenge to Intel and Advanced Micro Devices, which have long dominated general-purpose server computing.
“They’re really designed to run these inference workloads, these agentic workloads, alongside the Grace Blackwell and Vera Rubin racks,” Bajarin said. “Meta doing this at scale is a validation of the soup-to-nuts strategy Nvidia applies across the infrastructure stack, CPU and GPU alike.”
Meta plans to deploy Nvidia’s next-generation Vera CPUs in 2027.
Inside Meta’s data center structure: Ohio, Louisiana and beyond
Meta outlined plans for 30 data centers, 26 of them in the United States. Two of its largest AI facilities are currently under construction: the 1-gigawatt Prometheus facility in New Albany, Ohio, and the 5-gigawatt Hyperion facility in Richland Parish, Louisiana.
The sheer energy footprint of these projects has become part of the story. One gigawatt is roughly the amount of electricity needed to power 750,000 homes, and Meta’s largest planned facility is several times that.
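The 750,000-homes figure can be sanity-checked with back-of-envelope arithmetic. The average household draw below (~1.3 kW, roughly 11,400 kWh per year) is an assumption for illustration, not a number from the article:

```python
# Back-of-envelope check of the "1 GW ≈ 750,000 homes" rule of thumb.
# Assumed (not from the article): an average U.S. household draws
# roughly 1.3 kW on a continuous-average basis.
GIGAWATT_W = 1_000_000_000
avg_home_draw_w = 1_300  # assumed average household load, in watts

homes_per_gw = GIGAWATT_W / avg_home_draw_w
print(f"Homes powered per gigawatt: ~{homes_per_gw:,.0f}")

# Scaling to the 5-gigawatt Hyperion facility:
print(f"Hyperion at 5 GW: ~{5 * homes_per_gw:,.0f} homes")
```

Under that assumption the result lands in the 750,000-homes ballpark, and a 5-gigawatt site scales to the electricity demand of several million households.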
Nvidia’s hardware will sit at the center of these facilities, linking large banks of GPUs and CPUs into training and inference clusters capable of running frontier-scale models.
Networking, security and a “deep co-design” effort
The partnership extends beyond processors. Meta will also use Nvidia’s Spectrum-X Ethernet switches, which connect GPUs across large AI data centers. The companies said their engineering teams will work together “in deep co-design to optimize and accelerate state-of-the-art AI models” for Meta platforms.
According to the statement, Meta will also use Nvidia’s security capabilities for AI features in WhatsApp.
Ian Buck, Nvidia’s vice president of accelerated computing, declined to disclose a timeline or dollar figure. But he emphasized that Nvidia’s broad product line of chips, systems, networking and software remains difficult for rivals to match.
“There are many different types of workloads for CPUs,” Buck said. “What we found is that Grace is an excellent back-end data center CPU,” meaning it handles behind-the-scenes computing tasks.
Meta hedges with AMD, Google and in-house chips
Despite the expanded commitment, Meta has continued to test alternatives as it tries to reduce its reliance on Nvidia, whose chips have become a bottleneck for the industry.
Nvidia shares fell in November following reports that Meta was evaluating Google’s tensor processing units for its data centers in 2027. Meta also designs its own silicon and uses AMD chips; AMD drew attention as a second-source supplier after it struck a deal with OpenAI in October.
Still, Tuesday’s announcement is a clear signal that Meta believes Nvidia will remain the dominant platform for cutting-edge AI infrastructure for years to come.
A high-risk infrastructure bet amid Wall Street skepticism
Meta’s AI strategy has been closely scrutinized by investors, especially after the company’s ambitious spending forecasts triggered its worst trading day in three years in October. The stock then took off in January after Meta issued a stronger-than-expected sales forecast.
The company is also working on a new artificial intelligence model called Avocado, a continuation of its Llama technology. The most recent Llama release, last spring, failed to generate widespread excitement among developers, CNBC previously reported.
For Nvidia, the Meta deal is another indication of how its business is evolving from selling discrete chips to selling a full-blown AI computing platform that now extends deeper into the data center than ever before.
It’s a bet that the fastest path to Meta’s consumer AI goals runs through the most expensive computing infrastructure Silicon Valley has ever built.


