Meta expands Nvidia deal to use millions of AI data center chips

Meta’s 5-gigawatt Hyperion data center under construction in Richland Parish, Louisiana, January 9, 2026. Source: Meta
Meta will use millions of Nvidia chips in its AI data centers, including Nvidia’s new standalone CPUs and next-generation Vera Rubin systems, under a sweeping new deal announced Tuesday.
Meta CEO Mark Zuckerberg said in a statement that the expanded partnership continues his company’s effort, announced in July, to “bring personal superintelligence to everyone in the world.”
Financial terms of the deal were not specified.
In January, Meta announced plans to spend up to $135 billion on artificial intelligence in 2026. “The deal is definitely in the tens of billions of dollars,” said chip analyst Ben Bajarin of Creative Strategies. “We expect the majority of Meta’s capex to go toward this Nvidia buildout.”
The partnership itself is nothing new, as Meta has been using Nvidia graphics processing units for at least a decade, but the deal marks a much broader technology relationship between the two Silicon Valley giants.
Standalone CPUs are the biggest new element of the deal: Meta became the first company to use Nvidia’s Grace central processing units as standalone chips in data centers, rather than pairing them with GPUs in a server. Nvidia said this is the first large-scale deployment of Grace CPUs on their own.
“They are really designed to run these inference workloads, these intermediary workloads, to accompany the Grace Blackwell and Vera Rubin racks,” Bajarin said. “Meta doing this at scale is a validation of the soup-to-nuts strategy Nvidia is applying across the infrastructure stack, both CPU and GPU.”
Meta plans to deploy Nvidia’s next-generation Vera CPUs in 2027.
The multiyear agreement is part of Meta’s broader commitment to spend $600 billion in the U.S. by 2028 on data center infrastructure and facilities.
Meta has plans for 30 data centers, 26 of which are in the U.S. Its two largest AI data centers are currently under construction: the 1-gigawatt Prometheus site in New Albany, Ohio, and the 5-gigawatt Hyperion site in Richland Parish, Louisiana.
The deal also covers Spectrum-X Ethernet switches, Nvidia’s networking technology used to interconnect GPUs in large-scale AI data centers. Meta will additionally use Nvidia’s security capabilities as part of the AI features in WhatsApp.
The social media giant doesn’t rely solely on the top chipmaker, though. In November, Nvidia shares fell 4% on news that Meta was considering putting Google’s tensor processing units into operation in its data centers in 2027.
Meta also develops its own silicon and uses those chips in-house. Advanced Micro Devices won a notable deal with OpenAI in October as the AI giant sought a second source beyond Nvidia because of limited supply.
Nvidia’s current Blackwell GPUs have been on pre-order for months, and its next-generation Rubin GPUs recently entered production. With the agreement, Meta secured a healthy supply of both.
Nvidia and Meta’s engineering teams will work together “in deep co-design to optimize and accelerate cutting-edge AI models” for the social media giant.
Meta is developing a new frontier model called Avocado as the successor to its Llama AI technology. The latest Llama release, which came last spring, failed to excite developers, CNBC previously reported.
Meta’s shares have been on a roller coaster ride in recent months, and its artificial intelligence strategy in particular has surprised Wall Street.
The stock had its worst day in three years in October after the company announced ambitious AI spending plans, then rose 10% in January after it issued stronger-than-expected sales guidance.
CNBC’s Jonathan Vanian and Kristina Partsinevelos contributed to this report.



