AI memory is sold out, causing an unprecedented surge in prices

All computing devices require a part called memory, or RAM, for short-term data storage, but this year supplies of these key components won’t be enough to meet worldwide demand.
That’s because companies like Nvidia, Advanced Micro Devices and Google need a lot of RAM for their AI chips, and those companies are first in line for components.
Three primary memory vendors — Micron, SK Hynix and Samsung Electronics — account for almost the entire RAM market, and their businesses are benefiting from the surge in demand.
“We’ve seen a very sharp, significant increase in demand for memory, and it’s far outpaced our ability to deliver that memory and, in our estimation, the ability of the entire memory industry to deliver it,” Micron chief business officer Sumit Sadana told CNBC at the CES trade show in Las Vegas this week.
Micron’s shares have risen 247% in the past year, and the company reported that net income nearly tripled in the latest quarter. Samsung said this week that it also expects operating profit in the December quarter to nearly triple. Meanwhile, SK Hynix, whose shares have surged in South Korea, is considering a U.S. listing, and the company announced in October that it had secured demand for its entire 2026 RAM production capacity.
Now memory prices are increasing.
TrendForce, a Taipei-based researcher that closely follows the memory market, said this week that it expects average DRAM memory prices to rise 50% to 55% this quarter compared to the fourth quarter of 2025. TrendForce analyst Tom Hsu told CNBC that this kind of increase in memory prices is “unprecedented.”
Three-to-one basis
Sadana said chipmakers like Nvidia surround the part of the chip that does the computing (the graphics processing unit, or GPU) with several blocks of a fast, specialized component called high-bandwidth memory, or HBM. The HBM is often visible when chip manufacturers hold their new chips aloft to tout them. Micron supplies memory to Nvidia and AMD, two leading GPU manufacturers.
Nvidia’s Rubin GPU, which recently entered production, comes with next-generation HBM4 memory, up to 288 gigabytes per chip. The HBM sits in eight visible blocks above and below the processor, and the GPU will be sold as part of a single server rack called the NVL72, which combines 72 of the GPUs into one system. By comparison, smartphones usually ship with 8 or 12 gigabytes of lower-power DDR memory.
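For a sense of scale, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above; the totals are simple arithmetic, not vendor specifications.

```python
# Back-of-the-envelope scale comparison using the figures quoted above.
hbm_per_gpu_gb = 288   # HBM4 capacity per Rubin GPU
gpus_per_rack = 72     # GPUs combined in one NVL72 rack
phone_ram_gb = 12      # typical smartphone DDR capacity

rack_hbm_gb = hbm_per_gpu_gb * gpus_per_rack
print(f"HBM per NVL72 rack: {rack_hbm_gb:,} GB (~{rack_hbm_gb / 1024:.1f} TB)")
print(f"equivalent smartphones: {rack_hbm_gb // phone_ram_gb:,}")
# One rack carries roughly as much RAM as 1,700 high-end phones.
```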
Nvidia founder and CEO Jensen Huang introduces the Rubin GPU and Vera CPU while speaking at an Nvidia keynote ahead of the annual Consumer Electronics Show on January 5, 2026 in Las Vegas, Nevada.
But the HBM needed by AI chips is in much higher demand than the RAM used in consumers’ laptops and smartphones. HBM is designed to meet the high bandwidth requirements of AI chips and is manufactured in a complex process in which Micron stacks 12 to 16 memory layers into a single “cube.”
For every bit of HBM Micron produces, it has to give up making three bits of more traditional memory for other devices.
“As we increase the supply of HBM, there is less memory left for the non-HBM part of the market because of this three-to-one basis,” Sadana said.
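To make that trade-off concrete, here is a toy Python model of the three-to-one basis. Only the 3:1 ratio comes from Sadana’s description; the capacity budget, the `split_output` helper and the allocation shares are hypothetical illustrations.

```python
# Toy model of the three-to-one trade-off Sadana describes.
# The capacity budget and shares are hypothetical; only the 3:1 ratio
# comes from the article.

def split_output(total_gb: float, hbm_share: float) -> tuple[float, float]:
    """Return (hbm_gb, conventional_gb) for a given HBM allocation.

    Every gigabit of HBM produced consumes three gigabits of the
    conventional-DRAM-equivalent budget.
    """
    hbm_budget = total_gb * hbm_share
    hbm_gb = hbm_budget / 3                  # 1 bit of HBM costs 3 bits
    conventional_gb = total_gb - hbm_budget  # whatever budget remains
    return hbm_gb, conventional_gb

for share in (0.0, 0.25, 0.50):
    hbm, conv = split_output(1_000, share)
    print(f"{share:.0%} to HBM -> {hbm:.0f} Gb HBM, {conv:.0f} Gb conventional")
```

In this sketch, shifting half the budget to HBM yields only about 167 Gb of HBM while conventional output falls by 500 Gb, which is why non-HBM supply tightens so quickly.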
TrendForce analyst Hsu said memory manufacturers are favoring server and HBM applications over other customers because demand there has higher growth potential and because enterprise and cloud service providers are less price sensitive.
In December, Micron said it was shutting down the part of its business aimed at supplying memory to consumer computer makers so the company could reserve its supply for AI chips and servers.
Some in the tech industry are amazed at how much and how fast the price of RAM has increased for consumers.
Juice Labs co-founder and chief technology officer Dean Beeler said that a few months ago he loaded his computer with 256GB of RAM, the maximum supported by current consumer motherboards. It cost him about $300 at the time.
“Who knew this would be nearly $3,000 worth of RAM just a few months later,” he posted on Facebook on Monday.
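Taken at face value, Beeler’s figures imply roughly a tenfold jump in the street price per gigabyte; a quick sanity check, using only the anecdote’s numbers:

```python
# Per-gigabyte comparison implied by Beeler's anecdote; prices are his
# figures, not market quotes.
capacity_gb = 256
price_then = 300     # dollars, a few months earlier
price_now = 3_000    # dollars, per the Facebook post

print(f"then: ${price_then / capacity_gb:.2f}/GB")  # ~$1.17/GB
print(f"now:  ${price_now / capacity_gb:.2f}/GB")   # ~$11.72/GB
print(f"increase: {price_now / price_then:.0f}x")
```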
‘Memory wall’
AI researchers started to see memory as a bottleneck after OpenAI’s ChatGPT launched in late 2022, said Majestic Labs co-founder Sha Rabii, an entrepreneur who previously worked on silicon at Google and Meta.
Rabii said previous AI systems were designed for models such as convolutional neural networks, which required less memory than the large language models, or LLMs, popular today.
While AI chips have become much faster, memory hasn’t kept pace, leaving powerful GPUs waiting to retrieve the data needed to run LLMs, he said.
“Your performance is limited by the amount of memory you have and the speed of the memory, and if you keep adding more GPUs it’s not a win,” Rabii said.
The AI industry calls this the “memory wall.”
“The processor spends more time twiddling its thumbs while waiting for data,” said Micron’s Sadana.
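A rough Python sketch shows why inference hits that wall: generating one token requires streaming essentially all of a model’s weights from memory, and with illustrative hardware numbers (assumptions, not any vendor’s quoted specs), the memory transfer takes far longer than the math.

```python
# Rough "memory wall" estimate for one LLM decode step.
# All hardware numbers below are illustrative assumptions, not specs.

params = 70e9                    # hypothetical 70B-parameter model
model_bytes = params * 2         # fp16: 2 bytes per weight
flops_per_token = 2 * params     # ~2 FLOPs per weight per token

gpu_flops = 1e15                 # assumed ~1 petaFLOP/s of compute
mem_bandwidth = 4e12             # assumed ~4 TB/s of HBM bandwidth

compute_ms = flops_per_token / gpu_flops * 1e3   # time the math takes
memory_ms = model_bytes / mem_bandwidth * 1e3    # time to stream weights

print(f"compute: {compute_ms:.2f} ms, memory: {memory_ms:.1f} ms per token")
# ~0.14 ms of compute vs ~35 ms of memory traffic: the GPU mostly waits,
# which is why adding compute without memory "is not a win."
```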
More and faster memory means AI systems can run larger models, serve more customers at once, and extend the “context windows” that allow chatbots and other LLMs to remember previous conversations with users, which adds a touch of personalization to the experience.
Majestic Labs is designing an AI system for inference with 128 terabytes of memory, about 100 times more than some existing AI systems, Rabii said, adding that the company plans to forgo HBM in favor of lower-cost options. Rabii said the extra RAM and the architecture supporting it will allow its computers to serve many more users simultaneously while using less power than other AI servers.
Sold out for 2026
Wall Street is asking companies in the consumer electronics industry, such as Apple and Dell Technologies, how they will address the memory shortage and whether they will be forced to raise prices or accept thinner margins. Memory now accounts for about 20% of a laptop’s hardware cost, Hsu said, up from between 10% and 18% in the first half of 2025.
In October, Apple finance chief Kevan Parekh told analysts that his company was seeing a “slight pullback” in memory prices, but he downplayed it as “nothing to really consider.”
But in November, Dell said it expected the cost basis of all its products to increase due to memory shortages. COO Jeff Clarke told analysts that Dell plans to change its configuration mix to minimize price impacts, but that the shortage will likely affect the retail prices of its devices.
“I don’t see how this wouldn’t reach the customer base,” said Clarke. “We will do our best to reduce this.”
Even Nvidia, which has emerged as the largest customer in the HBM market, is facing questions over its outsized memory needs, especially for its consumer products.
At a press conference at CES on Tuesday, Nvidia CEO Jensen Huang was asked whether the company was concerned that gaming customers might resent AI technology as memory shortages drive up the prices of game consoles and graphics cards.
Huang said Nvidia is a very large memory customer with long-standing relationships with companies in the field, but that ultimately more memory factories are needed because AI’s requirements are so high.
“Because our demand is so high, every factory, every HBM supplier is preparing, and they’re all doing great,” Huang said.
Micron can meet at most two-thirds of some customers’ medium-term memory needs, Sadana said. But he said the company is building two large factories, known as fabs, in Boise, Idaho, that will begin producing memory in 2027 and 2028. Micron will also break ground on a factory in Clay, New York, that is expected to be operational in 2030.
But for now, “we are out of stock for 2026,” Sadana said.




