Qualcomm announces AI chips to compete with AMD and Nvidia

Qualcomm announced Monday that it will launch new AI accelerator chips, signaling new competition for Nvidia, the company that has dominated the AI semiconductor market so far.
The AI chips are a shift from Qualcomm, which has so far focused on semiconductors for wireless connectivity and mobile devices rather than large data centers.
Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a liquid-cooled system that fills a full server rack.
That matches Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as a single computer. AI labs need that computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI parts in its smartphone chips, called Hexagon neural processing units, or NPUs.
“We wanted to prove ourselves in other areas first, and once we built our strength there, it was pretty easy for us to move up to the data center level,” Durga Malladi, Qualcomm’s general manager of data center and edge, said in a call with reporters last week.
Qualcomm’s entry into the data center world marks new competition in the fastest-growing market in tech: equipment for new AI-driven server farms.
Approximately $6.7 trillion in capital expenditures will be spent on data centers through 2030, with the majority going to systems based on AI chips, according to a McKinsey estimate.
The industry is dominated by Nvidia, whose GPUs have more than 90% of the market so far, with sales pushing the company’s market value to over $4.5 trillion. Nvidia’s chips were used to train OpenAI’s GPTs, the large language models used in ChatGPT.
But companies like OpenAI have been looking for alternatives, and earlier this month the startup announced plans to buy chips from AMD, the second-place GPU maker, potentially taking a stake in the company. Other companies, such as Google, Amazon and Microsoft, are also developing their own AI accelerators for their cloud services.
Qualcomm said its chips focus on inference, or running AI models, rather than training, the process by which labs like OpenAI create new AI capabilities from terabytes of data.
The chipmaker said its rack-scale systems will ultimately cost less to operate for customers such as cloud service providers. A single rack uses 160 kilowatts of power, comparable to the high power draw of some Nvidia GPU racks.
Malladi said Qualcomm will sell its AI chips and other parts separately, especially for customers such as hyperscalers who prefer to design their own racks. He said other AI chip companies like Nvidia or AMD might even be customers of some of Qualcomm’s data center parts, such as the central processing unit, or CPU.
“What we tried to do was make sure our customers were in a position to either get it all or say, ‘I’ll mix and match,’” Malladi said.
The company declined to comment on the price of the chips, cards or racks, or how many NPUs can be installed in a single rack. In May, Qualcomm announced a partnership with Saudi Arabia’s Humain, which will become a customer, deploying Qualcomm’s AI inference chips in data centers in the region and committing to as many systems as can use 200 megawatts of power.
Qualcomm said its AI chips have advantages over other accelerators in terms of power consumption, cost of ownership and a new approach to how memory is handled. The company said its AI cards support 768 gigabytes of memory, more than offerings from Nvidia and AMD.
[Image: Qualcomm’s design for an AI server, the AI200. Source: Qualcomm]