Nvidia’s shares tumbled by up to 5% on Tuesday following significant advancements from rivals in the fiercely competitive AI chip market.
Intel intensified its direct rivalry with Nvidia by unveiling its latest AI chip, Gaudi 3.
Generative AI company Stability AI commented on the development, “Alternatives like Intel Gaudi accelerators give us better price performance, reduced lead time, and ease of use with our models taking less than a day to port.”
In other words, “good enough” coupled with lower prices sounds pretty attractive in a supply-constrained market growing by triple digits. And the software, at least for generative AI, seems to be an ever-smaller hurdle to adoption.
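The “less than a day to port” claim is plausible because Gaudi plugs into the same frameworks developers already use. The sketch below is a rough illustration only, not Stability AI’s actual code: it shows the kind of minimal changes a PyTorch training step typically needs to target Gaudi through Intel’s habana_frameworks bridge. The “hpu” device name and mark_step calls follow Habana’s documented lazy-execution pattern, but the exact APIs should be checked against the current Gaudi software release.

```python
# Hypothetical sketch: moving an existing PyTorch training step from CUDA to Gaudi.
# Assumes Intel's Gaudi PyTorch bridge (habana_frameworks) is installed.
import torch
import habana_frameworks.torch.core as htcore  # Gaudi ("hpu") backend

device = torch.device("hpu")  # was: torch.device("cuda")

model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

x = torch.randn(32, 1024, device=device)
y = torch.randn(32, 1024, device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
htcore.mark_step()   # flush the lazily accumulated graph to the accelerator
optimizer.step()
htcore.mark_step()
```

The rest of the training loop, data loading, and model code can stay largely unchanged, which is the substance of the “ease of use” argument.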
Gaudi 3 vs. Nvidia
The Gaudi 3 chip is set to compete directly against Nvidia’s H100 AI chips, which have helped power much of Nvidia’s surge in revenue and income over the past year.
Intel chose not to draw direct comparisons with AMD. Instead, it highlighted its performance advantages over Nvidia’s H100 and H200 for both training and inference. Intel also sees its Xeon-Gaudi combination as a key strength.
The newly introduced Gaudi 3 is manufactured on TSMC’s 5nm process. It carries 128GB of HBM3E memory and doubles the Ethernet networking performance of its predecessor. While it features eight matrix engines and 64 tensor processor cores, Gaudi 3 lacks the 4-bit math support found in Nvidia’s new B100, a feature that can effectively double inference throughput.
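That gap matters mainly for serving large models, where throughput is often bound by how quickly weights can be streamed from memory and pushed through the math units. As a rough, hypothetical illustration (not Intel’s or Nvidia’s actual kernels), the sketch below packs two 4-bit values into each byte to show why halving weight precision roughly doubles the number of weights moved per unit of memory bandwidth, which is the basis of the “double the inference performance” claim.

```python
# Illustrative sketch only: two 4-bit values fit in one byte, so the same
# memory bandwidth moves twice as many weights per second as 8-bit storage.
# Real FP4/INT4 inference kernels are far more involved than this packing demo.
import numpy as np

weights_int8 = np.random.randint(-8, 8, size=1_000_000, dtype=np.int8)

# Pack pairs of 4-bit values (range -8..7) into single bytes.
lo = (weights_int8[0::2] & 0x0F).astype(np.uint8)
hi = (weights_int8[1::2] & 0x0F).astype(np.uint8)
packed = (hi << 4) | lo

print(weights_int8.nbytes)  # 1,000,000 bytes at 8 bits per weight
print(packed.nbytes)        # 500,000 bytes at 4 bits per weight
```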
While Ethernet support may facilitate enterprise adoption, it falls short of the performance offered by NVLink5. Thus, the latter will likely be favored by those constructing large-scale AI facilities.
According to Intel, the Gaudi 3 AI accelerator will offer 50% better inference performance than Nvidia’s H100.
Furthermore, it is about 40% more power efficient than the H100. Intel said the new accelerator will be priced “a lot lower” than Nvidia’s AI chips and will be available in the second quarter from system builders including Dell, HPE, and Supermicro.
Alphabet ventures into chip development with Axion
Alphabet is also entering the chip business with the in-house development of its Axion chip, which will help power Google’s big-data analysis and reduce the company’s reliance on Nvidia.
While some analysts on Wall Street see Google’s Axion chip as a direct competitor to Nvidia, Google vice president Amin Vahdat said, “I see this as a basis for growing the size of the pie.”
The Axion chips are Arm-based CPUs that can handle a wide range of tasks supporting AI workloads.
While Nvidia is set to face increased competition, Big Technology founder Alex Kantrowitz said that the company’s edge in AI-related software is what will set it apart from other chip companies.
“Developers look to Nvidia’s software to both train and run these AI models. This just gives it an advantage that is very difficult to catch up to. Intel can tell us all day long about how powerful its chips are, it doesn’t have that software advantage that Nvidia does. So that means AI developers are locked into the Nvidia ecosystem,” Kantrowitz said.