
AMD's AI punches are landing where it counts - against Nvidia

Not every AI customer wants or can afford Nvidia's powerhouse Blackwell chip

As the artificial-intelligence (AI) market struggles to keep up with demand for infrastructure build-out, computer-chip giant AMD (AMD) has just showcased its collection of AI-ready chips in hopes of capturing greater attention from developers, hyperscalers, enterprises - and, of course, investors.

At AMD's "Advancing AI" event in San Francisco last week, Chief Executive Lisa Su took the stage with a bevy of hardware and software partners to make the case that AMD is the strongest competitor Nvidia (NVDA) has right now.

AMD pointed to growing momentum around its Instinct MI300X data-center AI chip and announced the MI325X, expected to begin shipping later this year, which is designed to improve performance and expand memory capacity for larger AI models. We also got our first preview of the MI355X, a significant redesign of AMD's GPU architecture, which could start shipping in the second half of 2025.

The newly announced MI325X and MI355X represent strong improvements in the performance and capabilities of AMD's chips. But nothing AMD presented suggests it will be able to wrest the flagship performance crown from Nvidia's Blackwell - at least not yet.

Some might conclude that AMD can't compete, gain market share or improve revenue. But that misses a fundamental truth of the AI market: not every AI customer wants or can afford Blackwell.

Read: Retail investor hype for AI stocks seems to be slowing down

Plus: Where Nvidia ranks among fastest-growing AI companies. No, it's not No. 1.

AI infrastructure investment isn't just about having the biggest, most expensive chips.

Where AMD has a big advantage is in "performance per dollar per watt" - a measure not just of raw performance, but of what that performance costs to buy and to power. Microsoft (MSFT) CEO Satya Nadella has called this the "most important factor" in the next wave of AI investments, indicating that infrastructure investment isn't just about having the biggest, most expensive chips, but about finding the balance that lets vendors scale AI performance across different workloads with different efficiencies.

While I don't foresee any drastic change in Nvidia's ability to dominate the AI space, Nvidia could feel pressure from its customers on the prices and margins of these parts if AMD continues to offer compelling options on a performance-per-dollar-per-watt basis. As we have learned from watching investors over the past year, the stock market reacts negatively to any change in Nvidia's growth trajectory.

The future looks bright for AMD in an area where Intel is entrenched.

Beyond GPUs, AMD last week announced new server CPUs, codenamed Turin and officially branded as 5th Gen EPYC processors, along with new networking chips (DPUs) that target the growing need for faster, more open networks built on Ethernet technology - in direct competition with Nvidia's InfiniBand.

One of the more impactful and surprising sets of data I saw from AMD came on the EPYC data-center CPU side. Though CPUs aren't as sexy as GPUs for AI, AMD has a significant opportunity to keep taking share from Intel (INTC) in the data center. As of the end of the third quarter, reports show that AMD hit a new high of 37% market share, a big achievement considering the company had essentially 0% share a decade ago.

Even more interesting is that the key hyperscalers in the market - including Microsoft Azure, Google Cloud (GOOGL), Amazon AWS (AMZN) and Meta Platforms (META) - all evidently have much higher adoption rates, averaging more than 50%, with one reaching nearly 80% share for AMD EPYC CPUs. Meta was on stage at the event and said it continues to ramp EPYC integration in its data centers, with more than 1.5 million of AMD's EPYC CPUs deployed.

Considering that hyperscalers are the early adopters of new, leading technologies, it follows that enterprise and on-premises adoption of EPYC CPUs will come more slowly. The future looks bright for AMD in this area, where Intel is entrenched.

As trust in AMD's products and engineering grows with these hyperscalers, I expect moderate increases in the adoption of AMD GPU technology where it makes sense. Su said the company sees the AI-accelerator market growing at a compound annual growth rate of more than 60% through 2028, reaching a total addressable market of $500 billion in that timeframe. If that holds, even capturing a high-single-digit or low-double-digit share - on the order of $40 billion to $60 billion of a $500 billion market - would be a huge boost for AMD's revenue.

AMD has a long road ahead in competing with, and demonstrating value against, Nvidia's dominant position in AI. But the combination of hardware, software and developer readiness that AMD presented last week suggests that Su has the company moving in the right direction - and with the right level of urgency.

Ryan Shrout is the president of Signal65 and founder of Shrout Research. Follow him on X @ryanshrout. Shrout has provided consulting services for AMD, Qualcomm, Intel, Arm Holdings, Micron Technology, Nvidia and others. Shrout holds shares of Intel.

Read: After AMD's AI event, this analyst says Nvidia's stock is the better play

More: AMD's AI event lacked a wow factor, but here's one reason investors should be excited

-Ryan Shrout

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
