RESEARCH PAPER: AMD — The Next-Generation DPU and NIC Optimized for AI Workloads

By Will Townsend, Patrick Moorhead - October 15, 2024

There has been recent debate about whether next-generation AI applications, including generative AI, will truly enable enterprise transformation in the form of improved knowledge-worker productivity and enhanced customer experiences. However, one thing is clear: larger enterprises, public cloud service providers, and hyperscalers are betting big—investing significantly in modern infrastructure to support private AI cloud and other infrastructure-as-a-service offerings designed to unlock new AI monetization opportunities.

AI workloads continue to tax legacy connectivity hardware and software, given the immense data volumes that feed AI models and support the scale-out of larger cluster sizes. At the same time, the high capital and operational expenses associated with new AI infrastructure cannot be overstated. To address these challenges, AI-optimized data processing units and network interface cards can provide valuable performance enhancements and cost-effective offload. An open ecosystem, buoyed by the Ultra Ethernet Consortium, is also making considerable progress to improve the Ethernet standard to support this objective. Consequently, there is an opportunity to leverage all these advancements to deliver cost-effective, highly performant, and highly available connectivity for today’s modern AI applications.

Moor Insights & Strategy (MI&S) believes that the third-generation AMD Pensando Salina DPU and the AMD Pensando Pollara 400 NIC can provide what is required to facilitate cutting-edge AI services at scale. The AMD Pensando Salina delivers twice the performance of the previous DPU generation. Furthermore, the AMD Pensando Pollara 400 represents the industry’s first Ultra Ethernet-ready AI data interconnect solution, fortified with next-generation RDMA transport capabilities. By pairing these two devices, AMD can help IT infrastructure providers meet the stiff demands of AI applications on-premises, in the cloud, and eventually at the network edge.


Table of Contents

  • Summary
  • AMD Pensando Salina DPU
  • The Argument for Ethernet over InfiniBand
  • AMD Pensando Pollara 400
  • Call to Action

Companies Cited:

  • AMD
  • Arista
  • Cisco
  • Dell
  • HPE
  • Juniper
Will Townsend

Will Townsend manages the networking and security practices for Moor Insights & Strategy, focused on carrier infrastructure providers, carrier services, enterprise networking, and security. He brings over 30 years of technology industry experience in a variety of product, marketing, channel, business development, and sales roles to his advisory position.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics, including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, as well as three industry board appointments.