There has been recent debate about whether next-generation AI applications, including generative AI, will truly enable enterprise transformation in the form of improved knowledge-worker productivity and enhanced customer experiences. However, one thing is clear: large enterprises, public cloud service providers, and hyperscalers are betting big, investing heavily in modern infrastructure to support private AI clouds and other infrastructure-as-a-service offerings designed to unlock new AI monetization opportunities.
AI workloads continue to tax legacy connectivity hardware and software, given the immense data volumes that feed AI models and the scale-out to ever-larger cluster sizes. At the same time, the high capital and operational expenses associated with new AI infrastructure cannot be overstated. To address these challenges, AI-optimized data processing units (DPUs) and network interface cards (NICs) can provide valuable performance enhancements and cost-effective offload. An open ecosystem, buoyed by the Ultra Ethernet Consortium, is also making considerable progress in evolving the Ethernet standard to meet the needs of AI networking. Consequently, there is an opportunity to leverage all of these advancements to deliver cost-effective, highly performant, and highly available connectivity for modern AI applications.
Moor Insights & Strategy (MI&S) believes that the third-generation AMD Pensando Salina DPU and the AMD Pensando Pollara 400 NIC can provide what is required to deliver cutting-edge AI services at scale. AMD Pensando Salina delivers twice the performance of the previous DPU generation. Furthermore, AMD Pensando Pollara 400 represents the industry’s first Ultra Ethernet-ready AI data interconnect solution, fortified with next-generation RDMA transport capabilities. By pairing these two devices, AMD can help IT infrastructure providers meet the stiff demands of AI applications on-premises, in the cloud, and eventually at the network edge.
Click the logo below to download the research paper and read more.
Table of Contents
- Summary
- AMD Pensando Salina DPU
- The Argument for Ethernet over InfiniBand
- AMD Pensando Pollara 400
- Call to Action
Companies Cited:
- AMD
- Arista
- Cisco
- Dell
- HPE
- Juniper