Optimizing Storage Devices for AI Data Centers

Erich Haratsch
Marvell Semiconductor
Abstract

The transformational launch of GPT-4 has accelerated the race to build AI data centers for large-scale training and inference. While GPUs and high-bandwidth memory are widely recognized as critical components, the essential role of storage devices in AI infrastructure is often overlooked. This presentation will explore the AI processing pipeline within data centers, emphasizing the part storage devices play in both compute and storage nodes. We will examine the characteristics of AI workloads to derive specific requirements for storage devices and controllers.