Dell is really stepping up its AI game, bringing in an AMD-powered server to beef up its high-performance computing offerings. Picture this: the Dell PowerEdge XE9680, already available with Nvidia GPUs and soon coming with eight AMD Instinct MI300X accelerators. These little powerhouses are a game-changer, allowing businesses to flex their muscles in training and running their own large language models (LLMs). With 1.5TB of high-bandwidth memory (HBM3) and a staggering 21 petaFLOPS of performance, it’s like giving your system a dose of superhero serum.
What’s even cooler is that customers can scale up using AMD’s Global Memory Interconnect (xGMI) standard, which links the GPUs inside the box. And connecting multiple systems? Smooth as silk, thanks to an Ethernet-based AI fabric built on the trusty Dell PowerSwitch Z9664F-ON. Dell previously dropped a version with Nvidia H100 GPUs, showing they’re not playing favorites and want to give users options.
Dell is throwing in the Dell Validated Design for Generative AI with AMD. It’s a mouthful, but essentially, it’s a toolkit for organizations wanting to build their own hardware and networking setup to run LLMs. It’s like giving you the keys to the AI kingdom, guiding you through integration, installation, and performance tweaks.
Under the hood, Dell is rolling with AMD ROCm-powered AI frameworks. That means you get to play with the cool kids – PyTorch, TensorFlow, and OpenAI Triton, all with native support on the PowerEdge XE9680 decked out with AMD accelerators. Dell’s waving the flag for an open approach, marching alongside the Ultra Ethernet Consortium (UEC). They’re all about standards-based networking, letting switches from different vendors mingle in the same system. AMD is on the same wavelength, advocating for an open Ethernet approach to AI networking – a different tune than Nvidia’s jam.