Dell and Nvidia have joined forces to deliver AI-powered hardware solutions tailored for enterprises. The collaboration aims to bring generative AI, including large language models (LLMs) of the kind behind the popular chatbot ChatGPT, into on-premises environments. The joint effort, known as Project Helix, will provide validated blueprints and reference deployments to help businesses get their AI workloads up and running.
Under Project Helix, Dell will contribute its PowerEdge servers, such as the XE9680 and R760xa, while Nvidia will supply its H100 Tensor Core GPUs, the successor to the A100 that powers ChatGPT. Dell's PowerScale and ECS storage will integrate with the hardware, and Nvidia will provide its AI Enterprise software suite, which includes the NeMo framework for generative AI.
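To make the workload concrete, the sketch below shows the kind of job this stack is built for: loading a pretrained LLM onto local GPUs and generating text against a private prompt. It uses the Hugging Face transformers library purely as a generic stand-in for the NeMo-based workflow Nvidia describes, and the model name is a placeholder rather than anything Project Helix specifies.

```python
# Minimal sketch of an on-premises LLM inference workload (illustrative only;
# Project Helix itself builds on Nvidia AI Enterprise and NeMo tooling).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; a real deployment would use a much larger foundation model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit larger models in GPU memory
    device_map="auto",          # spread layers across available GPUs (requires `accelerate`)
)

# Data stays inside the data center: the prompt never leaves the local machine.
prompt = "Summarize the key terms of this supplier contract:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```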
Kari Briski, VP of software product management at Nvidia, emphasized the importance of bringing security and privacy to enterprise customers through on-premises AI solutions. While cloud-based AI is prevalent in enterprises, Briski highlighted the need for on-premises LLMs, saying that every enterprise needs an LLM for its business operations.
Project Helix aims to operationalize LLMs for practical enterprise use, providing easily replicable, pre-trained foundation models. The initiative's blueprints will guide enterprises in customizing their AI deployments to meet their specific needs. Varun Chhabra, SVP for product marketing at Dell, emphasized the importance of striking the right balance of compute resources, saying Project Helix offers the best approach to achieving it.
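Customizing a pre-trained foundation model on a company's own data is the kind of step such blueprints are meant to standardize. The sketch below illustrates one common, resource-light way to do it, attaching LoRA adapters with the Hugging Face peft library; this is an assumption-laden illustration of the general technique, not Project Helix's actual recipe, and the model and module names are placeholders.

```python
# Illustrative sketch of parameter-efficient customization of a foundation model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# Attach small, trainable low-rank adapters instead of updating every weight,
# keeping customization cheap enough to run on a handful of local GPUs.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # attention projection layers in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# From here, the adapted model would be trained on the enterprise's own documents
# with a standard training loop, then served on-premises alongside the base model.
```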
Chhabra further noted that Dell's hardware not only supports on-premises AI but also lets enterprises harness AI deployed in the cloud. Designs based on Project Helix are scheduled to be available from July through traditional channels and APEX flexible consumption options, giving enterprises a comprehensive AI stack to build their business on.