AMD announced a strategic partnership with OpenAI to jointly develop AI infrastructure, leveraging AMD’s hardware for OpenAI’s advanced models. The agreement includes an option for OpenAI to acquire 160 million AMD shares at $1 per share, a total of $160 million, and a commitment to deploy AMD Instinct MI300-series accelerators in data centers totaling up to 6 gigawatts of power capacity. The collaboration aims to support OpenAI’s scaling of generative AI technologies like GPT-5, which require massive computational resources.
The chips will be tailored to power training and inference workloads, with the first deployments scheduled for mid-2026. The partnership positions both companies to meet growing demand for AI computing amid expansion in the cloud and enterprise sectors.
The main objective of the partnership
The collaboration focuses on integrating AMD’s Instinct MI300X and upcoming MI400-series GPUs into OpenAI’s infrastructure for training large language models. OpenAI plans to deploy up to 10,000 MI300X accelerators initially, providing over 100 exaflops of AI performance. The data centers will run AMD’s ROCm software stack for workload optimization, supporting frameworks like PyTorch and TensorFlow. Objectives include reducing dependency on single suppliers such as Nvidia while accelerating OpenAI’s roadmap for multimodal AI capabilities. The partnership also extends to joint R&D on energy-efficient cooling and power management in hyperscale environments.
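The headline exaflops figure can be sanity-checked with back-of-envelope arithmetic. Only the accelerator count comes from the announcement; the per-accelerator rate below is an assumed round number for illustration, since quoted peak rates vary by numeric precision and sparsity:

```python
# Back-of-envelope check of the aggregate compute claim.
# The accelerator count is from the text; the per-accelerator peak
# rate is a hypothetical round figure (real peak rates depend on
# data type and sparsity settings).
ACCELERATOR_COUNT = 10_000
ASSUMED_PEAK_PFLOPS = 10.0  # hypothetical petaflops per accelerator

total_exaflops = ACCELERATOR_COUNT * ASSUMED_PEAK_PFLOPS / 1_000  # 1 EF = 1,000 PF
print(f"Aggregate peak: {total_exaflops:.0f} exaflops")  # prints "Aggregate peak: 100 exaflops"
```

Under these assumptions the fleet lands at the roughly 100-exaflop scale cited above; a lower per-chip rate would require proportionally more accelerators.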
OpenAI’s option to purchase 160 million AMD shares at $1 each totals $160 million and could represent up to 0.5 percent of AMD’s outstanding shares post-exercise. The deal includes performance milestones for vesting, with full ownership granted upon deployment of specified compute clusters. AMD benefits from recurring revenue through hardware sales, estimated at $2 billion over three years for the initial phase. The structure gives OpenAI equity exposure to AMD’s growth in AI semiconductors while securing long-term supply commitments. No cash infusion occurs immediately, but the option strengthens strategic alignment.
What about the infrastructure and performance?
The buildout targets six data centers with 1 gigawatt of capacity each, powered by renewable sources to minimize environmental impact. Facilities will be located in the United States and Europe, with construction beginning in Q1 2026 and operational phases running through 2028. AMD supplies the accelerators, interconnects, and EPYC CPUs for server nodes, achieving up to 50 percent cost savings over alternative architectures. Cooling systems incorporate liquid-immersion technology, reducing energy use by 40 percent compared to air-cooled setups. The infrastructure supports hybrid cloud configurations, allowing OpenAI to scale compute dynamically. Security features include dedicated networks and compliance with ISO 27001 standards for data protection.
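The capacity and cooling figures combine into simple fleet-level arithmetic. The share of facility power spent on cooling below is a hypothetical assumption, not a figure given by either company; only the site count, per-site capacity, and 40 percent reduction come from the text:

```python
# Fleet-level power arithmetic from the stated buildout figures.
SITES = 6
GW_PER_SITE = 1.0
total_gw = SITES * GW_PER_SITE  # matches the stated 6-gigawatt target

# Stated claim: liquid immersion cuts cooling energy 40% versus air cooling.
# Assume (hypothetically) cooling is 30% of facility power in an air-cooled design.
AIR_COOLING_SHARE = 0.30   # assumed fraction of facility power
IMMERSION_SAVINGS = 0.40   # stated reduction in cooling energy
saved_gw = total_gw * AIR_COOLING_SHARE * IMMERSION_SAVINGS
print(f"Total capacity: {total_gw:.0f} GW; cooling energy saved: {saved_gw:.2f} GW")
# prints "Total capacity: 6 GW; cooling energy saved: 0.72 GW"
```

Even under these rough assumptions, a 40 percent cooling reduction across 6 gigawatts amounts to hundreds of megawatts of avoided load, which is why immersion cooling features so prominently in hyperscale buildouts.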
AMD’s MI300 series delivers 5.3 petaflops of FP16 performance per accelerator, with high-bandwidth memory supporting the rapid data transfers essential for transformer models. The ROCm platform provides open-source optimization tools, reducing development time for OpenAI’s engineers. Integration with OpenAI’s Triton compiler enables custom kernels for specific workloads, improving efficiency by 20 percent. The setup achieves low-latency inference for real-time applications like chat interfaces and content generation. Benchmark tests show 30 percent faster training times than previous generations at similar power envelopes.
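To put the per-accelerator number in context, transformer training time is often roughed out with the common compute ≈ 6 × parameters × tokens approximation. In the sketch below, only the 5.3-petaflop rate comes from the text; the model size, token count, cluster size, and achieved utilization are all hypothetical assumptions:

```python
# Rough training-time estimate via the compute ~ 6 * N * D rule of thumb.
# Only PEAK_FLOPS comes from the article; every other value is hypothetical.
PARAMS = 70e9          # hypothetical parameter count
TOKENS = 2e12          # hypothetical training tokens
PEAK_FLOPS = 5.3e15    # 5.3 petaflops FP16 per accelerator (from the text)
UTILIZATION = 0.40     # assumed fraction of peak actually sustained
ACCELERATORS = 1_000   # hypothetical cluster slice

total_flops = 6 * PARAMS * TOKENS                        # ~8.4e23 FLOPs
effective_rate = PEAK_FLOPS * UTILIZATION * ACCELERATORS  # delivered FLOP/s
days = total_flops / effective_rate / 86_400
print(f"Estimated training time: {days:.1f} days")  # prints "Estimated training time: 4.6 days"
```

Sustained utilization on large clusters is usually well below peak, so real-world figures vary widely; the point is that per-chip throughput multiplies directly into wall-clock training time at fleet scale.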
Let’s talk about the deployment timelines
Initial prototypes deploy in Q2 2026, with full data center operations commencing in Q4 2026 and scaling to 6 gigawatts by 2028. Share purchase options vest in phases tied to accelerator deliveries and performance validations. AMD commits to quarterly reviews to ensure compatibility with OpenAI’s evolving requirements. The partnership includes talent exchange programs, with engineers collaborating on chiplet designs for future iterations. Milestone announcements will be made at technology conferences such as CES 2026.

