Strategic partnership unleashes cost-efficient AI deployments and modernized virtualization with AMD Instinct GPUs and EPYC CPUs
At Red Hat Summit 2025 in Boston, Red Hat and AMD announced an expanded strategic collaboration to fuel next-generation AI inference, virtualization, and hybrid cloud performance. The move underscores Red Hat's growing leadership in AI infrastructure, offering enterprise-ready AI and virtualization solutions that scale efficiently on AMD's compute platforms.
The collaboration brings full support for AMD Instinct™ GPUs on Red Hat OpenShift AI, enabling enterprises to deploy AI workloads with greater performance and cost-effectiveness across hybrid environments. Testing with AMD Instinct MI300X GPUs on Microsoft Azure scaled both small and large language models (LLMs) within a single VM, significantly reducing cost by eliminating the need to spread workloads across multiple VMs.
Driving Generative AI Performance Forward
Red Hat and AMD are also driving upstream innovation by contributing to the open source vLLM community. Together, they are advancing:
- Enhanced performance for quantized and dense AI models on AMD GPUs
- Multi-GPU scalability, improving energy efficiency and throughput
- Enterprise-grade support through Red Hat AI Inference Server with native compatibility for AMD Instinct GPUs
As the top commercial contributor to vLLM, Red Hat ensures organizations can confidently deploy AI models on validated AMD hardware, offering flexibility without compromise.
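To make the capabilities listed above more concrete, the sketch below shows how an LLM might be served with vLLM across multiple AMD Instinct GPUs using tensor parallelism and quantized weights. It is a minimal illustration, not a validated Red Hat or AMD configuration: the model ID, GPU count, and quantization setting are placeholders, and the exact options available depend on the vLLM version and ROCm build in use.

```python
# Minimal vLLM offline-inference sketch. Assumes a ROCm build of vLLM on a node
# with multiple AMD Instinct GPUs; the model and settings below are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
    tensor_parallel_size=2,                    # shard the model across 2 GPUs
    quantization="fp8",                        # quantized serving, if supported by the build
)

sampling = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(
    ["Summarize the benefits of running inference on a single multi-GPU VM."],
    sampling,
)

for out in outputs:
    # Each result carries the prompt and one or more generated completions.
    print(out.outputs[0].text)
```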
Modernizing the Datacenter with AMD EPYC and OpenShift Virtualization
Beyond AI, Red Hat and AMD are targeting infrastructure modernization with Red Hat OpenShift Virtualization optimized for AMD EPYC™ CPUs. The combination lets organizations consolidate virtual machine and container workloads on a unified hybrid cloud platform, delivering lower total cost of ownership and greater power efficiency.
“Our extended collaboration with AMD expands the spectrum of options for organizations seeking to ready their IT environments for an ever-evolving future.”
– Ashesh Badani, SVP & Chief Product Officer, Red Hat
Validated on servers from Dell, HPE, and Lenovo, this stack enables businesses to transform legacy datacenters into AI-ready, cloud-native environments, accelerating innovation without sacrificing existing investments.
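As a rough illustration of what "VMs and containers on one platform" looks like in practice, the sketch below queries both container pods and KubeVirt-style virtual machines from the same cluster API using the Kubernetes Python client. It assumes an OpenShift cluster with OpenShift Virtualization installed and a working kubeconfig; the namespace name is a placeholder.

```python
# Minimal sketch: list container and VM workloads from one cluster API.
# Assumes OpenShift Virtualization (KubeVirt) is installed and a kubeconfig
# is available; "demo-apps" is a placeholder namespace.
from kubernetes import client, config

config.load_kube_config()
namespace = "demo-apps"

# Container workloads: ordinary pods.
pods = client.CoreV1Api().list_namespaced_pod(namespace)
for pod in pods.items:
    print(f"pod: {pod.metadata.name} ({pod.status.phase})")

# VM workloads: KubeVirt VirtualMachine custom resources, served by the same API.
vms = client.CustomObjectsApi().list_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace=namespace, plural="virtualmachines"
)
for vm in vms.get("items", []):
    print(f"vm:  {vm['metadata']['name']}")
```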
Executive Perspectives
Ashesh Badani, Red Hat’s SVP and CPO, emphasized the importance of flexibility:
“Fully realizing the benefits of AI means that organizations must have the choice and flexibility to optimize their IT footprint for the rigors of scaling demand.”
Philip Guido, EVP and CCO at AMD, added: “By combining Red Hat’s open source platforms with AMD Instinct GPUs and EPYC CPUs, we’re delivering the performance and efficiency customers demand to accelerate AI, virtualization and hybrid-cloud innovation.”