NVIDIA’s Spectrum-X Ethernet With MRC Redefines AI Networking: OpenAI, Microsoft, Oracle Already Deploying

NVIDIA today announced that its Spectrum-X Ethernet platform, now equipped with the Multipath Reliable Connection (MRC) protocol, is rapidly becoming the backbone of the world’s largest AI factories. The open specification, contributed to the Open Compute Project, has already been deployed by OpenAI, Microsoft, and Oracle to power gigascale AI training runs—setting a new industry benchmark for performance and reliability.

“Deploying MRC in the Blackwell generation was very successful and made possible by a strong collaboration with NVIDIA,” said Sachin Katti, head of industrial compute at OpenAI. “MRC’s end-to-end approach enabled us to avoid much of the typical network-related slowdowns and interruptions and maintain the efficiency of frontier training runs at scale.”

MRC is an RDMA transport protocol that allows a single connection to spread traffic across multiple network paths, improving throughput, load balancing, and availability. Think of replacing a single-lane road with an intelligent grid system that reroutes cars around traffic jams in real time—that is the leap MRC delivers for AI data centers.
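The multipath idea above can be sketched in a few lines of Python. This is purely an illustrative toy, not the MRC wire protocol or NVIDIA's implementation: a logical connection tracks an estimated load per path and sprays each packet onto the least-loaded one, rather than pinning the whole flow to a single path as classic single-path RDMA transports do. All class and method names are hypothetical.

```python
# Toy sketch of multipath "packet spraying" (illustrative only, not MRC):
# each packet on a logical connection goes out over the least-loaded of
# several network paths instead of being pinned to one path.

class MultipathConnection:
    def __init__(self, num_paths):
        # Estimated outstanding bytes in flight on each path.
        self.load = [0] * num_paths

    def pick_path(self, packet_size):
        # Least-loaded path wins; ties break toward the lowest index.
        path = min(range(len(self.load)), key=lambda p: self.load[p])
        self.load[path] += packet_size
        return path

    def ack(self, path, packet_size):
        # An acknowledgement frees estimated capacity on that path.
        self.load[path] -= packet_size

conn = MultipathConnection(num_paths=4)
paths_used = [conn.pick_path(1500) for _ in range(8)]
print(paths_used)  # packets spread evenly across all four paths
```

A real transport would fold in telemetry such as measured round-trip time and congestion signals rather than a simple byte counter, but the core move is the same: per-packet path choice instead of per-flow pinning.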

Background

Building large-scale AI models requires networking that can handle unprecedented data volumes without bottlenecks. Traditional Ethernet fabrics often suffer from packet loss and congestion, which stalls GPU utilization and slows training. NVIDIA’s Spectrum-X was purpose-built to solve these challenges, combining hardware designed for AI workloads with advanced telemetry and fabric control.

[Image] Source: blogs.nvidia.com

Microsoft’s Fairwater and Oracle’s Abilene data centers—two of the largest AI factories ever built—now rely on MRC over Spectrum-X Ethernet to meet the extreme performance and efficiency demands of frontier large language models. These deployments prove MRC works at massive scale, delivering high GPU utilization by balancing traffic across all available paths and dynamically avoiding overloaded routes.


What This Means

The open release of MRC means any organization can build AI networks that match the performance of the hyperscalers. By enabling intelligent retransmission and real-time congestion avoidance, MRC minimizes the impact of data loss on long-running jobs—dramatically reducing GPU idle time. Administrators also gain granular visibility into traffic flows, simplifying troubleshooting and operational management.
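The "intelligent retransmission" idea can be illustrated with a minimal sketch, again assuming nothing about NVIDIA's actual implementation: instead of resending everything after a loss, the sender retransmits only the sequence numbers the receiver reports missing, and can steer those retransmits onto a different path. The function name and data shapes here are hypothetical.

```python
# Illustrative sketch (not NVIDIA's implementation) of selective
# retransmission: only the sequence numbers the receiver never saw
# are resent, keeping GPU idle time from a loss event small.

def missing_sequences(received, highest_sent):
    """Return the sequence numbers absent from `received`, up to highest_sent."""
    return sorted(set(range(highest_sent + 1)) - set(received))

# Packets 0..9 were sent; 3 and 7 were dropped on a congested path.
received = [0, 1, 2, 4, 5, 6, 8, 9]
to_retransmit = missing_sequences(received, 9)
print(to_retransmit)  # [3, 7] -- only the lost packets are resent
```

Combined with per-path load tracking, the retransmitted packets would be sent over a path other than the one that dropped them, which is what keeps long-running training jobs from stalling on a single congested route.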

This development signals a shift from proprietary, closed networking solutions to a standardized, open approach that accelerates AI innovation. With MRC, NVIDIA has effectively raised the bar for what Ethernet can achieve in the AI era, making gigascale training more accessible and efficient. Industry leaders are already voting with their deployments, confirming that this protocol is not just theoretical but a proven, production-ready technology.
