The rise of artificial intelligence is transforming every industry—and at the heart of this transformation lies the network. As AI models grow more powerful, the ability to connect compute seamlessly—within a system, across clusters, and between regional data centers—defines what’s possible.
This keynote explores how Ethernet, the most universal and adaptable networking technology, is becoming the fabric of AI at every scale. From ultra-fast links inside compute nodes to vast XPU clusters, and from resilient regional buildouts to AI platforms, Ethernet provides the foundation for performance, efficiency, and growth.
We’ll look ahead at the innovations shaping this journey—advances that unlock scalability, foster openness, and ensure that AI infrastructure can evolve without limits.
AI infrastructure demands a special class of networking: one built not on a single fabric, but on a synergistic architecture of purpose-built networks tuned for specific roles. This session explores how leading organizations are scaling AI by deploying the right interconnects for the job: NVLink for scale-up GPU communication, Spectrum-X Ethernet or Quantum InfiniBand for east-west, scale-out compute fabrics, and an optimized north-south storage fabric for high-throughput data access.
As inference scales to production, storage networks must evolve, integrating AI-optimized storage appliances directly into the network to reduce bottlenecks and latency. We’ll also examine emerging technologies such as co-packaged optics (CPO) and how they impact the design of next-generation AI data centers. From training to inference, learn how tailored networking fabrics—working in concert—are unlocking the full performance potential of modern AI workloads.