Nvidia outlines plans for using light for communication between AI GPUs by 2026 — silicon photonics and co-packaged optics may become mandatory for next-gen AI data centers


Nvidia's CPO enables faster connections at lower power.

The traditional approach of pluggable optical transceivers produces severe electrical loss, up to roughly 22 decibels on 200 Gb/s channels. Compensating for that loss requires complex signal processing and pushes per-port power consumption to about 30 W, which in turn demands additional cooling and adds a potential point of failure. According to Nvidia, these overheads become nearly untenable as the scale of AI deployments grows. By moving away from pluggable transceivers and integrating optical engines directly into the switch silicon (via TSMC's COUPE platform), Nvidia claims substantial gains in efficiency, reliability, and scalability. The system also integrates an ASIC with 14.4 TFLOPS of in-network processing and support for the fourth generation of Nvidia's Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), which cuts latency for collective operations.
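To illustrate why the per-port figure matters at data-center scale, here is a back-of-envelope sketch. The ~30 W per pluggable port comes from the article; the port count and the assumed CPO per-port power are hypothetical, illustrative values, not figures from Nvidia.

```python
# Back-of-envelope transceiver power estimate for a large AI cluster.
# PLUGGABLE_PORT_W is the per-port figure cited in the article; the
# CPO figure and port count are assumptions for illustration only.
PLUGGABLE_PORT_W = 30.0   # W per port, traditional pluggable optics (article)
CPO_PORT_W = 9.0          # W per port, assumed for co-packaged optics (hypothetical)
PORTS = 100_000           # hypothetical port count for a large deployment

pluggable_total_kw = PLUGGABLE_PORT_W * PORTS / 1000  # 3,000 kW
cpo_total_kw = CPO_PORT_W * PORTS / 1000              # 900 kW

print(f"Pluggable optics: {pluggable_total_kw:,.0f} kW")
print(f"Co-packaged optics (assumed): {cpo_total_kw:,.0f} kW")
print(f"Savings: {pluggable_total_kw - cpo_total_kw:,.0f} kW")
```

Even under these rough assumptions, optical I/O alone accounts for megawatts in a large cluster, which is why Nvidia frames per-port power as a first-order design constraint.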

