Kyber is the successor to NVIDIA’s Oberon rack design for its scale-up GPU configurations. It will first appear for VR300 NVL576.1
From Nvidia’s Jensen Huang, Ian Buck, and Charlie Boyle on the future of data center rack density:
Kyber also includes a rack-sized sidecar to handle power and cooling. Therefore, while it is a 600kW rack, it requires two racks’ worth of physical footprint, at least in the current version shown by Nvidia.
Here are some photos I took of a Kyber proof of concept at GTC25:
Per the above, the front of the rack is divided into subchassis, each holding 18 compute blades. There are four subchassis per rack, for a total of 72 compute blades. Since NVL576 implies 576 GPUs per rack, that works out to 8 GPUs per compute blade; and given that VR300 will have four GPUs per package, each compute blade carries two GPU packages.
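The accounting above can be sketched as a quick sanity check. The 576-GPU total is inferred from the NVL576 name; the other figures come straight from the text:

```python
# Rack arithmetic for Kyber / VR300 NVL576, using figures from the text.
gpus_per_rack = 576        # inferred from the "NVL576" naming convention
subchassis_per_rack = 4
blades_per_subchassis = 18
gpus_per_package = 4       # VR300: four GPUs per package, per the text

blades_per_rack = subchassis_per_rack * blades_per_subchassis  # 72
gpus_per_blade = gpus_per_rack // blades_per_rack              # 8
packages_per_blade = gpus_per_blade // gpus_per_package        # 2

print(blades_per_rack, gpus_per_blade, packages_per_blade)  # 72 8 2
```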
There was a placard, which read:
Inside Kyber
- Compute Blades
  - Up to 16 GPUs and two CPUs each
- NVLink Switch Blades
  - Connects all GPUs at full bandwidth
- Midplane Board
  - Connection hub
  - Eliminates two miles of copper cabling
It is unclear where the inconsistency lies between the placard's figure of up to 16 GPUs per blade and the 8 GPUs per blade implied by the NVL576 count.
The rear of each subchassis has two groups of three NVLink Switch trays, for a total of 24 NVLink Switch trays per rack. Each group also includes a mystery blade; this is either a chassis manager or a different type of NVLink Switch tray (perhaps for cross-subchassis cabling?).
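The tray count works out as follows; the mystery-blade total of eight per rack is derived from the one-per-group description above, not stated directly:

```python
# Rear-of-rack tray arithmetic, using figures from the text.
subchassis_per_rack = 4
groups_per_subchassis = 2
trays_per_group = 3

nvlink_trays = subchassis_per_rack * groups_per_subchassis * trays_per_group  # 24
mystery_blades = subchassis_per_rack * groups_per_subchassis                  # 8 (derived)

print(nvlink_trays, mystery_blades)  # 24 8
```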