JUPITER is the first European exascale supercomputer, installed starting in 2024 at Forschungszentrum Jülich. Technical details are here: JUPITER Technical Overview (fz-juelich.de).

It debuted at #4 on the Top500 list at ISC25, using 4,650 nodes¹ (79% of the system) and delivering 170.6 TF/node, or 42.6 TF/GPU. The run took 1 hour and 47 minutes.
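A quick back-of-envelope check of these figures (a sketch; the 4-GPUs-per-node count is an inference from 170.6 / 42.6 ≈ 4):

```python
# Sanity-check the Top500 debut numbers quoted above.
nodes = 4_650                 # nodes in the HPL run (79% of the system)
tf_per_node = 170.6           # FP64 HPL TF per node
gpus_per_node = 4             # inferred from 170.6 / 42.6 ≈ 4

rmax_pf = nodes * tf_per_node / 1_000
print(f"Implied Rmax:      {rmax_pf:.1f} PFlop/s")                 # ~793.3 PFlop/s
print(f"Per GPU:           {tf_per_node / gpus_per_node:.2f} TF")  # 42.65 TF
print(f"Implied full size: ~{nodes / 0.79:.0f} nodes")             # ~5,886 nodes
```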

System overview

The system is built on BullSequana XH3000 and will have two partitions:

  • Booster Module: NVIDIA GH200 Grace Hopper superchips
    • roughly 6,000 nodes, 4 GPUs per node (the 4,650-node Top500 run used 79% of the system)
    • 4 NDR200 InfiniBand HCAs per node (one per GPU)
  • Cluster Module: SiPearl Rhea CPUs
    • “more than 1300 nodes”
    • “more than 5 PetaFLOP/s (FP64, HPL)” (see the per-node sketch after this list)
    • 1 NDR200 InfiniBand HCA per node
    • Integrated by ParTec per ISC25
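Treating the quoted “more than” figures as lower bounds, the Cluster Module works out to only a few TF per node — a minimal sketch:

```python
# Rough FP64 estimate per Cluster Module node, treating the quoted
# "more than" figures as lower bounds.
cluster_nodes = 1_300         # "more than 1300 nodes"
cluster_pf = 5.0              # "more than 5 PetaFLOP/s (FP64, HPL)"

print(f"~{cluster_pf * 1_000 / cluster_nodes:.1f} TF/node (FP64, HPL)")  # ~3.8 TF/node
```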

The nodes were on display at SC24.

Network architecture

NDR200 InfiniBand in a Dragonfly+ topology will be used throughout:

  • Each Dragonfly+ group is a nonblocking fat tree
  • 25x GPU groups
  • 2x CPU, storage, and management groups
  • 867x switches
  • with “25400 end points”
    • At least 24,000 for GPU nodes
    • Leaving at most ~1,400 for CPU, storage, and management nodes (see the sketch after this list)
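Putting the endpoint numbers together (a sketch; the 4-endpoints-per-Booster-node figure — one NDR200 port per GPU — is an assumption consistent with the counts above):

```python
# Endpoint accounting for the Dragonfly+ fabric.
total_endpoints = 25_400            # "25400 end points"
booster_nodes = 6_000               # assumed full Booster size (~5,900 ran HPL)
ports_per_booster_node = 4          # assumption: one NDR200 port per GPU

gpu_endpoints = booster_nodes * ports_per_booster_node   # 24,000
print(f"GPU endpoints:                   {gpu_endpoints}")
print(f"Left for CPU/storage/management: {total_endpoints - gpu_endpoints}")  # 1,400
```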

Storage subsystem

Storage: IBM ESS 3500

  • 40 IBM ESS 3500 servers
  • 29 PB raw, 21 PB formatted
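The raw-to-formatted ratio (21/29 ≈ 72%) is in line with a parity-heavy declustered-RAID layout such as 8+3p, though the exact scheme isn't stated here; a quick per-server breakdown:

```python
# Per-server capacity and usable fraction for the ESS 3500 storage tier.
servers = 40
raw_pb, formatted_pb = 29, 21

print(f"~{raw_pb / servers * 1_000:.0f} TB raw per server")         # ~725 TB
print(f"~{formatted_pb / raw_pb:.0%} usable after RAID overhead")   # ~72%
```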

Facility

JUPITER is being installed in a modular data center. Forschungszentrum Jülich has posted some great photos of what that looks like.

Cost

From Peeling The Covers Off Germany’s Exascale “Jupiter” Supercomputer:

“The Jupiter supercomputer’s core funding – not including that auxiliary storage – was €500 million, of which €273 million (about $314.7 million) went to Eviden and ParTec for hardware, software, and services, with the remaining €227 million ($261.4 million) going for power, cooling, and operations people.”
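The split is internally consistent — the two euro figures sum to €500 million, and both dollar conversions imply the same exchange rate:

```python
# Consistency check on the funding figures in the quote.
hw_eur, hw_usd = 273, 314.7     # Eviden/ParTec: hardware, software, services
ops_eur, ops_usd = 227, 261.4   # power, cooling, operations

assert hw_eur + ops_eur == 500  # total core funding, EUR millions
print(f"Implied USD/EUR rates: {hw_usd / hw_eur:.3f} and {ops_usd / ops_eur:.3f}")
# Both ~1.15, so the two conversions are consistent.
```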

Footnotes

  1. Andreas Herten, June 18, 2025 (https://bsky.app/profile/andih.bsky.social/post/3lrvrguvtzc2b)