Glenn's Digital Garden


    AMD CDNA

    Oct 21, 2024

    CDNA is AMD’s designation for its data center GPU architecture, as distinct from the RDNA architecture used in its consumer graphics GPUs.

    Generation   GPU Name         Notable new features
    CDNA         MI100            matrix core
    CDNA 2       MI250X
    CDNA 3       MI300X, MI300A   XCD instead of GCD; 4:2 structured sparsity

    See AMD CDNA Architecture (amd.com).
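    The 4:2 structured sparsity introduced with CDNA 3 means that within every group of four consecutive weights, at most two may be nonzero, a pattern the matrix cores can exploit for higher effective throughput. A minimal sketch of how such a pattern can be produced, assuming magnitude-based pruning (the function name and selection criterion are illustrative, not AMD's implementation):

    ```python
    import numpy as np

    def prune_4_2(weights):
        """Enforce a 4:2 sparsity pattern: in each group of 4 consecutive
        values, keep the 2 largest-magnitude entries and zero the rest."""
        w = np.asarray(weights, dtype=float).copy()
        groups = w.reshape(-1, 4)                        # groups of 4
        # indices of the 2 smallest-magnitude entries in each group
        drop = np.argsort(np.abs(groups), axis=1)[:, :2]
        np.put_along_axis(groups, drop, 0.0, axis=1)
        return groups.reshape(w.shape)

    w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.01])
    print(prune_4_2(w))   # two nonzeros survive in each group of four
    ```

    Hardware that supports this pattern stores only the surviving values plus per-group index metadata, roughly halving the weight storage and multiply work for the sparse operand.
    
    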
