AI-ready solutions to suit any workload.
Rapidly deploy your AI infrastructure
Deploy customised AI solutions using Supermicro NVIDIA MGX™ Systems, equipped with NVIDIA’s latest Grace Hopper™ and Grace™ CPU Superchips.
The modular MGX design aims to unify AI infrastructure and enhance computing speed in rack-optimised 1U and 2U chassis, offering unparalleled adaptability and scalability for current and future GPU, DPU, and CPU workloads.
LUNIQ AI-Ready Supermicro Servers
Supermicro’s state-of-the-art liquid-cooling technology supports energy-efficient, highly dense setups such as the 1U 2-node system, which includes two NVIDIA GH200 Grace Hopper Superchips. Each Superchip combines an NVIDIA H100 Tensor Core GPU and an NVIDIA Grace CPU, connected through the high-bandwidth, low-latency NVLink-C2C interconnect.
1U Grace Hopper MGX Systems
CPU+GPU Coherent Memory System for AI and HPC Applications
- Up to 2 NVIDIA GH200 Grace Hopper Superchips, each featuring a 72-core Arm CPU and an H100 Tensor Core GPU tightly coupled with coherent memory
- Up to 96GB HBM3 and 480GB LPDDR5X integrated memory per Grace Hopper Superchip
- NVLink® Chip-to-Chip (C2C) high-bandwidth, low-latency interconnect
- Up to 3 PCIe 5.0 x16 slots supporting NVIDIA BlueField®-3, NVIDIA ConnectX®-7 or additional GPUs
- 8 hot-swap E1.S and 2 M.2 slots
- Air-cooling and liquid-cooling options
1U Grace Hopper MGX Systems Configurations at a Glance
Develop novel approaches on accelerated infrastructure that enables engineers and scientists to address important global challenges using large datasets, complex models, and new generative AI workloads. Housed in a single 1U chassis, Supermicro’s dual NVIDIA GH200 Grace Hopper Superchip systems deliver exceptional performance for a wide range of CUDA applications, particularly memory-intensive AI workloads. Flexible bays accommodate both the two onboard H100 GPUs and full-size PCIe expansion cards, meeting the demands of accelerated computing now and in the future, including sophisticated scale-out and clustering options.
1U 1-node Grace Hopper Configurations
| SKU | ARS-111GL-NHR | ARS-111GL-NHR-LCC |
|---|---|---|
| Form Factor | 1U system with a single NVIDIA Grace Hopper Superchip (air-cooled) | 1U system with a single NVIDIA Grace Hopper Superchip (liquid-cooled) |
| CPU | 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip | 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip |
| GPU | NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e (coming soon) | NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e (coming soon) |
| Memory | Up to 480GB of integrated LPDDR5X memory with ECC | Up to 480GB of integrated LPDDR5X memory with ECC |
| Drives | 8x hot-swap E1.S drives and 2x M.2 NVMe drives | 8x hot-swap E1.S drives and 2x M.2 NVMe drives |
| Networking | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 | 3x PCIe 5.0 x16 slots supporting NVIDIA BlueField-3 or ConnectX-7 |
| Interconnect | NVLink-C2C with 900GB/s CPU-GPU bandwidth | NVLink-C2C with 900GB/s CPU-GPU bandwidth |
| Cooling | Air-cooling | Liquid-cooling |
| Power | 2x 2000W redundant Titanium Level power supplies | 2x 2000W redundant Titanium Level power supplies |
1U 2-node Configurations
| SKU | ARS-111GL-DNHR-LCC | ARS-121L-DNR |
|---|---|---|
| Form Factor | 1U 2-node system with one NVIDIA Grace Hopper Superchip per node (liquid-cooled) | 1U 2-node system with one NVIDIA Grace CPU Superchip per node |
| CPU | 2x 72-core Grace Arm Neoverse V2 CPU + H100 Tensor Core GPU in a single chip (one per node) | 144-core Grace Arm Neoverse V2 CPU in a single chip per node (288 cores total per system) |
| GPU | NVIDIA H100 Tensor Core GPU with 96GB of HBM3 or 144GB of HBM3e per node (coming soon) | Contact our sales team for available configurations |
| Memory | Up to 480GB of integrated LPDDR5X memory per node | Up to 480GB of integrated LPDDR5X memory with ECC and up to 1TB/s of bandwidth per node |
| Drives | 8x hot-swap E1.S drives and 2x M.2 NVMe drives | Up to 4x hot-swap E1.S drives and 2x M.2 NVMe drives per node |
| Networking | 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7 | 2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField-3 or ConnectX-7 (e.g., 1 GPU and 1 BlueField-3) |
| Interconnect | NVLink-C2C with 900GB/s CPU-GPU bandwidth | NVLink-C2C with 900GB/s CPU-CPU bandwidth (within node) |
| Cooling | Liquid-cooling | Air-cooling |
| Power | 2x 2700W redundant Titanium Level power supplies | 2x 2700W redundant Titanium Level power supplies |