Supercharged Hopper


The NVIDIA H200 Tensor Core GPU transforms generative AI and HPC workloads with unmatched performance and memory capabilities. As the first GPU with HBM3e, its larger and faster memory accelerates generative AI and LLMs while advancing scientific computing for HPC.

Experience Next-Level Performance

The World’s Most Powerful GPU

LLM Inference

The H200 boosts inference speed by up to 2X compared to H100 GPUs when handling LLMs like Llama 2.
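A rough intuition for why memory is the lever here: autoregressive LLM decoding is typically memory-bandwidth-bound, because generating each token streams the full set of model weights from GPU memory. The sketch below is a back-of-the-envelope upper bound, not a benchmark; the model size and the H100 bandwidth figure (3.35TB/s) are assumptions for illustration.

```python
# Rough, bandwidth-bound estimate of LLM decode throughput.
# Assumes every generated token reads all weights once; real stacks
# add KV-cache traffic, batching, and kernel overheads.

def decode_tokens_per_sec(params: float, bytes_per_param: float,
                          mem_bandwidth_bytes_per_sec: float) -> float:
    """Upper-bound tokens/sec when decoding is limited by weight reads."""
    weight_bytes = params * bytes_per_param
    return mem_bandwidth_bytes_per_sec / weight_bytes

# A 70B-parameter model in FP16 on an H200 (4.8 TB/s HBM3e):
h200 = decode_tokens_per_sec(70e9, 2, 4.8e12)
# The same model on an H100 (3.35 TB/s HBM3):
h100 = decode_tokens_per_sec(70e9, 2, 3.35e12)

print(f"H200 ~{h200:.0f} tok/s, H100 ~{h100:.0f} tok/s per sequence")
```

Bandwidth alone accounts for part of the gain; the rest of the quoted speedup comes from the larger memory allowing bigger batches and longer contexts per GPU.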


For memory-intensive HPC applications, higher memory bandwidth ensures data can be accessed and manipulated efficiently, delivering up to 110X faster time to results versus CPUs.

Reduce Energy and TCO

The H200 delivers this leap in performance within the same power profile as the H100, reducing energy use and total cost of ownership while driving forward AI and scientific computing.


NVIDIA AI Enterprise with the H200 simplifies AI platform setup, accelerating the development and deployment of AI applications.



FP64: 34 teraFLOPS
FP64 Tensor Core: 67 teraFLOPS
FP32: 67 teraFLOPS
TF32 Tensor Core: 989 teraFLOPS²
BFLOAT16 Tensor Core: 1,979 teraFLOPS²
FP16 Tensor Core: 1,979 teraFLOPS²
FP8 Tensor Core: 3,958 teraFLOPS²
INT8 Tensor Core: 3,958 TOPS²
GPU memory: 141GB
GPU memory bandwidth: 4.8TB/s
Decoders: 7 NVDEC
Max thermal design power (TDP): Up to 700W (configurable)
Multi-Instance GPUs: Up to 7 MIGs @ 16.5GB each
Form factor: SXM
Interconnect: NVLink 900GB/s; PCIe Gen5 128GB/s
Server options: NVIDIA HGX™ H200 partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs
NVIDIA AI Enterprise: Add-on
² With sparsity.
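To see what the 141GB of HBM3e buys in practice, a quick weights-only capacity check is useful. This is an illustrative sketch, not a sizing tool: KV cache, activations, and framework overhead all need additional headroom on top of the raw weight footprint.

```python
# Quick check: do a model's raw weights fit in a single GPU's memory?
# Weights only -- KV cache, activations, and framework overhead
# require extra headroom in practice.

GPU_MEMORY_BYTES = 141e9  # H200: 141GB HBM3e

def fits(params: float, bytes_per_param: float) -> bool:
    """True if params * bytes-per-parameter fits in GPU memory."""
    return params * bytes_per_param <= GPU_MEMORY_BYTES

print(fits(70e9, 2))   # 70B model in FP16: 140GB -- fits, barely
print(fits(70e9, 1))   # same model quantised to FP8/INT8: 70GB
print(fits(175e9, 2))  # 175B model in FP16: 350GB -- needs multiple GPUs
```

The 70B FP16 case leaves almost no headroom, which is why quantisation or multi-GPU sharding is still common even at this capacity.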
Your Trusted NVIDIA Partner

Upgrade your data centre with LUNIQ.

We provide our clients with a fully bespoke upgrade plan to ensure your business is ready for the next generation of AI-powered computing. Our experts have years of proven experience and will work with you at every step to make the process straightforward.

Connect With One of Our Specialists

Unlock a new era of AI-powered computing

At LUNIQ, we ensure a seamless, value-driven integration tailored to your organisational needs.