The NVIDIA HGX Hopper platform delivers powerful AI and HPC performance with the latest H100 and H200 GPUs, optimized for large language models, generative AI, and high-throughput computing. Featuring scalable 4- and 8-GPU configurations with high-speed NVLink interconnects, it accelerates training and inference workloads at scale.
The NVIDIA HGX H200 platform builds on the Hopper architecture with enhanced memory capacity and bandwidth, making it ideal for large-scale AI, LLMs, and memory-bound workloads. It features up to 8 NVIDIA H200 GPUs connected via NVLink® and NVSwitch, delivering faster training, fine-tuning, and inference.
The HGX H200 offers a seamless upgrade path for organizations looking to accelerate next-gen AI workloads with more memory and speed.
The NVIDIA HGX H100 platform is built to accelerate the most demanding AI and high-performance computing (HPC) workloads. Featuring up to 8 NVIDIA H100 Tensor Core GPUs connected via high-speed NVLink® and NVSwitch, it delivers exceptional performance for training large language models, generative AI, simulation, and scientific computing.
The HGX H100 offers the performance, efficiency, and flexibility needed to power the AI factories and data centers of today.
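As a simple illustration of how software sees the NVLink- and NVSwitch-connected GPUs in these systems, the minimal CUDA sketch below enumerates the visible devices and checks peer-to-peer access between each pair. It is a generic example rather than HGX-specific code; the device count and peer-access results it prints depend entirely on the system, driver, and GPU configuration it runs on.

```c
// Minimal sketch: list visible GPUs and report which pairs support
// peer-to-peer access (on NVLink/NVSwitch-connected systems, this is
// typically available between all GPU pairs). Results depend on the
// actual hardware and driver configuration.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("GPUs visible: %d\n", count);

    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   i, j, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```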