NVIDIA H100 GPU Card

An Order-of-Magnitude Leap for Accelerated Computing

The NVIDIA H100 GPU delivers exceptional performance, scalability, and security for every workload. H100 uses breakthrough innovations based on the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models (LLMs) by up to 30X. H100 also includes a dedicated Transformer Engine for training and serving trillion-parameter language models.


Product Specifications

Specification                    H100 SXM                          H100 NVL
FP64                             34 teraFLOPS                      30 teraFLOPS
FP64 Tensor Core                 67 teraFLOPS                      60 teraFLOPS
FP32                             67 teraFLOPS                      60 teraFLOPS
TF32 Tensor Core*                989 teraFLOPS                     835 teraFLOPS
BFLOAT16 Tensor Core*            1,979 teraFLOPS                   1,671 teraFLOPS
FP16 Tensor Core*                1,979 teraFLOPS                   1,671 teraFLOPS
FP8 Tensor Core*                 3,958 teraFLOPS                   3,341 teraFLOPS
INT8 Tensor Core*                3,958 TOPS                        3,341 TOPS
GPU Memory                       80GB                              94GB
GPU Memory Bandwidth             3.35TB/s                          3.9TB/s
Decoders                         7 NVDEC, 7 JPEG                   7 NVDEC, 7 JPEG
Max Thermal Design Power (TDP)   Up to 700W (configurable)         350-400W (configurable)
Multi-Instance GPUs              Up to 7 MIGs @ 10GB each          Up to 7 MIGs @ 12GB each
Form Factor                      SXM                               PCIe, dual-slot, air-cooled
Interconnect                     NVIDIA NVLink™: 900GB/s           NVIDIA NVLink: 600GB/s
                                 PCIe Gen5: 128GB/s                PCIe Gen5: 128GB/s
Server Options                   NVIDIA HGX H100 Partner and       Partner and NVIDIA-Certified
                                 NVIDIA-Certified Systems with     Systems with 1-8 GPUs
                                 4 or 8 GPUs; NVIDIA DGX H100
                                 with 8 GPUs
NVIDIA AI Enterprise             Add-on                            Included

* Peak rates with sparsity.
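For readers comparing the two variants when sizing a deployment, the table rows can be checked programmatically. The sketch below transcribes a few rows into a Python dict (the field names are ours, not NVIDIA's) and derives the FP8-to-FP16 throughput ratio and the total memory covered by a full set of MIG slices:

```python
# Spec values transcribed from the table above (teraFLOPS figures are the
# starred peak-with-sparsity rates); the structure and names are illustrative.
SPECS = {
    "H100 SXM": {
        "fp16_tensor_tflops": 1979,
        "fp8_tensor_tflops": 3958,
        "memory_gb": 80,
        "mig_instances": 7,
        "mig_slice_gb": 10,
        "nvlink_gb_s": 900,
    },
    "H100 NVL": {
        "fp16_tensor_tflops": 1671,
        "fp8_tensor_tflops": 3341,
        "memory_gb": 94,
        "mig_instances": 7,
        "mig_slice_gb": 12,
        "nvlink_gb_s": 600,
    },
}

def max_mig_memory_gb(variant: str) -> int:
    """Memory covered when all MIG instances are created (slices x size)."""
    s = SPECS[variant]
    return s["mig_instances"] * s["mig_slice_gb"]

for name, s in SPECS.items():
    ratio = s["fp8_tensor_tflops"] / s["fp16_tensor_tflops"]
    print(f"{name}: FP8/FP16 ratio ~{ratio:.2f}, "
          f"MIG covers {max_mig_memory_gb(name)}GB of {s['memory_gb']}GB")
```

Note that dropping from FP16 to FP8 roughly doubles peak Tensor Core throughput on both variants, and that the seven MIG slices do not consume the full memory capacity (70GB of 80GB on SXM, 84GB of 94GB on NVL).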
