Product details
The NVIDIA H100 Tensor Core GPU delivers a significant leap for large-scale AI and high-performance computing (HPC), with unprecedented performance, scalability, and security for every data center. It includes the NVIDIA AI Enterprise software suite to streamline AI development and deployment. The H100 accelerates workloads at exascale with a dedicated Transformer Engine for language models with trillions of parameters. For smaller tasks, the GPU can be partitioned into right-sized Multi-Instance GPU (MIG) instances. With Hopper Confidential Computing, this scalable compute power can protect sensitive applications on shared data center infrastructure. Combined with NVIDIA AI Enterprise, the H100 PCIe shortens development time and simplifies the deployment of AI workloads, making the H100 the most powerful end-to-end AI and HPC data center platform.
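As a minimal illustration of how MIG partitioning surfaces to software (not part of the product description above): when a MIG-enabled H100 exposes its instances to a process, each instance appears as an ordinary CUDA device, so a simple enumeration loop like the hedged sketch below is enough to see them. The device names and memory sizes printed are whatever the driver reports on the system at hand.

#include <cstdio>
#include <cuda_runtime.h>

// List every CUDA device visible to this process. On a MIG-enabled H100,
// each MIG instance exposed to the process shows up as its own device.
int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %.1f GiB, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}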
The NVIDIA Hopper architecture delivers unprecedented performance, scalability, and security to every data center. Hopper builds on previous generations with new core compute capabilities, such as the Transformer Engine, and faster networking, giving the data center a substantial speedup over the prior generation. NVIDIA NVLink provides ultra-high bandwidth and ultra-low latency between two H100 cards and supports memory pooling and performance scaling (application support required). Second-generation MIG securely partitions the GPU into right-sized, isolated instances, maximizing quality of service (QoS) for up to seven securely isolated tenants per GPU. The integration of NVIDIA AI Enterprise (exclusive to the H100 PCIe), a software suite that optimizes the development and deployment of accelerated AI workflows, maximizes the performance of these architectural innovations. Together, these technological breakthroughs make the H100 Tensor Core GPU the world's most advanced GPU.
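The "application support required" note on NVLink memory pooling refers to the application enabling peer access between the two cards itself. The sketch below shows that check in its simplest form using standard CUDA runtime calls; it assumes two H100 cards enumerated as devices 0 and 1 and is not specific to any particular system configuration.

#include <cstdio>
#include <cuda_runtime.h>

// Check whether device 0 can address device 1's memory directly (e.g. over
// an NVLink bridge) and, if so, enable peer access between the two GPUs.
int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can device 0 reach device 1?
    std::printf("Peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");
    if (canAccess) {
        cudaSetDevice(0);                        // subsequent calls apply to device 0
        cudaDeviceEnablePeerAccess(1, 0);        // second argument (flags) must be 0
        std::printf("Peer access enabled; device 0 can now map device 1 memory.\n");
    }
    return 0;
}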