
GTC ’22: Announcing powerful new NVIDIA H100 GPU, Jensen talks composability and Gen 5 PCIe

Posted on March 24, 2022 in GTC Spring 2022

It’s that time again! GTC is back for NVIDIA’s spring showcase of all things GPU and AI. As an NVIDIA Partner Network member, Liqid is always excited to see what’s new, and this year’s conference is delivering the kind of raw GPU performance needed to launch an Omniverse.

With the announcement of AI platforms built on its new NVIDIA H100, the industry’s first GPU to support the new, high-bandwidth PCIe Gen 5 interconnect, software-defined flexibility has been a central theme, with NVIDIA CEO Jensen Huang emphasizing in his always lively keynote that “fundamental changes in data center design” are afoot in order to keep up with advancements driven by artificial intelligence.

[Keynote image courtesy NVIDIA, March 22, 2022]

Once again, NVIDIA showed it means to be a key player in turning today’s data centers into “AI factories” with its NVIDIA H100. According to Jensen, the NVIDIA H100 represents the largest generational leap in the history of NVIDIA’s GPU innovation when compared with its predecessor, the NVIDIA A100. 

Those impressive performance improvements are facilitated in part by NVIDIA® NVLink® scale-up interconnect technology, which enables seamless communication between GPUs, bypassing CPU-bound bottlenecks for faster data transactions while freeing the CPU and system memory for other tasks and improving efficiency. 
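
To make that concrete, here is a minimal, illustrative CUDA sketch (our own example, not from NVIDIA’s announcement) of peer-to-peer communication between two GPUs: once peer access is enabled, a device-to-device copy travels over the GPU interconnect rather than staging through host memory. The device IDs, payload size, and CHECK macro are assumptions made for the example.

```cpp
// Minimal sketch: direct GPU-to-GPU copy using CUDA peer access,
// bypassing a staging buffer in host memory. Illustrative only;
// error handling is reduced to a simple CHECK macro.
#include <cuda_runtime.h>
#include <cstdio>

#define CHECK(call)                                                   \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error: %s (line %d)\n",             \
                    cudaGetErrorString(err), __LINE__);               \
            return 1;                                                 \
        }                                                             \
    } while (0)

int main() {
    const size_t bytes = 256 << 20;  // 256 MiB payload (assumed)
    int canAccess01 = 0, canAccess10 = 0;

    // Ask the driver whether GPU 0 and GPU 1 can address each other
    // directly (true when the topology supports NVLink or PCIe peer access).
    CHECK(cudaDeviceCanAccessPeer(&canAccess01, 0, 1));
    CHECK(cudaDeviceCanAccessPeer(&canAccess10, 1, 0));
    if (!canAccess01 || !canAccess10) {
        printf("Peer access between GPU 0 and GPU 1 is not available.\n");
        return 0;
    }

    // Enable peer access in both directions.
    CHECK(cudaSetDevice(0));
    CHECK(cudaDeviceEnablePeerAccess(1, 0));
    CHECK(cudaSetDevice(1));
    CHECK(cudaDeviceEnablePeerAccess(0, 0));

    // Allocate a buffer on each GPU.
    void *src = nullptr, *dst = nullptr;
    CHECK(cudaSetDevice(0));
    CHECK(cudaMalloc(&src, bytes));
    CHECK(cudaSetDevice(1));
    CHECK(cudaMalloc(&dst, bytes));

    // Device-to-device copy; with peer access enabled this goes directly
    // over the GPU interconnect instead of bouncing through system memory.
    CHECK(cudaMemcpyPeer(dst, 1, src, 0, bytes));
    CHECK(cudaDeviceSynchronize());
    printf("Copied %zu bytes GPU0 -> GPU1 via peer access.\n", bytes);

    CHECK(cudaFree(dst));
    CHECK(cudaSetDevice(0));
    CHECK(cudaFree(src));
    return 0;
}
```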

“Moving data to keep these lightning fast GPUs fed is a most serious concern,” Jensen said. “The composability of … NVLink and the Gen5 PCIe link inside [NVIDIA® ConnectX®-7 InfiniBand] give us a vast number of ways to address customers’ diverse computing needs.” 

When combined with the new external NVLink Switch, an H100-based system is capable of bi-directional GPU-to-GPU communication across multiple servers at data speeds 7x faster than systems based on the upcoming PCIe Gen 5 specification, for mind-bending GPU compute power.

The Liqid Team – along with everyone else in the world of AI computing – is eager to get its hands on these impressive GPUs and take them for a spin. For those who need comparable capabilities based on GPU technologies currently shipping, consider the Liqid ThinkTank, a new turnkey AI system based on Liqid Matrix CDI software. ThinkTank offers similar GPU-to-GPU capabilities with its ioDirect feature, which enables peer-to-peer communication between GPUs as well as between GPUs and NVMe storage (see the illustrative sketch after the feature list). Other features include:

Liqid ThinkTank
  • Liqid Matrix CDI software (software licenses and director)
  • Up to 16x GPUs from NVIDIA or AMD
  • Up to 60 TB of NVMe storage
  • Up to 4x PCIe expansion chassis
  • Liqid host bus adapters (HBA)
  • Liqid PCIe fabric switching
  • Ubuntu/Linux AI software stack with Liqid CDI enhancements
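
For a sense of what direct GPU-to-storage transfers look like in practice, below is a minimal, illustrative sketch using NVIDIA’s publicly documented GPUDirect Storage (cuFile) API to read a file from NVMe straight into GPU memory. To be clear, this is not Liqid’s ioDirect API; the file path, transfer size, and trimmed error handling are assumptions made for the example.

```cpp
// Minimal sketch: NVMe -> GPU memory read via NVIDIA GPUDirect Storage
// (cuFile). Illustrates the general GPU<->NVMe peer-to-peer idea only;
// it is not Liqid's ioDirect API. Link against libcufile; error handling
// is omitted for brevity.
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const char *path = "/mnt/nvme/dataset.bin";  // hypothetical NVMe-backed file
    const size_t bytes = 64 << 20;               // read 64 MiB (assumed size)

    // Initialize the cuFile driver and open the file with O_DIRECT so the
    // transfer can bypass the host page cache.
    cuFileDriverOpen();
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    // Register the file descriptor with cuFile.
    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    // Allocate the destination buffer in GPU memory and register it so the
    // DMA engine can target it directly.
    void *devPtr = nullptr;
    cudaMalloc(&devPtr, bytes);
    cuFileBufRegister(devPtr, bytes, 0);

    // DMA straight from NVMe into GPU memory, no bounce buffer in host RAM.
    ssize_t n = cuFileRead(handle, devPtr, bytes, /*file_offset=*/0,
                           /*devPtr_offset=*/0);
    printf("cuFileRead returned %zd bytes\n", n);

    // Teardown.
    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```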

For NVIDIA, the combined power of the H100 GPUs and NVLink goes full-on beast mode in the NVIDIA® DGX™ H100 System, the fourth-generation NVIDIA DGX system and the world’s first AI platform built with the new NVIDIA H100 GPUs. With training throughput that is 7x faster and inference that is 30x faster than the DGX system based on A100 technology (!!!), each NVIDIA DGX H100 features up to eight H100 GPUs per enclosure. The devices can behave as one massive GPU for workloads that need that much power or be dynamically composed into smaller configurations depending upon workload requirements. 

The latest DGX SuperPOD architecture, based on the DGX H100 and featuring the new NVIDIA NVLink Switch System, can connect up to 32 nodes with a total of 256 H100 GPUs. NVIDIA will put the rubber to the road with the first such DGX SuperPOD system, NVIDIA Eos. The powerful AI supercomputer is anticipated to provide 18.4 exaflops of AI computing performance, making Eos 4x faster than the current record holder, the Fugaku supercomputer in Japan. According to NVIDIA, for those doing traditional scientific computing, Eos is expected to provide 275 petaflops of performance.

Again, we can’t wait, but our customers don’t have to. Check out the Liqid ThinkTank AI System: built on currently shipping NVIDIA or AMD technologies, it lets customers add and compose disaggregated NVIDIA H100 GPUs as they become available in the market, and start composing the future of their AI operations today. 

There’s lots more amazing stuff in Jensen’s keynote, so remember to check it out, register for GTC 2022 for free, and take in those workshops as they happen! 

