
NVIDIA H200

Northern Data Group’s GenAI platform, Taiga Cloud, is one of the first in Europe to offer instant access to NVIDIA H200 GPUs.


Revolutionary hardware for next-generation innovation 

With almost double the memory capacity of the NVIDIA H100 Tensor Core GPU, plus advanced performance capabilities, the H200 is a game changer.

  • 141 gigabytes of HBM3e memory
  • 4.8 terabytes per second of memory bandwidth
  • 4 petaFLOPS of FP8 performance
Download NVIDIA H200 factsheet
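
As a minimal sketch only, the headline memory figure above can be checked directly from a provisioned instance. This assumes PyTorch with CUDA support is installed, which the page does not state.

```python
# Minimal sketch: confirm the advertised H200 memory capacity from a
# provisioned instance. Assumes PyTorch with CUDA support is installed.
import torch

if torch.cuda.is_available():
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        # total_memory is reported in bytes; an H200 should show roughly 141 GB.
        total_gb = props.total_memory / 1e9
        print(f"GPU {idx}: {props.name}, {total_gb:.0f} GB total memory")
else:
    print("No CUDA device visible; check the driver and instance type.")
```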

Offering sustainable, European Generative AI Infrastructure as a Service, we are certified as an NVIDIA Elite Partner and an official Cloud Service Provider in the NVIDIA Partner Network (NPN).

Up to 2x the LLM inference performance

AI is always evolving, and businesses rely on large language models such as Llama 2 70B to address a wide range of inference needs. When an LLM is deployed at scale for a massive user base, the inference accelerator must deliver the highest throughput at the lowest total cost of ownership (TCO).
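
For illustration, the sketch below measures offline inference throughput for Llama 2 70B with vLLM. The serving stack, model ID, batch size and parallelism settings are assumptions for the example, not part of Taiga Cloud's documentation.

```python
# Illustrative throughput check for Llama 2 70B with vLLM (assumed stack).
import time
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-hf",  # assumed model ID; requires HF access
    tensor_parallel_size=4,             # e.g. shard across four H200 GPUs
)
params = SamplingParams(max_tokens=256, temperature=0.7)
prompts = ["Summarise the benefits of HBM3e memory."] * 64

start = time.time()
outputs = llm.generate(prompts, params)
elapsed = time.time() - start

# Count generated tokens across the batch to estimate throughput.
generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"~{generated / elapsed:.1f} generated tokens/s across the batch")
```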

110x faster time to results

Memory bandwidth is crucial for HPC applications such as simulation, scientific research and AI, because it enables faster data transfer and reduces bottlenecks in complex processing. The H200’s higher bandwidth ensures that data can be accessed and manipulated efficiently.
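
As a rough illustration of what memory bandwidth means in practice, the sketch below times repeated device-to-device copies with PyTorch. The buffer size and iteration count are arbitrary choices for the example, and this is not an official NVIDIA or Taiga Cloud benchmark.

```python
# Rough device-to-device copy benchmark in PyTorch, for illustration only.
import torch

assert torch.cuda.is_available(), "requires a CUDA-capable GPU"

n_bytes = 4 * 1024**3                      # 4 GiB buffer (assumed size)
src = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")
dst = torch.empty_like(src)

# Warm up, then time repeated copies with CUDA events for accuracy.
for _ in range(3):
    dst.copy_(src)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 20

start.record()
for _ in range(iters):
    dst.copy_(src)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000   # elapsed_time returns milliseconds
# Each copy reads and writes the buffer once, hence the factor of 2.
print(f"~{2 * n_bytes * iters / seconds / 1e12:.2f} TB/s effective copy bandwidth")
```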

50% reduction in LLM energy use and TCO

The H200 delivers its performance gains within the same power profile as the H100, improving energy efficiency and reducing TCO. This creates an economic edge for the AI and scientific computing communities.

Ready to get started?

Find out how we work with our partners to accelerate the world's best ideas.