Register today for NVIDIA H100 GPUs - Available Now
Taiga Cloud is the first European Generative AI Cloud Service Provider to offer instant access to a clean, secure and compliant NVIDIA H100 GPU network.
NVIDIA H100
fast, secure and revolutionary
Multiple GPUs are essential for large data sets, complex simulations, and GenAI and HPC workflows. The ninth-generation data center NVIDIA H100 Cloud GPUs are designed to deliver an order-of-magnitude performance leap for large-scale GenAI and HPC workloads over the prior-generation NVIDIA A100 GPUs.
- Faster matrix computations than ever before on an even broader array of GenAI and HPC workloads
- Secure Multi-Instance GPU (MIG) partitions the GPU into isolated, right-sized instances to maximize quality of service (QoS) for smaller tasks (see the sketch after this list)
- Up to 9X faster AI training and up to 30X faster AI inference compared to the prior GPU generation
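To illustrate the MIG isolation mentioned above, here is a minimal Python sketch of how a process could target a single MIG slice with PyTorch. The MIG UUID below is a placeholder, not a real identifier; on an actual node the available slices can be listed with `nvidia-smi -L`.

```python
import os

# Hypothetical MIG instance UUID (placeholder); list the real ones with `nvidia-smi -L`.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-00000000-0000-0000-0000-000000000000"

import torch  # imported after setting the variable so CUDA sees only the chosen MIG slice

device = torch.device("cuda:0")          # the MIG slice appears as an ordinary CUDA device
x = torch.randn(4096, 4096, device=device)
y = x @ x                                # the matmul runs inside the isolated partition
print(torch.cuda.get_device_name(device), y.shape)
```

Because each MIG instance has its own dedicated compute and memory, work running in one slice cannot starve workloads running in the others.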
Offering sustainable, European Generative AI Infrastructure as a Service, we are certified as an NVIDIA Elite Partner and an official Cloud Service Provider in the NVIDIA Partner Network (NPN).
NVIDIA H100 GPU use cases
Higher education and research
Whether you are running your own GenAI, ML, or Big Data project or rendering workloads in architecture or media, our powerful cloud IaaS stack frees your IT team from setting up and maintaining complex infrastructure for GenAI and other HPC tasks, while protecting your budget from high acquisition expenses and hidden operational costs.
AI-aided design for the manufacturing and automotive industries
The H100 Cloud GPUs deliver up to 7X higher performance for HPC applications and securely accelerate all workloads, from enterprise to exascale. This enables GenAI breakthroughs in areas such as engineering and production, alongside compelling use cases in object detection, image segmentation, and design and visualization that benefit from real-time data interactivity.
Healthcare and life sciences
The H100 Cloud GPUs revolutionize advanced medical and scientific research, from weather forecasting to large-scale scientific simulation, enabling more rapid progress in drug discovery, genomics and computational biology thanks to the GPUs' increased stability for GenAI and other HPC workloads.
NVIDIA H100 GPU benefits
Hosted in Europe
Achieve sovereignty and compliance standards
Non-blocking network
DE-CIX access and low latency (sub-10 ms)
No overbooking
NVIDIA GPUs as well as CPU and RAM resources
InfiniBand Pods of 512 GPUs
Pods are connected into islands of four pods each (2,048 GPUs) using NVIDIA BlueField DPUs and the NVIDIA Quantum-2 InfiniBand platform. This configuration provides an efficient, fast foundation for training LLMs and delivering Generative AI solutions in a much shorter timeframe.
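As a rough illustration only (not a description of Taiga Cloud's software stack), the minimal sketch below shows multi-node data-parallel training with PyTorch DistributedDataParallel, the kind of job that would span such a GPU island. It assumes the script is launched with `torchrun` on each node; the NCCL backend uses the InfiniBand fabric automatically when it is available, and the model is a toy stand-in for an LLM.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU; torchrun sets LOCAL_RANK, RANK and WORLD_SIZE for us.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for an LLM; DDP all-reduces gradients across every GPU in the job.
    model = DDP(torch.nn.Linear(4096, 4096).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(64, 4096, device=local_rank)   # replace with a real data loader
        loss = model(x).square().mean()
        loss.backward()                                 # gradient all-reduce over InfiniBand
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with `torchrun --nnodes=<N> --nproc_per_node=8 train.py` (plus a rendezvous endpoint), the same script scales from a single pod to a full island without code changes.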
PUE between 1.09 and 1.15
We leverage best-in-class infrastructure to ensure the environmental footprint of your compute-intensive processes is as light as possible.
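For context, PUE (Power Usage Effectiveness) is the ratio of total facility energy to the energy consumed by the IT equipment itself, so a PUE of 1.09 means roughly 9% overhead on top of the compute load. The small calculation below is purely illustrative; the 1 MW IT load is a made-up figure, not a Taiga Cloud number.

```python
# PUE = total facility energy / IT equipment energy (values below are illustrative only).
it_load_kw = 1000.0                        # hypothetical IT equipment draw: 1 MW
for pue in (1.09, 1.15):                   # the range quoted above
    total_kw = it_load_kw * pue            # total facility draw at that PUE
    overhead_kw = total_kw - it_load_kw    # cooling, power distribution, etc.
    print(f"PUE {pue}: facility {total_kw:.0f} kW, overhead {overhead_kw:.0f} kW")
```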
Up to 9X faster AI training on the largest models
Mixture of Experts (395 Billion Parameters)
Projected performance subject to change. Training Mixture of Experts (MoE) Transformer Switch-XXL variant with 395B parameters on 1T token dataset | A100 cluster: HDR IB network | H100 cluster: NVLink Switch System, NDR IB
Up to 30X higher AI inference performance on the largest models
Megatron Chatbot Inference (530 Billion Parameters)
Projected performance subject to change. Inference on Megatron 530B parameter model chatbot for input sequence length=128, output sequence length=20 | A100 cluster: HDR IB network | H100 cluster: NVLink Switch System, NDR IB
Up to 7X higher performance for HPC applications
Projected performance subject to change. 3D FFT (4K^3) throughput | A100 cluster: HDR IB network | H100 cluster: NVLink Switch System, NDR IB | Genome Sequencing (Smith-Waterman) | 1 A100 | 1 H100