NVIDIA H100 with Full-Stack TEE
NVIDIA H100 Tensor Core GPU

Enterprise AI with H100 GPUs

Deploy NVIDIA H100 Tensor Core GPUs with Intel TDX + NVIDIA Confidential Computing. 80GB HBM3 memory, proven performance, complete hardware protection.

80GB
HBM3 Memory
3.35 TB/s
Bandwidth
9x Faster
vs A100
Full TEE
Intel + NVIDIA
Starting at $3.08/GPU/hr on-demand or $2.38/GPU/hr with 12-month commitment (23% savings)

Proven Enterprise Performance

MLPerf-certified AI acceleration with Full-Stack TEE security

21.8K

tokens/sec

Llama 2 70B Inference

80GB

HBM3 Memory
3.35 TB/s Bandwidth

9x

Faster training
vs A100

700W

Thermal Design Power

Full-Stack TEE

Intel TDX + NVIDIA Confidential Computing protection with cryptographic attestation

Hardware-enforced isolation
<5% performance overhead

MLPerf Certified

GPT-3 Training: Baseline
ResNet-50: 3.9x vs V100
Available Now

US-East & US-West

1-8 GPUs

Flexible Configurations

Configure Now

H100 vs H200 vs B200

Official benchmarks from MLPerf Inference v5.0 and NVIDIA Technical Blogs

LLM Inference Performance (Llama 2 70B)

Tokens per second (single GPU) - higher is better

H100: 21.8K tok/s (29% of B200)
H200: 33.0K tok/s (44% of B200)
B200: 75.0K tok/s (100%)

H100: 21.8K tokens/sec (baseline) | H200: 33K tokens/sec (1.51x) | B200: 75K+ tokens/sec (3.4x)

Source: MLPerf Inference v5.0 (2024), NVIDIA Technical Blogs

All benchmarks include Full-Stack TEE protection with <5% overhead

H100 Pricing

Flexible pricing for any workload. All prices include Full-Stack TEE protection.

On-Demand

Pay only for what you use

$3.08/GPU/hr

No commitment required

Scale from 1-2 GPUs instantly
US-West region
Full-Stack TEE included
Dual attestation reports
SAVE 23%
Reserved

12-month commitment

$2.38/GPU/hr

Best long-term value

Guaranteed GPU availability
Priority support
Custom configurations
Enterprise SLA options
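
For a rough sense of cost per unit of work, the short Python sketch below combines these hourly rates with the 21.8K tokens/sec Llama 2 70B figure from the benchmark section. It is an illustrative back-of-envelope estimate only; actual throughput depends on batch size, sequence length, and serving stack.

  # Back-of-envelope cost per million generated tokens, using the Llama 2 70B
  # throughput and hourly prices listed on this page. Real throughput varies
  # with batch size, sequence length, and serving stack.
  PRICE_ON_DEMAND = 3.08       # $/GPU/hr
  PRICE_RESERVED = 2.38        # $/GPU/hr, 12-month commitment
  TOKENS_PER_SECOND = 21_800   # single H100, Llama 2 70B

  tokens_per_hour = TOKENS_PER_SECOND * 3600

  for label, price in (("on-demand", PRICE_ON_DEMAND), ("reserved", PRICE_RESERVED)):
      cost = price / tokens_per_hour * 1_000_000
      print(f"{label}: ~${cost:.3f} per million tokens")
  # prints roughly $0.039 (on-demand) and $0.030 (reserved) per million tokens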

Available H100 Configurations

1x H100

Available Now
Memory: 80GB HBM3
Bandwidth: 3.35 TB/s
TEE Protection: Full-Stack TEE
Region: US-West
Perfect for enterprise AI workloads requiring proven performance and complete security
Configure & Deploy

2x H100

Available Now
Memory: 160GB HBM3
Bandwidth: 6.7 TB/s
TEE Protection: Full-Stack TEE
Region: US-West
Perfect for enterprise AI workloads requiring proven performance and complete security
Configure & Deploy
Security

Full-Stack TEE Architecture

Complete hardware protection from CPU to GPU. Intel TDX + NVIDIA Confidential Computing working together.

Full VM Isolation

Intel TDX protects CPU, memory, and VM from host access. Complete isolation.

GPU Memory Encryption

NVIDIA CC encrypts all GPU memory. Model weights and data stay secure.

Dual Attestation

Cryptographic proof from Intel + NVIDIA. Independently verifiable (a verification sketch follows this section).

End-to-End Protection

Data encrypted in transit (TLS), at rest (AES-256), and during processing (TEE).

Multi-Region Deployment

Deploy in US-West and India. Same Full-Stack TEE protection everywhere.

Compliance Ready

GDPR, HIPAA, SOC 2 compliant. Hardware-backed security guarantees.
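
As a rough illustration of how dual attestation can be consumed from a client, the Python sketch below fetches an attestation bundle and performs basic sanity checks. The endpoint URL and JSON field names are hypothetical placeholders, not the platform's actual API; full verification should rely on Intel's and NVIDIA's own verification services.

  # Minimal sketch: fetch and sanity-check a dual attestation bundle.
  # The endpoint URL, query parameter, and JSON field names are HYPOTHETICAL
  # placeholders; consult the platform's attestation docs for the real API.
  import json
  import secrets
  import urllib.request

  CVM_ATTESTATION_URL = "https://my-cvm.example.com/attestation"  # placeholder

  def fetch_attestation(nonce: str) -> dict:
      # Request a fresh bundle bound to our nonce (prevents report replay).
      req = urllib.request.Request(
          f"{CVM_ATTESTATION_URL}?nonce={nonce}",
          headers={"Accept": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)

  def basic_checks(bundle: dict, nonce: str) -> None:
      # Confirm both reports are present and bound to our nonce. Full verification
      # (signature chains to the Intel and NVIDIA roots of trust) should be
      # delegated to the vendors' verification services, not re-implemented here.
      assert "intel_tdx_quote" in bundle, "missing CPU (Intel TDX) quote"
      assert "nvidia_gpu_report" in bundle, "missing GPU (NVIDIA CC) report"
      assert bundle.get("nonce") == nonce, "nonce mismatch"

  nonce = secrets.token_hex(32)
  bundle = fetch_attestation(nonce)
  basic_checks(bundle, nonce)
  print("Dual attestation reports received; forward them to Intel and NVIDIA verifiers.")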

What You Can Build with GPU TEE

Real-world applications running on Phala Cloud with complete Intel TDX + NVIDIA Confidential Computing protection

Private Enterprise AI

Train and deploy models on sensitive healthcare, financial, or legal data with complete hardware protection. Your data never leaves the TEE.

User-Owned AI Agents

Build autonomous AI agents that securely manage cryptographic keys and digital assets. Powers platforms like Eliza and Virtuals Game Agents.

ZK Proof Generation

Accelerate zkVM and zkRollup proof generation with GPU TEE. SP1 zkVM runs with <5% TEE overhead—verified with dual attestation.

FHE/MPC Acceleration

Use GPU TEE as 2FA for FHE and MPC systems. Secure key generation, computation integrity, and attestation in one platform. Powers Fairblock and Mind Network.

Multi-Proof Systems

Combine ZK proofs with TEE attestation for double security. Hedge against cryptographic bugs while maintaining verifiability.

Regulatory Compliance

Meet GDPR, HIPAA, and SOC 2 requirements with hardware-backed privacy guarantees. Full audit trail with Intel and NVIDIA attestation.

Three Ways to Deploy GPU TEE

Choose the deployment model that fits your needs—from full control to instant deployment

CVM + GPU: Maximum Flexibility

Deploy your own Docker containers with SSH access to TEE-protected GPUs. Perfect for developers who need complete control.

  • Deploy custom Docker containers with full SSH access
  • Fine-tune models on private data with complete hardware protection
  • Intel TDX + NVIDIA Confidential Computing protection
  • Dual attestation reports (Intel + NVIDIA) for verification
Deploy CVM Now
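
As a rough illustration of the container workflow described above, the Python sketch below uses the Docker SDK for Python to start a GPU-backed container from inside a CVM. The image, command, port, and container name are illustrative placeholders, not a required template.

  # Sketch: start a GPU-backed container from inside a CVM using the Docker SDK
  # for Python (pip install docker). Image, command, and port are placeholders.
  import docker

  client = docker.from_env()

  container = client.containers.run(
      image="nvcr.io/nvidia/pytorch:24.05-py3",  # any CUDA-enabled image works
      command="python serve.py",                 # hypothetical entrypoint
      detach=True,
      device_requests=[                          # pass one GPU through to the container
          docker.types.DeviceRequest(count=1, capabilities=[["gpu"]])
      ],
      ports={"8000/tcp": 8000},                  # expose the inference endpoint
      name="tee-llm-server",
  )
  print(f"Started {container.name} ({container.short_id})")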