Trusted AI
Private execution. Verifiable results.
Run agents, private LLMs, and GPU jobs inside hardware-backed TEEs. Keep secrets private, and prove what ran.
CPU machine
GPU machine
Confidential VM
Confidential AI cloud
Move existing Docker Compose workloads into CPU or GPU confidential machines. Keep the deploy path familiar; make the runtime verifiable.
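To illustrate the "familiar deploy path" claim: an ordinary Compose file like the sketch below needs nothing TEE-specific; the confidential machine supplies the isolation and attestation, not the file. Service name, image, and ports are placeholders, not a real Phala deployment.

```yaml
# Hypothetical docker-compose.yml: a plain Compose service.
# Nothing here is TEE-specific; the confidential runtime adds the guarantees.
services:
  api:
    image: ghcr.io/example/agent-api:latest   # placeholder image
    ports:
      - "8080:8080"
    environment:
      - MODEL_ENDPOINT=http://llm:8000        # placeholder internal wiring
```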
Attestation
Every result can carry proof
Instead of asking users to trust a cloud claim, Phala emits runtime measurements that software can verify.
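In verification terms, "runtime measurements that software can verify" means comparing an attested digest against a value the verifier computed independently. A minimal sketch, assuming a simplified quote shape; the `mrtd` field name and the quote dict are illustrative, not Phala's actual attestation format.

```python
import hashlib

# Illustrative only: the quote structure and field names are placeholders,
# not Phala's real attestation schema.
EXPECTED_MRTD = hashlib.sha256(b"known-good-vm-image").hexdigest()

def verify_measurement(quote: dict) -> bool:
    """Accept a result only when the attested measurement equals the
    digest the verifier computed for the image it intended to run."""
    return quote.get("mrtd") == EXPECTED_MRTD

good = {"mrtd": EXPECTED_MRTD}
bad = {"mrtd": hashlib.sha256(b"tampered-image").hexdigest()}
print(verify_measurement(good))  # True
print(verify_measurement(bad))   # False
```

The point of the pattern: trust moves from the cloud operator's claim to a digest comparison the relying party performs itself.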
Products
Full service for AI privacy: agent sandbox, LLM, and GPU.

Agent sandbox
Run agent tools, app servers, and Docker services inside TEE-backed runtime sandboxes.

H200
US · 24 vCPU
141GB VRAM
Intel TDX + NVIDIA CC
$2.56/GPU/hr
B300
US · 12 vCPU
288GB VRAM
Intel TDX + NVIDIA CC
$5.63/GPU/hr
GPU marketplace
Reserve H200 and B300 confidential GPU capacity for private AI training and inference.
Confidential models
Private LLMs with real model choice.
OpenAI-compatible LLM endpoints, private prompts, and verifiable runtime state.
Explore LLM models
All-in-one confidential compute platform for AI workloads.
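"OpenAI-compatible" means existing client code keeps working once the base URL points at the confidential endpoint. A minimal sketch using only the standard library; the base URL, model name, and API key are placeholders, not real Phala values, and the request is built but not sent.

```python
import json
import urllib.request

# Placeholders: not a real Phala endpoint or model identifier.
BASE_URL = "https://api.example-confidential-llm.com/v1"
MODEL = "example/llama-3-70b"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a standard /chat/completions request. Only the host differs
    from any other OpenAI-compatible provider."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer $API_KEY",  # placeholder credential
        },
    )

req = build_chat_request("Summarize TEE attestation in one line.")
print(req.full_url)  # request constructed, not sent
```

Because the wire format is unchanged, official OpenAI SDKs can be pointed at such an endpoint by overriding their base URL.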
Platform
Built for private AI work
Write code, dockerize, and deploy it as trustless TEE apps.
Proven at Scale
Built for enterprise security and regulatory requirements.
Building with confidential AI
Users: 0+
Daily Attestations: 0+ (runtime proofs generated and checked)
TEE Performance: 0% (near-native confidential GPU execution)
Total VMs: live network data from Dune
Confidential model tokens/day: 843M as of 2026-05-07 (from Phala's OpenRouter provider chart)
Real-World Success Stories
Discover how leading companies use Phala's confidential AI to build exceptional digital experiences while maintaining data privacy and regulatory compliance.

Financial Services
Private Financial AI
Phala enabled us to process sensitive trading data with AI while maintaining complete regulatory compliance. We've reduced compliance costs by 40% while improving model accuracy.

Healthcare Research
Medical AI with Sealed PHI
Multi-party collaboration on patient data without privacy compromise. Accelerated drug discovery by 60% while maintaining HIPAA compliance.

AI SaaS Platform
Enterprise AI SaaS
Phala's confidential AI helped us land Fortune 500 clients who required verifiable data protection. Increased enterprise sales by 300%.

Decentralized AI
Decentralized GPU and AI Economy
Built autonomous trading agents with verifiable execution. Users trust our AI because they can verify every decision on-chain.
Enterprise-Grade Compliance & Security
Deploy confidential AI with confidence. Phala is SOC 2 Type I certified and HIPAA compliant, with ISO 27001 certification in progress and privacy-by-design controls aligned with GDPR.
Visit Trust Center
FAQ
Common Questions & Answers
Find out all the essential details about our platform and how it can serve your needs.
What is Trusted Execution Environment (TEE)?
A TEE is a secure area inside a processor that protects code and data from the operating system, hypervisor, and other applications.
How does confidential AI protect sensitive data?
Sensitive data and AI models remain private during processing by running inside hardware-backed secure environments.
Is Phala compatible with existing AI frameworks?
Yes. Phala supports existing Docker services and popular AI frameworks including TensorFlow, PyTorch, and Hugging Face.
What are the performance implications?
Confidential GPU workloads typically target near-native performance, with roughly 5-10% overhead depending on workload and hardware.
How can I verify the security of my AI workloads?
Phala exposes cryptographic attestations so users and systems can verify the workload and runtime state.
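Beyond checking the VM image measurement, a verifier typically wants the quote bound to the exact workload that was deployed, for example by committing a hash of the Compose file into the quote's user data. A hedged sketch: `report_data` is a common TEE quote field, but the flat dict schema and the binding convention here are illustrative assumptions, not Phala's documented format.

```python
import hashlib

# Illustrative binding check. The quote schema is a placeholder;
# real quotes also carry signatures that must be validated first.
def compose_digest(compose_text: str) -> str:
    """Digest of the workload definition the verifier expects."""
    return hashlib.sha256(compose_text.encode()).hexdigest()

def quote_matches_workload(quote: dict, compose_text: str) -> bool:
    """True only if the quote's report_data commits to the same
    Compose file the verifier expects to be running."""
    return quote.get("report_data") == compose_digest(compose_text)

compose = "services:\n  api:\n    image: example/app:1.0\n"
quote = {"report_data": compose_digest(compose)}
print(quote_matches_workload(quote, compose))           # True
print(quote_matches_workload(quote, compose + "# x"))   # False
```

Any edit to the workload definition changes the digest, so a stale or tampered deployment fails the check even if the VM image itself is unmodified.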
How do I get started?
Install the Phala CLI, deploy a Docker workload, then inspect status, logs, and attestation from the command line.
Start building
Build AI you can prove.
Deploy private workloads, verify execution, and scale from models to GPU jobs.

