
Getting Started with Confidential Computing: A Practical Guide
Meta Description: Step-by-step guide to deploying your first confidential computing workload. Learn how to use Phala Cloud, AMD SEV, Intel TDX, and GPU TEE for protecting data in use.
Target Keywords: getting started confidential computing, confidential computing tutorial, deploy confidential VM, TEE setup guide, confidential AI deployment
Reading Time: 16 minutes
TL;DR — Getting Started with Confidential Computing
Get hands-on with Confidential Computing and deploy your first secure workload in under 30 minutes. In this guide, you’ll learn how to:
- Choose the right Confidential Computing platform for your use case
- Deploy a confidential application using Phala Cloud or other TEE-based providers
- Verify security through remote attestation
- Monitor and optimize confidential workloads for performance
Prerequisites: Basic knowledge of Docker and cloud computing, plus a Phala Cloud account (free tier available).
Best Starting Point: Phala Cloud for [GPU TEE](https://phala.com/gpu-tee) and AI workloads, or Google Cloud / Azure for general confidential VMs.
Step 1: Understanding Your Use Case
Before deploying, identify what you need to protect:
Quick Assessment Checklist
Data Sensitivity:
- ✅ Handling regulated data (HIPAA, GDPR, PCI-DSS)
- ✅ Processing personal identifiable information (PII)
- ✅ Working with trade secrets or intellectual property
- ✅ Multi-party data collaboration required
Technical Requirements:
- ✅ Need to protect data during processing (in use)
- ✅ Cannot trust cloud provider administrators
- ✅ Require cryptographic proof of security for audits
- ✅ Need hardware-enforced isolation
Workload Type:
- ✅ AI/ML models (LLM inference, training, fine-tuning)
- ✅ Web applications (APIs, databases, backend services)
- ✅ Data analytics (processing sensitive datasets)
- ✅ Collaborative computing (multi-party computation)
Recommended Platforms by Use Case
| Your Use Case | Best Platform | Technology |
| --- | --- | --- |
| AI/ML Workloads | Phala Cloud | NVIDIA H100/H200 GPU TEE |
| General Web Apps | Google Cloud | AMD SEV-SNP |
| Microsoft Ecosystem | Azure Confidential Computing | Intel TDX, AMD SEV-SNP |
| Enterprise Production | Multi-cloud (Phala + Azure) | GPU TEE + Confidential VM |
| Quick Testing | Phala Cloud Free Tier | Intel TDX |
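As a quick sketch, the table above can be encoded as a lookup helper. The `recommend_platform` function and its use-case keys are illustrative only, not part of any Phala or cloud tooling:

```python
# Hypothetical helper encoding the platform table above (illustrative only)
RECOMMENDATIONS = {
    "ai-ml": ("Phala Cloud", "NVIDIA H100/H200 GPU TEE"),
    "web": ("Google Cloud", "AMD SEV-SNP"),
    "microsoft": ("Azure Confidential Computing", "Intel TDX, AMD SEV-SNP"),
    "enterprise": ("Multi-cloud (Phala + Azure)", "GPU TEE + Confidential VM"),
    "testing": ("Phala Cloud Free Tier", "Intel TDX"),
}

def recommend_platform(use_case: str) -> tuple:
    """Return (platform, technology) for a use-case key from the table above."""
    try:
        return RECOMMENDATIONS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case!r}")

platform, tech = recommend_platform("ai-ml")
print(platform)  # Phala Cloud
```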
Step 2: Deploy Your First Confidential Application
Option A: Phala Cloud (Recommended for AI/ML)
Why Start Here: Best-in-class GPU TEE support, managed infrastructure, OpenAI-compatible AI APIs
2.1 Create a Phala Cloud Account
```bash
# Visit https://cloud.phala.network/register
# Sign up with email or social login
# Get free credits for testing
```

2.2 Deploy Pre-Built Confidential AI (Fastest Start)
The easiest way to experience confidential computing is using Phala’s pre-deployed LLMs:
Via Web Dashboard:
- Navigate to Phala Cloud Confidential AI
- Select a model (DeepSeek, Llama, Qwen, GPT-OSS)
- Get your API key
- Make your first confidential AI request:
```python
import openai

# Configure the OpenAI client to use Phala Cloud
client = openai.OpenAI(
    api_key="your-phala-api-key",
    base_url="https://api.phala.network/v1"
)

# Your queries are processed in GPU TEE - even Phala can't see them
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "Explain confidential computing"}
    ]
)
print(response.choices[0].message.content)
```

What’s happening:
- Your query is encrypted client-side
- Decrypted only inside NVIDIA H100 GPU TEE
- LLM processes it in encrypted memory
- Response encrypted before leaving TEE
- Phala Cloud operators cannot see your queries or responses
2.3 Deploy Custom Docker Application
For custom applications, use Phala Cloud’s Confidential VM:
Step 1: Prepare your Docker application
```yaml
# docker-compose.yaml
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    environment:
      - API_KEY=${SECRET_API_KEY}  # Will be encrypted
    volumes:
      - ./html:/usr/share/nginx/html
```

Step 2: Deploy via Phala Cloud Dashboard
- Go to Phala Cloud CVM Dashboard
- Click “Create New Application”
- Upload your `docker-compose.yaml`
- Add encrypted environment variables (API keys, secrets)
- Select TEE type:
  - Intel TDX: general workloads, most tested
  - AMD SEV-SNP: high-performance workloads
  - GPU TEE: AI/ML workloads (NVIDIA H100)
- Click “Deploy”
Step 3: Monitor Deployment
```bash
# Watch deployment logs in the dashboard
# Or use the Phala CLI (optional)

# Install the CLI
npm install -g @phala/cli

# Login
phala login

# Check status
phala apps list
phala apps logs <app-id>
```

2.4 Access Your Application
Once deployed, Phala Cloud provides:
Domain Access:
- Format: `<app-id>.dstack.phala.network` or `<app-id>.app.phala.network`
- Automatic HTTPS with RA-TLS (Remote Attestation TLS)
- Example: `3a7f9e2b.dstack.phala.network`

Port Mapping:
- Default (port 80/443): `<app-id>.dstack.phala.network`
- Custom port: `<app-id>-8080.dstack.phala.network` (for port 8080)
- TLS passthrough: `<app-id>-8080s.dstack.phala.network` (with ‘s’ suffix)
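Given those naming rules, a small helper can assemble the right URL for any port. This is a sketch only; `app_url` is not part of any Phala SDK, and the mapping logic simply transcribes the conventions listed above:

```python
def app_url(app_id: str, port: int = 80, tls_passthrough: bool = False,
            base: str = "dstack.phala.network") -> str:
    """Build the public URL for a Phala Cloud app (hypothetical helper).

    Ports 80/443 map to the bare domain; other ports get a `-<port>` label,
    with an `s` suffix for TLS passthrough, per the docs above.
    """
    if port in (80, 443) and not tls_passthrough:
        host = f"{app_id}.{base}"
    else:
        suffix = "s" if tls_passthrough else ""
        host = f"{app_id}-{port}{suffix}.{base}"
    return f"https://{host}"

print(app_url("3a7f9e2b"))              # https://3a7f9e2b.dstack.phala.network
print(app_url("3a7f9e2b", 8080))        # https://3a7f9e2b-8080.dstack.phala.network
print(app_url("3a7f9e2b", 8080, True))  # https://3a7f9e2b-8080s.dstack.phala.network
```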
Test your deployment:
```bash
# Check if the app is running
curl https://<your-app-id>.dstack.phala.network

# Get the attestation report
curl https://<your-app-id>.dstack.phala.network/.well-known/attestation
```

Option B: Google Cloud Confidential VM
Why Use Google Cloud: Mature ecosystem, good for general workloads, enterprise integrations
2.1 Setup Google Cloud
```bash
# Install the Google Cloud SDK
curl https://sdk.cloud.google.com | bash
exec -l $SHELL

# Authenticate
gcloud auth login

# Set your project
gcloud config set project YOUR_PROJECT_ID

# Enable the Confidential Computing API
gcloud services enable confidentialcomputing.googleapis.com
```

2.2 Create Confidential VM
```bash
# Create an Ubuntu confidential VM with AMD SEV
gcloud compute instances create my-confidential-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-4 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=50GB

# Verify confidential computing is enabled
gcloud compute instances describe my-confidential-vm \
  --zone=us-central1-a \
  --format="value(confidentialInstanceConfig.enableConfidentialCompute)"
# Should output: true
```

2.3 Deploy Application
```bash
# SSH into the VM
gcloud compute ssh my-confidential-vm --zone=us-central1-a

# Inside the VM, install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Deploy your application
docker run -d -p 80:80 nginx:latest

# Verify the TEE is active
cat /sys/firmware/efi/efivars/SevStatus-* | od -t x1
# Look for SEV-enabled flags
```

Option C: Azure Confidential VM
Why Use Azure: Best for Microsoft ecosystem, excellent Windows support
2.1 Setup Azure CLI
```bash
# Install the Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Login
az login

# Create a resource group
az group create --name confidential-rg --location eastus
```

2.2 Deploy Confidential VM
```bash
# Create a confidential VM with AMD SEV-SNP
az vm create \
  --resource-group confidential-rg \
  --name my-confidential-vm \
  --size Standard_DC4as_v5 \
  --image Ubuntu2204 \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --admin-username azureuser \
  --generate-ssh-keys

# Connect
ssh azureuser@<VM_PUBLIC_IP>

# Verify SEV-SNP
sudo dmesg | grep -i sev
# Should see: AMD Memory Encryption Features active: SEV SEV-ES SEV-SNP
```

Step 3: Verify Security with Remote Attestation
Critical: Attestation proves your application runs in a genuine TEE before sending sensitive data.
Understanding Attestation
What Attestation Proves:
- Running on genuine TEE hardware (AMD SEV, Intel TDX, NVIDIA H100)
- Software matches expected measurements (correct OS, app, config)
- No tampering or malicious modifications
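Whichever platform you use, the core of an attestation check is comparing the reported measurements against values you pinned in advance. A minimal sketch, with illustrative field names and a made-up hash:

```python
# Expected measurements, pinned ahead of time (values are placeholders)
EXPECTED = {
    "tee_type": "TDX",
    "image_hash": "sha256:abc123",  # pin the hash of your exact build
}

def measurements_match(report: dict, expected: dict = EXPECTED) -> bool:
    """Return True only if every expected measurement matches the report.

    Extra fields in the report are ignored; any missing or mismatched
    expected field fails the check.
    """
    return all(report.get(k) == v for k, v in expected.items())

report = {"tee_type": "TDX", "image_hash": "sha256:abc123", "firmware": "1.2"}
print(measurements_match(report))  # True
```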
Attestation Flow (simplified): the TEE generates a quote signed by the hardware over its measurements, a verifier checks the signature and compares the measurements against expected values, and only then does the client send sensitive data.
Phala Cloud Attestation
Using Trust Center (Web UI):
- Visit Phala Trust Center
- Enter your application ID or domain
- View attestation details:
  - TEE type (Intel TDX, AMD SEV-SNP, GPU TEE)
  - Docker image hash
  - Environment variables hash
  - Hardware signature verification
Programmatic Verification:
```python
import requests

def verify_phala_attestation(quote_hex: str) -> bool:
    """Verify a TEE attestation quote using the Phala Cloud API.

    Args:
        quote_hex: Hex-encoded attestation quote

    Returns:
        True if verified, False otherwise
    """
    # Use the official Phala Cloud Attestation API
    verify_response = requests.post(
        "https://cloud-api.phala.network/api/v1/attestations/verify",
        headers={"Content-Type": "application/json"},
        json={"hex": quote_hex}
    )
    result = verify_response.json()
    if result.get("success") and result.get("quote", {}).get("verified"):
        checksum = result.get("checksum")
        tee_type = result.get("quote", {}).get("header", {}).get("tee_type")
        print("✅ Attestation verified!")
        print(f"   TEE Type: {tee_type}")
        print(f"   Checksum: {checksum}")
        print(f"   Proof URL: https://proof.t16z.com/reports/{checksum}")
        return True
    else:
        print("❌ Attestation verification failed")
        return False

# Example: get a quote from your Phala Cloud CVM,
# then verify it before sending sensitive data
# quote_hex = get_quote_from_app()  # Your app provides this
# if verify_phala_attestation(quote_hex):
#     send_sensitive_data()
```

Step 4: Deploy Confidential AI Model (Advanced)
For AI/ML workloads, deploy custom models on GPU TEE:
Using Phala Cloud Model Templates
```bash
# Via the dashboard:
# 1. Go to https://cloud.phala.network/dashboard/confidential-ai-models
# 2. Click "Deploy Custom Model"
# 3. Select a template:
#    - Text Generation (Llama, Mistral)
#    - Image Generation (Stable Diffusion)
#    - Embedding Models (BGE, E5)
# 4. Upload your model weights or use a Hugging Face model ID
# 5. Configure resources (GPU: H100, RAM, storage)
# 6. Deploy
```

Using Phala Cloud GPU TEE (Full Control)
For maximum flexibility, rent dedicated GPU TEE:
```yaml
# ai-service.yaml
version: "3"
services:
  llm-inference:
    image: your-dockerhub/llm-server:latest
    ports:
      - "8000:8000"
    environment:
      - MODEL_PATH=/models/llama-70b
      - MAX_BATCH_SIZE=32
    volumes:
      - /var/run/dstack.sock:/var/run/dstack.sock  # For attestation
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Deploy:
- Upload to Phala Cloud GPU TEE Dashboard
- Select NVIDIA H100 Confidential Computing
- Configure: 1x H100 (80GB), 64GB RAM, 500GB SSD
- Deploy and monitor
Get TDX/TEE Quote Inside Container
```python
# Inside your container, use the Dstack SDK to get an attestation quote
import requests
from dstack_sdk import DstackClient

client = DstackClient()

# Get the attestation quote
quote_result = client.get_quote(report_data=b"")
quote_hex = quote_result.encode_quote_hex()
print(f"Quote (hex): {quote_hex}")

# The quote contains:
# - Hardware measurements (TEE type, firmware versions)
# - Container image hash
# - Environment configuration hash
# - Cryptographic signature from Intel/AMD/NVIDIA hardware

# Submit the quote for verification
response = requests.post(
    "https://cloud-api.phala.network/api/v1/attestations/verify",
    json={"hex": quote_hex}
)
print(f"Verification result: {response.json()}")
```

Step 5: Migrate Existing Application
Most applications can run in a confidential environment with minimal changes.
Migration Checklist
Pre-Migration:
- ✅ Document current setup (dependencies, ports, volumes)
- ✅ Identify sensitive data flows
- ✅ Choose equivalent confidential platform
- ✅ Plan attestation verification
During Migration:
- ✅ Containerize if not using Docker
- ✅ Update client apps to verify attestation
- ✅ Test in confidential environment
- ✅ Monitor performance (expect 2-10% overhead)
Post-Migration:
- ✅ Implement continuous attestation monitoring
- ✅ Update disaster recovery (no live migration)
- ✅ Document attestation policies for audits
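The checklist above can be turned into a simple pre-flight check. This is a sketch with made-up field names, not a real migration tool:

```python
def preflight(app: dict) -> list:
    """Return blocking issues before migrating an app to a TEE.

    Illustrative only: the `app` dict fields are hypothetical flags you
    would populate from your own inventory of the application.
    """
    issues = []
    if not app.get("containerized"):
        issues.append("Containerize the app first (Docker)")
    if app.get("uses_live_migration"):
        issues.append("Update DR plan: confidential VMs don't live-migrate")
    if not app.get("attestation_in_clients"):
        issues.append("Add attestation verification to client apps")
    return issues

print(preflight({"containerized": True, "uses_live_migration": True}))
```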
Example: Web App Migration to Phala Cloud
Original: Traditional Docker Compose app on VPS
```yaml
# Original docker-compose.yaml
version: "3"
services:
  web:
    image: myapp/backend:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://...
      - API_KEY=secret123
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=dbpass
volumes:
  db-data:
```

Migrated: Same app on Phala Cloud (zero code changes)
```yaml
# phala-docker-compose.yaml
version: "3"
services:
  web:
    image: myapp/backend:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://db:5432/mydb
      - API_KEY=${SECRET_API_KEY}  # Encrypted in Phala Cloud
    volumes:
      - /var/run/dstack.sock:/var/run/dstack.sock  # For attestation
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${SECRET_DB_PASSWORD}  # Encrypted
volumes:
  db-data:
```

Client Update:
```javascript
// Before: direct connection
fetch('https://myapp.example.com/api/data', {
  method: 'POST',
  body: JSON.stringify(sensitiveData)
});

// After: verify attestation first
async function sendToConfidentialApp(data) {
  // Verify the TEE attestation
  const attestation = await fetch(
    'https://myapp-id.dstack.phala.network/.well-known/attestation'
  );
  const verified = await verifyAttestation(attestation);
  if (!verified) {
    throw new Error('TEE verification failed!');
  }
  console.log('✅ TEE verified. Safe to send data.');

  // Now send the data
  return fetch('https://myapp-id.dstack.phala.network/api/data', {
    method: 'POST',
    body: JSON.stringify(data)
  });
}
```

Step 6: Monitor and Optimize Performance
Performance Benchmarks
| Workload Type | Expected Overhead | Platform |
| --- | --- | --- |
| Web Server (nginx) | 2-5% | All TEE platforms |
| Database (PostgreSQL) | 5-10% | AMD SEV, Intel TDX |
| AI Inference (CPU) | 5-8% | Intel TDX |
| AI Inference (GPU) | 1-5% | NVIDIA H100 TEE |
| LLM Training (GPU) | 3-8% | NVIDIA H100 TEE |
Phala Cloud Performance: Up to 99% efficiency on GPU TEE workloads (near-native)
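A quick sanity check: given the overhead figures in the table above, you can estimate what throughput to budget for. The function below is illustrative arithmetic, not a benchmark:

```python
def effective_throughput(native_rps: float, overhead_pct: float) -> float:
    """Estimate requests/sec inside a TEE from a native baseline.

    `overhead_pct` comes from the table above (e.g. 2-5% for nginx);
    pick the upper bound for conservative capacity planning.
    """
    return native_rps * (1 - overhead_pct / 100)

# A web server doing 10,000 req/s natively, at the 5% upper bound:
print(effective_throughput(10_000, 5))  # 9500.0
```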
Monitoring via Phala Cloud Dashboard
```bash
# View real-time metrics in the dashboard:
# - CPU usage
# - Memory consumption
# - Network I/O
# - GPU utilization (for GPU TEE)
# - Container logs

# Access logs via the CLI
phala apps logs <app-id> --follow

# Or via the API
curl 'https://<app-id>.dstack.phala.network:9090/logs/<container>?follow=true&timestamps=true'
```

Optimization Tips
1. Right-Size Resources
```bash
# Start larger, then optimize down
# For Phala Cloud:
# - Small apps: 2 vCPU, 4GB RAM
# - Medium: 4 vCPU, 8GB RAM
# - AI inference: 8 vCPU, 16GB RAM + 1x H100 GPU
```

2. Use Efficient Images
```dockerfile
# Use minimal base images
FROM alpine:latest  # Instead of ubuntu

# Or distroless
FROM gcr.io/distroless/python3
```

3. Optimize Data Access
```python
# Bad: frequent small reads (high encryption overhead)
for i in range(1000):
    data = read_from_encrypted_storage(i)

# Good: batch reads
data = read_batch_from_encrypted_storage(range(1000))
```

Step 7: Security Best Practices
1. Continuous Attestation
```python
import time

import requests
import schedule
from dstack_sdk import DstackClient

def check_attestation():
    """Continuously verify TEE integrity."""
    try:
        # Generate a fresh attestation quote
        client = DstackClient()
        quote_result = client.get_quote(report_data=b"")
        quote_hex = quote_result.encode_quote_hex()

        # Verify the quote with the Phala Cloud API
        response = requests.post(
            "https://cloud-api.phala.network/api/v1/attestations/verify",
            json={"hex": quote_hex}
        )
        result = response.json()
        if result.get("success") and result.get("quote", {}).get("verified"):
            print(f"[{time.ctime()}] ✅ Attestation valid")
            print(f"   Proof: https://proof.t16z.com/reports/{result['checksum']}")
        else:
            print(f"[{time.ctime()}] ❌ ALERT: Attestation failed!")
            alert_security_team()
    except Exception as e:
        print(f"Attestation error: {e}")
        alert_security_team()

# Check every 5 minutes
schedule.every(5).minutes.do(check_attestation)
while True:
    schedule.run_pending()
    time.sleep(60)
```

2. Secure Environment Variables
In Phala Cloud, environment variables are encrypted:
```bash
# In the dashboard, set encrypted variables:
# SECRET_API_KEY=your-secret-key
# DATABASE_PASSWORD=your-db-password

# These are encrypted client-side before upload
# and decrypted only inside the TEE.
# Phala operators cannot see them.
```

3. Audit Logging
```python
import logging

audit_log = logging.getLogger('tee-audit')
handler = logging.FileHandler('/var/log/tee-audit.log')
handler.setFormatter(logging.Formatter(
    '%(asctime)s | %(levelname)s | %(message)s'))
audit_log.addHandler(handler)

def log_attestation_event(app_id, verified):
    if verified:
        audit_log.info(f"App {app_id} attestation SUCCESS")
    else:
        audit_log.error(f"App {app_id} attestation FAILURE")
```

Common Issues and Troubleshooting
Issue 1: Application Not Accessible
Symptom: Cannot reach <app-id>.dstack.phala.network
Solution:
```bash
# Check app status in the dashboard
phala apps status <app-id>

# Check container logs
phala apps logs <app-id>

# Common issues:
# - Container not listening on the exposed port
# - Firewall blocking ports (check docker-compose ports)
# - App still starting (wait 2-3 minutes for CVM boot)
```

Issue 2: Attestation Verification Fails
Symptom: Attestation endpoint returns error
Solution:
```bash
# Verify the app is fully started
curl https://<app-id>.dstack.phala.network/health

# Check the Trust Center for the detailed error
# Visit: https://ra-quote-explorer.vercel.app/
# Enter your app ID

# Common causes:
# - App still initializing (wait longer)
# - Docker image hash changed (expected if you updated)
# - Network issues (retry)
```

Issue 3: Performance Lower Than Expected
Symptom: App slower than non-TEE deployment
Diagnosis:
```bash
# Check resource usage in the dashboard
# If CPU >80% or memory >80%, upsize

# For Phala Cloud, increase resources:
# Settings → Resize → Select a larger instance

# Check whether the app is I/O bound
# Add more disk IOPS or use a faster storage tier
```

Next Steps
Beginner → Intermediate
- Deploy [GPU TEE](https://phala.com/gpu-tee) workload for AI/ML
- Implement confidential multi-party computation
- Set up production monitoring and alerts
- Integrate with CI/CD pipelines
Intermediate → Advanced
- Build confidential AI-as-a-Service platform
- Implement zero-trust network architecture
- Automate compliance reporting with attestation logs
- Optimize for high-throughput production
Frequently Asked Questions
How long does it take to deploy?
Phala Cloud: 2-5 minutes for container deployment, instant for pre-deployed AI APIs
Google Cloud/Azure: 5-10 minutes for VM provisioning + app deployment
Do I need to change my code?
Application code: Usually no changes required
Client code: Add attestation verification (10-50 lines typically)
Which platform should I use?
- AI/ML workloads: Phala Cloud (best GPU TEE support)
- General web apps: Google Cloud (mature, easy)
- Microsoft ecosystem: Azure (enterprise integration)
- Highest security: Phala Cloud (zero-trust, decentralized KMS)
How much does it cost?
- Phala Cloud: Pay-per-second, transparent pricing, free tier available
- CVM: ~$0.10-0.50/hour depending on size
- GPU TEE: ~$2-5/hour for H100
- Google Cloud: Similar to regular VMs + 10-30% premium
- Azure: DC-series VMs are 10-40% more than standard
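For budgeting, a back-of-envelope estimate from the indicative hourly rates above. The rates are this article's ballpark figures, not an official price list, and the helper function is illustrative:

```python
def monthly_cost(hourly_rate: float, hours_per_day: float = 24,
                 days: int = 30) -> float:
    """Back-of-envelope monthly cost from an hourly rate (illustrative)."""
    return hourly_rate * hours_per_day * days

# A small CVM at ~$0.10/hour, running continuously:
print(f"${monthly_cost(0.10):.2f}/month")     # $72.00/month
# An H100 GPU TEE at ~$2/hour, 8 hours/day:
print(f"${monthly_cost(2.00, 8):.2f}/month")  # $480.00/month
```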
Can I access the VM directly?
Phala Cloud: SSH access available for -dev images (for debugging)
Google Cloud/Azure: Full SSH access
How do I prove security to auditors?
Provide attestation reports showing:
- Cryptographic proof of TEE execution
- Measurement logs (what code ran)
- Continuous monitoring records
- Trust Center verification history (for Phala)
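One way to package those items is a single JSON evidence record per attestation check. The format below is hypothetical; adapt the fields to whatever your auditor actually requires:

```python
import json
import time

def audit_evidence(app_id: str, quote_hex: str, verified: bool,
                   proof_url: str) -> str:
    """Bundle attestation evidence into a JSON record for auditors.

    Hypothetical record format -- the field names are illustrative,
    not a standard schema.
    """
    record = {
        "app_id": app_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "quote_hex": quote_hex,
        "verified": verified,
        "proof_url": proof_url,
    }
    return json.dumps(record, indent=2)

print(audit_evidence("my-app", "deadbeef", True,
                     "https://proof.t16z.com/reports/example"))
```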
Conclusion
You now know how to:
- ✅ Deploy confidential applications on Phala Cloud, Google Cloud, and Azure
- ✅ Verify security through remote attestation
- ✅ Migrate existing Docker apps with minimal changes
- ✅ Monitor and optimize confidential workloads
- ✅ Follow security best practices
Confidential computing shifts security from trust to cryptography. Start with a simple deployment, verify it works, then expand to production workloads.
The future of cloud computing is confidential by default—and you’re now equipped to build it.