Centralized training infrastructure exposes datasets and model IP. Phala enables consortium learning and regulated-industry training with hardware isolation.
Traditional cloud infrastructure exposes sensitive information to operators and administrators.
Hardware-enforced isolation prevents unauthorized access while maintaining computational efficiency.
End-to-end encryption protects data in transit, at rest, and, critically, during computation.
Cryptographic verification ensures code integrity and proves execution on genuine TEE hardware.
Deploy a fully optimized confidential stack, or upgrade your existing training setup.
Load training datasets directly from private sources inside TEEs. Your sensitive data never leaves the secure enclave during the entire training pipeline.
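As a rough sketch of what this can look like from inside the enclave, the shell below fetches an encrypted shard from private storage and decrypts it only after an attestation-gated key release. The bucket URL, key server, and quote file are hypothetical placeholders, not a documented Phala interface.

# Illustrative only: PRIVATE_BUCKET_URL, KEY_SERVER_URL, and quote.bin are hypothetical
curl -sS "$PRIVATE_BUCKET_URL/consortium-shard-01.jsonl.enc" -o /data/shard.enc

# A key server would release the data key only after verifying this enclave's attestation quote
DATA_KEY=$(curl -sS -X POST "$KEY_SERVER_URL/release" --data-binary @quote.bin)

# Decrypt inside the enclave; plaintext exists only within protected memory and sealed storage
openssl enc -d -aes-256-cbc -pbkdf2 -k "$DATA_KEY" \
  -in /data/shard.enc -out /data/shard.jsonl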
Training gradients stay encrypted end-to-end, and hardware attestation proves your model updates were computed inside a sealed enclave.
Every training run generates cryptographic proof that your data remained confidential throughout the process.
Train on distributed TEE clusters worldwide. Scale your confidential training workloads across secure data centers with hardware-level isolation.
Launch distributed pre-training jobs on confidential GPU clusters using Slurm and Kubernetes templates with TEE attestation and sealed checkpoint storage; a Slurm sketch follows the Docker example below.
# Deploy distributed training on TEE cluster
# WORLD_SIZE is the total node count; run once per node with RANK set to 0..7
docker run -d \
  --name phala-training \
  --gpus all \
  --device=/dev/tdx_guest \
  -v "$(pwd)/data:/data" \
  -v "$(pwd)/checkpoints:/checkpoints" \
  -e WORLD_SIZE=8 \
  -e RANK=0 \
  -e MASTER_ADDR=10.0.1.100 \
  -e MASTER_PORT=29500 \
  -e MODEL_CONFIG=/data/llama-70b.json \
  -e TRAINING_DATA='/data/consortium/*.jsonl' \
  -e CHECKPOINT_DIR=/checkpoints \
  phalanetwork/training:latest
# Monitor training progress
docker logs -f phala-training
# Training output from sealed environment
# Epoch 1/10: Loss 2.134 | Throughput 1.2M tok/s
# Epoch 2/10: Loss 1.876 | Throughput 1.2M tok/s
# Checkpoint saved: /checkpoints/epoch-2.bin
# Attestation signed: 0x8a9b7c6d...
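The Slurm template mentioned above can follow the same shape. Below is a minimal sketch, assuming a TEE-enabled partition named tee-gpu and one container per node; the partition name and the Docker-under-srun launch are illustrative assumptions, not the actual Phala template.

#!/bin/bash
#SBATCH --job-name=phala-training
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=8
#SBATCH --partition=tee-gpu

# Rendezvous address is the first allocated node
MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

# Slurm sets SLURM_NODEID per node, which maps directly to the training rank
srun docker run --rm \
  --gpus all \
  --device=/dev/tdx_guest \
  -v /data:/data \
  -v /checkpoints:/checkpoints \
  -e WORLD_SIZE="$SLURM_JOB_NUM_NODES" \
  -e RANK="$SLURM_NODEID" \
  -e MASTER_ADDR="$MASTER_ADDR" \
  -e MASTER_PORT=29500 \
  -e MODEL_CONFIG=/data/llama-70b.json \
  -e TRAINING_DATA='/data/consortium/*.jsonl' \
  -e CHECKPOINT_DIR=/checkpoints \
  phalanetwork/training:latest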
Generate cryptographic proofs of your training process. Verify cluster attestation, dataset hashes, and reproducible build IDs for auditors and consortium partners.
# Get cluster attestation and training lineage
curl -X POST https://cloud-api.phala.network/api/v1/training/verify \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "train-consortium-llama-70b",
    "verify_cluster_attestation": true,
    "verify_dataset_hashes": true,
    "verify_checkpoint_lineage": true
  }'
# Attestation proves sealed training
{
  "verified": true,
  "cluster_size": 8,
  "tee_type": "Intel TDX",
  "dataset_hashes": [
    "0x8a9b7c6d...",
    "0x1a2b3c4d..."
  ],
  "checkpoint_lineage": "llama-70b-base -> epoch-10.bin",
  "reproducible_build_id": "0xfe7d8c9b...",
  "timestamp": "2025-01-15T14:30:00Z"
}
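An audit script can consume this response directly. Here is a minimal sketch using jq, assuming the reported hashes are SHA-256 digests of the raw shard files; the local shard path is a hypothetical example.

# Request dataset hashes and fail the audit if attestation did not verify
RESPONSE=$(curl -sS -X POST https://cloud-api.phala.network/api/v1/training/verify \
  -H "Content-Type: application/json" \
  -d '{"job_id": "train-consortium-llama-70b", "verify_dataset_hashes": true}')
test "$(echo "$RESPONSE" | jq -r '.verified')" = "true" || exit 1

# Cross-check the first reported hash against a locally computed digest
REPORTED=$(echo "$RESPONSE" | jq -r '.dataset_hashes[0]')
LOCAL="0x$(sha256sum data/consortium/shard-01.jsonl | awk '{print $1}')"
test "$REPORTED" = "$LOCAL" && echo "dataset hash matches" || echo "hash mismatch"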
Meeting the highest compliance requirements for your business
Everything you need to know about Confidential Training
Train large-scale AI models on sensitive datasets with multi-GPU TEE clusters and hardware-enforced encryption.
Deploy on Phala