Phala 2025: Year in Review

December 22, 2025
5 min read

2025 marked Phala's most transformative year yet: the year Phala grew into a full-stack confidential AI infrastructure company. The accompanying rebrand wasn't cosmetic. It was a strategic repositioning that reflects where the market actually needed Phala to be.

This is Phala in 2025.


Strategic Shift

Phala Network → Phala.com

In September 2025, Phala Network officially rebranded to phala.com, signaling its evolution from a Web3-native project into a global confidential AI infrastructure company. Founded in 2019 by Marvin Tong and Hang Yin, Phala had built the world's largest decentralized TEE network. But the market was moving, and so was Phala.

After joining NVIDIA's Inception Program in 2025, Phala extended its expertise from blockchain to AI, where enterprises demand systems that are powerful, trustworthy, private, and verifiable. This shift reflects Phala's ambition to move beyond crypto infrastructure into mainstream enterprise AI adoption, positioning itself as the trust layer for global AI economies.

The Migration of Phala Verification: Polkadot → Ethereum

November 2025 brought an even bigger change: sunsetting the Phala parachain and migrating to Ethereum L2. The parachain slot was expiring on November 20th, and the decision was strategic rather than reactive.

Why migrate? Intel's confidential computing roadmap had shifted from SGX (which powered the parachain) to Intel TDX and GPU-based solutions. The parachain ran on ~29,000 SGX miners with 65M PHA delegated: a stable setup, but one locked into a declining technology. Meanwhile, Phala's commercial pilots were already running on TDX CPUs and GPU Confidential Compute.

The migration consolidates Phala’s staking, governance, mining, and asset flows onto Ethereum L2, while preserving Ethereum L1 for staking security. This enables GPU and Intel TDX–based confidential computing with vPHA rewards and a unified asset migration path. It marks a new era for Phala — a deliberate choice of scalability and future-proofing over legacy infrastructure.


Phala’s Full Stack: From Deploy to Prove

Phala’s 2025 platform came together around three core pillars.

Pillar A — Confidential VMs with Confidential GPUs

Phala shipped production‑grade Confidential GPU VMs, combining Intel TDX protection for CPU and memory with NVIDIA Confidential Computing for GPUs. This delivers end‑to‑end protection across the full compute stack.

Teams now run:

  • AI training and inference on sensitive data
  • Zero‑knowledge proof generation with under 5% overhead
  • Non‑custodial AI agents where cryptographic keys never leave the enclave

And crucially, the performance cost stayed within what real systems can tolerate. Internal benchmarking across the ecosystem showed TEE overhead remains under ~7% for typical LLM queries, with some large-model cases showing a minimal penalty, making confidential AI viable for production rather than only high-security niches.
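The overhead figures above come from comparing paired runs of the same workload inside and outside a Confidential VM. A minimal sketch of that calculation, using hypothetical latency numbers (not Phala's published benchmark data):

```python
# Illustrative only: how a TEE overhead figure like "under ~7%" is derived
# from paired benchmark runs. The latency numbers below are hypothetical.

def tee_overhead_pct(baseline_latency: float, tee_latency: float) -> float:
    """Relative slowdown of a TEE run vs. an identical non-TEE baseline."""
    return (tee_latency - baseline_latency) / baseline_latency * 100

# Hypothetical mean per-query latencies (seconds) for the same LLM prompt set.
baseline = 1.00   # bare-metal GPU run
in_tee = 1.05     # same run inside a Confidential GPU VM

print(f"TEE overhead: {tee_overhead_pct(baseline, in_tee):.1f}%")  # 5.0%
```

For large models the GPU compute dominates and the fixed encryption/attestation cost amortizes away, which is why bigger workloads tend to show the smallest relative penalty.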

Pillar B — Trust Center: Prove It, Don’t Claim It

In October 2025, Phala launched the Trust Center to address a core industry failure: everyone claims privacy, but nobody proves it.

Every workload running on Phala generates verifiable evidence:

  • Dual remote attestation signed by Intel (TDX) and NVIDIA
  • Source‑code integrity verification
  • Zero‑trust networking enforcement
  • KMS access controls tied to attestation  

The Trust Center turns “trust us” into an auditable posture. In 2025, this became a differentiator: an execution layer with a cryptographic audit trail that even the cloud operator can’t tamper with. 
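In spirit, "prove it, don't claim it" means a verifier checks concrete evidence rather than taking the operator's word. A minimal sketch of that gating logic, with a hypothetical evidence schema (the field names below are illustrative, not the actual Trust Center format):

```python
# Sketch of evidence-based verification. The evidence dict and its field
# names ("intel_tdx_quote", "nvidia_cc_report", "mr_td") are hypothetical
# stand-ins for real attestation artifacts, which are signed binary quotes.

EXPECTED_MR = "a1b2c3"  # expected measurement of the audited source build

def verify_evidence(evidence: dict) -> bool:
    """Accept a workload only if both hardware attestations are present
    and the reported code measurement matches the expected build."""
    checks = [
        evidence.get("intel_tdx_quote") is not None,   # CPU/memory attestation
        evidence.get("nvidia_cc_report") is not None,  # GPU attestation
        evidence.get("mr_td") == EXPECTED_MR,          # source-code integrity
    ]
    return all(checks)

good = {"intel_tdx_quote": "...", "nvidia_cc_report": "...", "mr_td": EXPECTED_MR}
tampered = dict(good, mr_td="deadbeef")
print(verify_evidence(good), verify_evidence(tampered))  # True False
```

A real verifier additionally validates the Intel and NVIDIA signature chains on the quotes; the point of the sketch is that the decision is mechanical, not reputational.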

Pillar C — dstack: The Open Developer Surface

Confidential computing only wins if it feels like normal cloud development. dstack provides a familiar container‑based workflow for deploying confidential applications, with secrets management and zero‑trust HTTPS built in.

In 2025, Phala donated dstack to the Linux Foundation, placing critical infrastructure under neutral governance. This move increased enterprise trust and positioned dstack as open infrastructure for the broader confidential computing ecosystem.


The Methods Behind the Trust

Phala didn’t win 2025 by saying “we’re secure.” It won by operationalizing trust into repeatable mechanisms, an all-in-one secure stack for private AI:

  • Decentralized root of trust — verification without dependence on a single provider
  • TEE‑controlled HTTPS — certificates managed inside the enclave itself
  • Secure KMS — only attested workloads can access secrets
  • Open verification posture — open‑source tooling and neutral governance as a differentiator
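The "secure KMS" rule above can be sketched in a few lines: secrets are released only after a workload's attestation verifies. The KMS interface below is a hypothetical illustration of the gating logic, not Phala's actual API:

```python
# Sketch of "only attested workloads can access secrets". In a real system
# the attestation result comes from quote verification; here it is a flag.

SECRETS = {"db_password": "s3cret"}  # illustrative secret store

def kms_get_secret(name: str, attestation_ok: bool) -> str:
    """Release a secret only to a workload whose attestation verified."""
    if not attestation_ok:
        raise PermissionError("attestation failed: secret withheld")
    return SECRETS[name]

print(kms_get_secret("db_password", attestation_ok=True))   # s3cret

try:
    kms_get_secret("db_password", attestation_ok=False)
except PermissionError as err:
    print(err)  # attestation failed: secret withheld
```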

And in regulated domains, trust also requires governance maturity. By the end of 2025, Phala achieved SOC 2 Type I and HIPAA compliance, reinforcing that Phala is built for enterprise-grade operational rigor, not just hardware primitives.


Traction: Real Workloads, Real Demand

By the end of 2025, more than a thousand teams deployed production workloads, spanning AI, cryptography, and autonomous agents. Growth was driven not by experimentation, but by systems running on sensitive data that could not safely operate in traditional clouds.

Phala Cloud finished 2025 with:

  • 10,004 total users
  • 2,113 subscribed users
  • 398 paid users
  • 2,529 total CVMs, with 813 running
  • 6,725 total vCPU provisioned, with 1,969 running vCPU
  • 22k GB total memory, with ~7k GB running memory
  • 157.6k GB total disk, with ~61.5k GB running disk

Proof at scale: confidential AI is no longer a lab project. Phala demonstrated confidential AI readiness at real throughput, processing over 1.34B LLM tokens in a single day and supporting confidential inference with ~0.5%–5% overhead in production conditions. That matters because it clears the practical threshold: secure enough for sensitive data, fast enough for real applications.


Ecosystem and Go‑to‑Market

Phala’s growth in 2025 was powered by a coordinated multi‑surface strategy:

  • dstack leading developer adoption and open‑source credibility
  • Phala Cloud converting usage into recurring revenue
  • Confidential AI APIs integrating with existing AI workflows
  • Enterprise contracts for high‑intensity, regulated use cases

Partnerships across AI, ZK, and decentralized data ecosystems reinforced Phala’s role as a neutral trust layer rather than a closed platform.

2025 Partnerships

Succinct Labs

Partnership combining ZK proofs with TEE attestation to create multi-proof systems that hedge against cryptographic failures while preserving verifiability. Succinct’s SP1 zkVM runs inside Phala GPU TEEs with <5% verified overhead.

Sentient

Founding members of the Verifiable Compute Consortium, advancing open, decentralized, and trust-minimized AI by uniting Sentient’s community-owned model ecosystem with Phala’s confidential and verifiable compute infrastructure.

Newton

Partnership integrating Newton’s verifiable AI agent platform with Phala TEEs to enable secure, transparent, and cryptographically verifiable autonomous finance agents.

NEAR Protocol & NEAR AI

Deep integration with the NEAR ecosystem, including 11 verified applications in Phala’s Trust Center and TEE deployment templates for NEAR Shade Agents.

OLLM

Phala x OLLM enables hardware-secured, privacy-preserving LLM inference with verifiable execution and simple API access using Phala’s confidential AI cloud.

OpenRouter

Phala became a verified OpenRouter provider, exposing confidential AI models to a large developer base with zero workflow changes beyond an API endpoint swap.
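The "endpoint swap" works because OpenRouter exposes an OpenAI-compatible API, so an existing client only needs a new base URL. A sketch of the request an integration would send; the model id and the "phala" provider slug are illustrative assumptions, not confirmed identifiers:

```python
# Sketch of the one-line endpoint swap: point an OpenAI-style client at
# OpenRouter instead of api.openai.com. Model id and provider slug below
# are hypothetical placeholders.

OPENROUTER_BASE = "https://openrouter.ai/api/v1"  # swapped-in base URL

def build_chat_request(prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat completion request payload."""
    return {
        "url": f"{OPENROUTER_BASE}/chat/completions",
        "body": {
            "model": "meta-llama/llama-3.1-8b-instruct",  # hypothetical model
            "messages": [{"role": "user", "content": prompt}],
            # Provider routing hint; pinning to a confidential provider
            # (slug assumed) would keep inference on TEE-backed hardware.
            "provider": {"order": ["phala"]},
        },
    }

req = build_chat_request("hello")
print(req["url"])  # https://openrouter.ai/api/v1/chat/completions
```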

Fairblock & Mind Network

Partnerships demonstrating GPU TEE as a second security layer for FHE and MPC systems, unifying secure key generation, computation integrity, and attestation.

Vana Network

Integration with Vana’s user-owned data platform to ensure secure, verifiable data usage while enabling individuals to monetize data for decentralized AI.

OODA AI

Partnership with Nasdaq-listed OODA AI to deliver scalable, privacy-preserving, and verifiable AI inference and Safe AGI infrastructure using TEE-secured GPUs.

Sahara AI

Collaboration powering on-chain AI data ownership and value sharing among contributors, developers, and model builders.

Neurochain AI

Providing GPU TEE infrastructure for scalable, secure, and cost-efficient AI development and machine learning workloads.

Mantis

Partnership enabling Mantis AI agents—including LLM DISE workloads—to run on Phala TEEs, with upcoming NVIDIA GPU TEE support.

zkVerify

Integration of zkVerify as a universal proof verification layer, reducing costs, accelerating TEE attestations, and enhancing privacy.

ChainGPT

Partnership bringing ChainGPT’s open-source SolidityLLM to Phala Cloud, enabling private, trustless smart contract development with verifiable execution.

Autonomys

Collaboration combining Phala’s confidential compute with Autonomys’ decentralized storage and execution to power verifiable AI agents and cross-chain super dApps.

CARV

Partnership merging Phala’s TEE-powered cloud with CARV’s decentralized AI ecosystem to build scalable, privacy-first AI and data frameworks.

Mira Network

Strategic partnership where Mira serves as an official model provider, delivering verifiable LLMs and trustless inference to elizaOS agents on Phala Cloud.

GoPlus Security

Collaboration to develop AI model auditing and security frameworks that strengthen trust and verifiability for TEE-based systems.

NOTAI

Partnership bringing NOTAI’s AI agents to Phala Cloud with TEE-backed security and scalability.

Streamr

Integration combining real-time decentralized data streams with Phala’s secure compute to power live, privacy-preserving AI agents.

PublicAI

Partnership setting a new standard for ethical AI by combining TEE-secured infrastructure with decentralized data annotation for private, verifiable training.

xNomadAI

Collaboration integrating Phala TEEs into AI-driven NFT infrastructure, enabling agents to become transferable, on-chain assets.

Holoworld AI

Launch partnership where Spore by Phala debuted in Holoworld’s Agent Launchpad, combining TEE-secured autonomous AI evolution with composable agent infrastructure.

GAIB

Partnership combining TEE-secured compute with tokenized GPUs, AI-Fi primitives, and GPU-backed stablecoins to power decentralized AI finance.

Privasea

Collaboration powering Privasea’s DeepSea AI Network with Phala TEEs, while enabling Spore agents to participate in Privasea’s decentralized mining ecosystem.

Developer Tools throughout 2025

In 2025, we focused on removing friction between developers and confidential computing.

  • Templates & Quick Starts
  • CLI & Infrastructure
  • Documentation & Support
  • APIs & Integrations

The result: developers who've never touched confidential computing can confidently deploy their first TEE-secured application in under 10 minutes.


Phala in 2026

2025 proved Phala’s technology works. 2026 is about meeting the market where the demand is, and right now that demand is everywhere. Enterprises need confidential AI that satisfies regulators. Developers need TEE deployment that doesn’t require a PhD. Users need proof their data stays private, not promises. Phala built all three. Now comes scale.
