How a California Law Firm Adopted AI with Phala’s Confidential Compute — Without Losing Client Trust

September 24, 2025
5 min read

In 2025, a mid-sized law firm in California faced a dilemma.

Its partners were eager to use large language models (LLMs) to speed up research and draft discovery motions. But every discussion ended in the same place: What about client confidentiality?

The attorneys weren’t alone. Across the profession, security officers and ethics counsel were sounding the alarm. Legacy cloud providers and mainstream AI platforms offered little transparency about where data went, how it was processed, or whether governments—foreign or domestic—might gain access through back doors. For lawyers bound by California’s Rules of Professional Conduct, particularly Rule 1.6 (confidentiality) and Rule 5.3 (supervision of technology providers), this was more than an IT problem. It was a professional risk.

The firm was caught in a bind:

  • ChatGPT and similar services were powerful, but opaque. Submitting sensitive client information to a black-box service risked discovery exposure or worse.
  • Dedicated GPU servers promised control, but at unsustainable cost. Running a model full-time meant paying for idle machines.
  • Compliance officers wanted assurance that data wouldn’t be “trained on” or stored in ways beyond their control. No provider could offer proof.

Adopt AI and risk betraying client trust, or forgo it and fall behind peers: the firm needed a path that delivered the benefits of AI without violating its duty of confidentiality.


The Turning Point: Confidential AI on Demand

That changed when the firm began piloting a confidential AI platform built on trusted execution environments (TEEs): hardware-secured enclaves in which data remains encrypted even while it is being processed. Unlike legacy platforms, this system offered cryptographic attestation: a signed, hardware-rooted report proving exactly which code was handling the firm's briefs, depositions, and client communications, and that no one, not even the cloud operator, could read them.
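
The core of that attestation check can be sketched in a few lines. This is a greatly simplified illustration, not Phala's actual protocol: real attestation also verifies a hardware-signed quote against vendor certificate chains. The measurement values and build names here are hypothetical.

```python
import hashlib

# Hypothetical allowlist of enclave code measurements the firm has approved.
# In a real TEE flow, these come from reproducible builds of the model server.
APPROVED_MEASUREMENTS = {
    hashlib.sha256(b"model-server-v1.4.2").hexdigest(),
}

def verify_attestation(report: dict) -> bool:
    """Accept a (simplified) attestation report only if the enclave's
    code measurement matches a build the firm has approved.

    Real attestation also checks a hardware-rooted signature chain
    (e.g. CPU/GPU vendor certificates); that step is omitted here.
    """
    return report.get("measurement") in APPROVED_MEASUREMENTS

good = {"measurement": hashlib.sha256(b"model-server-v1.4.2").hexdigest()}
bad = {"measurement": "deadbeef"}
print(verify_attestation(good), verify_attestation(bad))  # True False
```

The point of the check is that trust rests on what code is running, verified cryptographically, rather than on the provider's promises.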

Equally important, the firm could spin up a dedicated model instance by the hour. For roughly the cost of a junior associate's coffee run, the attorneys could rent a high-end LLM for an afternoon of discovery review, then shut it down. No token-based surprises. No idle hardware fees. Just predictable, controllable cost.

Instead of a shared AI service, the firm gained access to a dedicated large language model instance running on a TEE-enabled GPU server. This meant:

  • All client data stayed encrypted end-to-end, even during processing.
  • The firm received cryptographic proof (attestation) that no one—including the cloud provider—could see or use the data.
  • A private API and web interface let attorneys integrate the AI into existing workflows with minimal change.
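
For attorneys' tools, the private API looks like any other LLM endpoint, just one that resolves only to the firm's dedicated instance. A minimal sketch, assuming an OpenAI-compatible chat API; the URL, key, and model name below are placeholders, not real Phala endpoints:

```python
import json
import urllib.request

# Hypothetical endpoint and credential for the firm's dedicated instance.
API_URL = "https://llm.firm-internal.example/v1/chat/completions"
API_KEY = "sk-example"  # placeholder, not a real key

def build_review_request(document_text: str) -> urllib.request.Request:
    """Package a document-review prompt for the firm's private model.

    The request targets only the dedicated TEE-backed instance, never a
    shared public service, so client material stays inside the firm's
    control path.
    """
    payload = {
        "model": "private-llm",
        "messages": [
            {"role": "system",
             "content": "Summarize the key facts and dates in this filing."},
            {"role": "user", "content": document_text},
        ],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_review_request("Plaintiff filed the complaint on March 3, 2025 ...")
# Ready to send with urllib.request.urlopen(req) from inside the firm's
# network; no call is actually made in this sketch.
```

Because the interface mirrors familiar LLM APIs, existing practice-management tools need only a changed base URL and key, which is what "minimal change" means in practice.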

And for large projects, such as reviewing thousands of pages of discovery documents, the same hourly model applied: spin up a powerful instance, use it intensively, then shut it down, keeping costs predictable instead of racking up open-ended token bills or paying for idle hardware.
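
The economics are easy to check with back-of-the-envelope arithmetic. All the rates and volumes below are assumptions for illustration; actual prices vary by provider and model:

```python
# Assumed prices (illustrative only).
HOURLY_RATE = 4.00        # $/hr for a dedicated TEE GPU instance
TOKEN_RATE = 10.00 / 1e6  # $/token for a metered API ($10 per 1M tokens)

def hourly_cost(hours: float) -> float:
    """Cost of renting a dedicated instance for a fixed number of hours."""
    return hours * HOURLY_RATE

def token_cost(tokens: int) -> float:
    """Cost of the same work billed per token on a metered API."""
    return tokens * TOKEN_RATE

# An afternoon of discovery review: 4 hours, ~5,000 pages at roughly
# 600 tokens per page, doubled to account for input plus output tokens.
hours = 4
tokens = 5_000 * 600 * 2

print(f"hourly:  ${hourly_cost(hours):.2f}")   # known before the work starts
print(f"metered: ${token_cost(tokens):.2f}")   # grows with document volume
```

The key property is not which number is smaller on a given day but that the hourly figure is fixed in advance, which matches how firms already budget time and expenses.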

Results at a Glance

  • 100% compliance with California confidentiality rules.
  • Significant cost savings, thanks to hourly billing vs. token pricing.
  • Faster turnaround on research and document review tasks.
  • Peace of mind for partners, knowing client trust was never compromised.

Why It Matters for the Profession

For this firm, the decision wasn’t just about technology. It was about upholding the most important promise a lawyer makes: to protect client trust.

By adopting confidential AI, the firm showed that law practices can embrace innovation without breaking professional obligations. Other firms have since taken notice—proof that this approach is more than an experiment. It’s becoming the standard for how the legal profession will use AI responsibly.

The pilot gave the California firm confidence to use AI without crossing ethical lines:

  • End-to-end confidentiality preserved attorney–client privilege.
  • Predictable billing matched how law firms already track time and expenses.
  • Dedicated, isolated environments eliminated the risk of data commingling across tenants.
  • Audit-ready proof showed compliance officers that data was never exposed.

For the first time, the firm’s general counsel could tell partners: “Yes, we can use AI—and no, it won’t compromise our obligations.”

Word spread quickly. Other firms in the region, facing the same pressures, began asking how they could join. Legal tech startups saw an opening too: if law firms could rely on confidential AI, new practice-management and research tools could emerge without fear of client data leakage.

The story here is not about one vendor’s features. It’s about the profession shifting from “Can we use AI?” to “How do we use it responsibly?” California firms are showing that confidentiality, compliance, and innovation don’t have to be mutually exclusive.

A Safer Way Forward

As regulators and courts catch up with the rapid advance of AI, the bar is clear: client trust cannot be compromised. The question for law firms is whether they’ll wait on the sidelines or adopt technologies that allow them to lead.

Confidential AI—rented securely by the hour, verifiable by design—may not just be a technical solution. It may be the bridge that lets lawyers embrace the future without breaking the promises that define their profession.
