
Picture this scenario: You’re standing before a federal judge at 9:00 AM, explaining why your brief cites three cases that don’t exist. Opposing counsel is smirking. Your client looks confused and angry. You’re facing a $5,000 sanction and professional embarrassment because you trusted ChatGPT to do your legal research.
This nightmare became reality for two New York attorneys in Mata v. Avianca[1], and it happened again to lawyers in South Florida and California throughout 2025[2]. Every lawyer who uses AI for client work should be asking a critical question: “Could this happen to me?”
The answer depends on understanding a fundamental tension in modern legal practice. Artificial intelligence tools like ChatGPT and Google NotebookLM promise unprecedented productivity gains, but they create serious compliance risks under California Rule 1.6 and Texas Rule 1.05. For lawyers handling confidential client information, this creates an impossible choice: embrace AI and risk your license, or avoid AI and fall behind competitors who are working twice as fast.
There is a third option: confidential AI built specifically for professionals who cannot compromise on privacy.
The Compliance Crisis: What California and Texas Rules Actually Require
Both California and Texas have clear, unambiguous rules about client confidentiality. Understanding these rules is the first step toward using AI ethically and effectively.
California Rule 1.6 and Business & Professions Code 6068(e)
California’s treatment of lawyer-client confidentiality is unique and notably stricter than that of other jurisdictions[3]. California Rule 1.6 states that “a lawyer shall not reveal information protected from disclosure by Business and Professions Code section 6068, subdivision (e)(1) unless the client gives informed consent.”[4]
The California State Bar has issued explicit guidance on AI use. Their Generative AI Practical Guidance document warns that “a lawyer must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections.”[5] This isn’t a suggestion—it’s a mandate backed by disciplinary authority.
In August 2025, the Daily Journal reported that the California State Bar’s guidance “warns lawyers not to input confidential client information into AI tools lacking adequate confidentiality and security protections.”[6] The message is clear: if you cannot verify that an AI tool protects client confidentiality, you cannot use it for client work.
Texas Rule 1.05
Texas takes an equally firm stance through Rule 1.05 of the Texas Disciplinary Rules of Professional Conduct. This rule defines confidential information broadly to include “both privileged information and unprivileged client information.”[7] The rule prohibits lawyers from knowingly revealing confidential information except in very limited circumstances.
Texas Bar Opinion 705 addresses AI use directly, requiring lawyers to ensure AI tools have adequate security measures before uploading confidential information[8]. The Texas Bar Blog emphasized in August 2025 that “Texas require[s] that attorneys never share confidential client information with unsecured or unvetted AI tools.”[9]
What “Confidential Information” Actually Means
Both jurisdictions define confidential information expansively. It includes not just attorney-client privileged communications, but also:
- Client names and case details
- Legal strategies and work product
- Settlement negotiations
- Financial information
- Any information relating to the representation
This means that uploading a contract for AI review, asking AI to draft a motion, or using AI to summarize deposition transcripts all involve confidential information. If the AI tool stores that data, uses it for training, or lacks adequate security, you’ve violated your confidentiality obligations.
Why ChatGPT and Google NotebookLM Create Compliance Risks
The most popular AI tools were not designed with lawyer confidentiality requirements in mind. Understanding their limitations is essential for ethical practice.
The Data Retention Problem
ChatGPT’s Terms of Service state that OpenAI “may use your content to train our models.”[10] On consumer plans, unless you affirmatively opt out, every contract you upload, every client question you ask, and every legal strategy you discuss can become training data for OpenAI’s next model version, and it remains stored on OpenAI’s servers.
For California and Texas lawyers, this creates an immediate Rule 1.6 and Rule 1.05 violation. You are revealing confidential client information to a third party (OpenAI) without informed consent, and that information is being used for purposes unrelated to your client’s representation.
Google NotebookLM presents similar issues. While Google’s privacy policies are more transparent than some competitors, the fundamental problem remains: your data is stored on Google’s servers, processed by Google’s systems, and subject to Google’s security protocols. Unless you have a Business Associate Agreement or equivalent legal protection, you’re exposing client confidential information to unauthorized disclosure.
The Hallucination Problem
Beyond confidentiality, there’s the accuracy problem that led to the Mata v. Avianca sanctions. ChatGPT and similar large language models don’t retrieve information—they predict what words should come next based on statistical patterns in their training data. This means they can confidently cite cases that don’t exist, misquote statutes, and fabricate legal principles.
In Mata v. Avianca, ChatGPT invented cases like “Varghese v. China Southern Airlines” with complete citations, case summaries, and even fake quotes from judges[11]. The lawyers submitted these citations to federal court without verification. The result: $5,000 in sanctions, professional embarrassment, and a cautionary tale that every lawyer now knows.
This happened again in 2025. A South Florida lawyer was sanctioned for similar conduct, as was a California attorney[12]. The pattern is clear: lawyers are using ChatGPT for legal work, trusting its output without verification, and facing professional consequences.
The Security Question
Even if you’re willing to accept the data retention and hallucination risks, there’s the security question. Anthropic, maker of Claude AI, recently released a legal plugin for its Cowork product. Anthropic’s own safety documentation warns that “the chances of an attack are still non-zero” and “strongly advise[s] against using Claude in Chrome to manage or take actions involving sensitive information.”[13]
If Anthropic—a company focused on AI safety—cannot guarantee security for sensitive information, what does that say about other AI tools? For lawyers bound by confidentiality rules, “non-zero” risk of a security breach is unacceptable when handling client data.
The Productivity Imperative: Why Lawyers Can’t Avoid AI
Despite these compliance risks, lawyers cannot simply avoid AI tools. The productivity gains are too significant, and clients increasingly expect AI-enhanced efficiency.
The 11 PM Scenario
Consider a common situation: It’s 11:00 PM on Sunday night. You’re facing 300 pages of deposition transcripts, and your motion for summary judgment is due tomorrow morning. You need to find every instance where the defendant contradicted their sworn testimony. You need to identify which exhibits support your argument. You need to draft a coherent narrative that ties everything together.
Without AI, this is an all-nighter. You’ll spend six hours reading transcripts, highlighting passages, cross-referencing exhibits, and drafting your motion. You’ll be exhausted, you might miss key details, and you’ll resent the profession you chose.
With AI, this is two hours of focused work. You upload the transcripts, ask targeted questions, get instant answers grounded in the actual testimony, and spend your time on legal analysis rather than document review. You’re in bed by 1:00 AM, and your motion is better than it would have been after six hours of manual work.
This productivity difference compounds over time. Lawyers using AI effectively can handle more cases, provide faster responses to clients, and spend more time on high-value legal strategy rather than low-value document review. Solo practitioners and small firms, in particular, gain competitive advantages that were previously available only to large firms with armies of associates.
Client Expectations
Clients are also driving AI adoption. Corporate clients increasingly ask whether their outside counsel uses AI for efficiency. They want to know if they’re paying for manual work that could be automated. They expect faster turnaround times and lower bills.
Clients who work in tech-forward industries are especially attuned to this. They use AI in their own businesses, and they expect their lawyers to do the same. A lawyer who says “I don’t use AI because of compliance concerns” may sound principled, but to the client, it sounds like “I’m going to work slower and charge you more.”
Competitive Pressure
Finally, there’s competitive pressure from other lawyers. If your opposing counsel is using AI to draft motions in two hours while you’re spending six hours, they have a four-hour advantage to spend on strategy, research, or client development. Over the course of a year, that advantage compounds into hundreds of hours of additional productivity.
Lawyers who avoid AI entirely will find themselves at a competitive disadvantage. They’ll work longer hours, handle fewer cases, and struggle to compete on price and turnaround time. For solo practitioners and small firms without large support staffs, this disadvantage can be existential.
The Solution: Confidential AI Built for Professionals
The good news is that the choice between compliance and productivity is a false dichotomy. Confidential AI systems solve both problems simultaneously by fundamentally changing how AI handles your data.
What Makes AI “Confidential”?
Confidential AI uses privacy-preserving architectures that ensure your data never leaves a secure computing environment. The core technology is called Trusted Execution Environments (TEEs)—hardware-based security measures that create isolated computing spaces where even the AI provider cannot access your content.
Think of TEEs as a secure vault inside the computer processor itself. When you upload a document or ask a question, that data enters the TEE and is processed entirely within that secure environment. The AI provider’s servers never see your raw data. The AI model never trains on your information. Your confidential client data remains confidential.
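To make the vault analogy concrete, here is a minimal Python sketch of what verifying a TEE before uploading a document could look like. The endpoint, response fields, and expected measurement are hypothetical placeholders invented for this illustration, not any vendor’s real API; production platforms expose remote attestation through their own SDKs and cryptographically signed hardware reports.

```python
import requests

# Hypothetical endpoint and measurement, for illustration only. A real
# provider publishes these alongside its independent security audits.
ATTESTATION_URL = "https://confidential-ai.example.com/v1/attestation"
UPLOAD_URL = "https://confidential-ai.example.com/v1/documents"
EXPECTED_MEASUREMENT = "9f2b...hash-of-the-audited-enclave-build...c41d"

def enclave_is_trustworthy() -> bool:
    """Fetch the enclave's attestation report and check that its code
    measurement matches the published, independently audited build."""
    report = requests.get(ATTESTATION_URL, timeout=10).json()
    # A production client would also verify the report's signature chain
    # back to the CPU vendor; this sketch checks only the measurement.
    return report.get("measurement") == EXPECTED_MEASUREMENT

def upload_document(path: str) -> None:
    """Refuse to transmit client data unless attestation succeeds."""
    if not enclave_is_trustworthy():
        raise RuntimeError("Attestation failed; do not send client data.")
    with open(path, "rb") as f:
        requests.post(UPLOAD_URL, files={"file": f}, timeout=30)
```

The design choice worth noticing is that trust rests on verifying code rather than trusting a company: if the measurement does not match the audited build, no document leaves your machine.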
This confidential computing infrastructure is specifically designed for professionals who handle sensitive information—lawyers, healthcare providers, financial advisors, and consultants. It’s not a marketing claim; it’s a technical architecture verified by independent security audits.
For California and Texas lawyers worried about Rule 1.6 and Rule 1.05 compliance, confidential AI systems provide a clear answer: client confidential information stays protected throughout your AI interaction because the system is designed from the ground up to prevent unauthorized disclosure.
Document-Grounded AI: Solving the Hallucination Problem
Confidential AI also solves the hallucination crisis through a technique called document-grounding. Instead of synthesizing answers from vast training datasets (which leads to fabrication), document-grounded systems retrieve information exclusively from files you upload.
Here’s how it works in practice: You upload your 300 pages of deposition transcripts. You ask, “What did the defendant say about the contract terms in Question 47?” The AI searches only those transcripts, finds the exact testimony, and quotes it back to you with page and line citations. It doesn’t invent testimony. It doesn’t hallucinate quotes. It retrieves what’s actually in your documents.
This is fundamentally different from ChatGPT’s approach. ChatGPT tries to answer every question based on its training data, which leads to confident fabrication when it doesn’t know the answer. Document-grounded AI only answers based on your uploaded files, which eliminates the hallucination problem for document-based legal work.
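The retrieval step is simple enough to illustrate in a few lines. The Python sketch below uses TF-IDF similarity from scikit-learn in place of the semantic embeddings a production system would use, and the transcript snippets and page labels are invented; the point is only that answers are assembled from retrieved passages, never from training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented snippets standing in for pages of an uploaded transcript.
pages = {
    "p. 112": "Q: Did you sign the contract? A: I never saw the contract.",
    "p. 245": "Q: Who drafted the contract terms? A: I drafted them myself.",
    "p. 301": "Q: When did delivery occur? A: Sometime in March, I believe.",
}

def retrieve(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Return the passages most similar to the question, with citations.
    Everything quoted comes from the uploaded pages, so there is
    nothing for the system to hallucinate."""
    labels, texts = list(pages.keys()), list(pages.values())
    matrix = TfidfVectorizer().fit_transform(texts + [question])
    scores = cosine_similarity(matrix[-1:], matrix[:-1]).ravel()
    top = scores.argsort()[::-1][:top_k]
    return [(labels[i], texts[i]) for i in top]

for cite, passage in retrieve("What did the defendant say about the contract?"):
    print(f"[{cite}] {passage}")
```

A production pipeline would hand the retrieved passages, citations attached, to a language model instructed to answer only from that material.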
For lawyers, this means you can trust the AI’s output because it’s grounded in your actual case files. You still need to verify and exercise professional judgment, but you’re not dealing with fake citations or invented facts. The AI becomes a research assistant that actually reads your documents and finds relevant passages—exactly what you need at 11:00 PM on Sunday night.
Zero Retention: Data That Disappears
The final piece of confidential AI is zero retention. Unlike ChatGPT, which stores your conversations indefinitely, confidential AI systems delete your data after your session ends. Your conversations disappear. Your uploaded documents are removed from the system. There’s no persistent storage of client confidential information.
This zero-retention architecture directly addresses California and Texas confidentiality requirements. You’re not “revealing” confidential information in any meaningful sense because the information is never stored, never used for training, and never accessible to anyone other than you during your active session.
For lawyers concerned about data breaches, this provides an additional layer of protection. Even if an attacker somehow compromised the system, there’s no stored client data to steal. The data exists only during your active use and is cryptographically erased afterward.
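“Cryptographically erased” has a precise meaning that a short sketch can capture: encrypt everything in the session under a key that exists only for that session, then destroy the key when the session ends. The Python class below is purely illustrative, built on the widely used cryptography library, and assumes nothing about any vendor’s actual implementation.

```python
from cryptography.fernet import Fernet

class EphemeralSession:
    """Illustrative only: all session data is encrypted under a
    per-session key that is destroyed on exit."""

    def __enter__(self):
        self._key = Fernet.generate_key()  # exists only in memory
        self._box = Fernet(self._key)
        self._store: list[bytes] = []      # holds ciphertext only
        return self

    def add_document(self, data: bytes) -> None:
        self._store.append(self._box.encrypt(data))

    def read_all(self) -> list[bytes]:
        return [self._box.decrypt(blob) for blob in self._store]

    def __exit__(self, *exc) -> bool:
        # Destroying the key is the erasure: without it, the leftover
        # ciphertext can never be decrypted again, by anyone.
        del self._box, self._key
        self._store.clear()
        return False

with EphemeralSession() as session:
    session.add_document(b"Deposition of J. Doe, p. 47 ...")
    print(session.read_all())  # readable only while the session lives
```

In a real TEE deployment the session key lives only inside the secure enclave, so even a complete copy of the provider’s storage yields nothing readable once the session ends.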
RedPill: Confidential AI Purpose-Built for Lawyers
RedPill represents the practical implementation of these confidential AI principles. Built on Phala Network’s confidential computing infrastructure, RedPill offers lawyers a secure alternative to ChatGPT and Google NotebookLM, with document-grounded responses that eliminate hallucinations while protecting client confidentiality.
The Knowledge Feature: Document-Grounded Research
RedPill’s Knowledge feature allows you to upload case documents and receive answers grounded only in those files. This is the 11:00 PM solution: upload your deposition transcripts, contracts, or case law research, and ask questions that get answered from your actual documents rather than from the AI’s training data.
The system maintains the same conversational interface you’re familiar with from ChatGPT, but with a critical difference: every answer is tied to specific passages in your uploaded documents. You can verify the AI’s responses by checking the source citations. You’re not trusting the AI to “know” the law—you’re using the AI to search and analyze documents you’ve provided.
For solo practitioners and small firms, this levels the playing field with large firms. You get document review capabilities that previously required multiple associates, but without compromising client confidentiality or risking hallucinated citations.
Zero Retention and Confidential Computing
RedPill implements zero retention through Phala Network’s Trusted Execution Environments. Your conversations are processed in secure enclaves where even Phala Network cannot access your content. After your session ends, your data is cryptographically erased. There’s no training on your data, no persistent storage, and no unauthorized access.
This architecture directly satisfies California Rule 1.6 and Texas Rule 1.05 requirements. You’re using a secure technology that protects client confidential information through hardware-level security measures. You’re not revealing information to third parties because the confidential computing environment prevents even the service provider from accessing your data.
The $15 Difference: Compliance as a Feature
RedPill costs $35 per month compared to ChatGPT’s $20 per month. That $15 difference represents your compliance premium—the cost of using AI without violating professional responsibility rules.
Consider the alternative costs: a single malpractice claim can cost tens of thousands of dollars in legal fees and increased insurance premiums. A disciplinary complaint can cost thousands in legal representation and damage to your professional reputation. And the $5,000 sanction in Mata v. Avianca would buy nearly twelve years of RedPill subscriptions: $35 per month is $420 per year, and $5,000 ÷ $420 ≈ 11.9 years.
The $15 monthly difference is your malpractice insurance for AI use. It’s the cost of sleeping well at night knowing you’re not violating client confidentiality or risking sanctions for fake citations.
Practical Guidance: How to Use AI Ethically in California and Texas
Whether you choose RedPill or another confidential AI solution, here are the key principles for ethical AI use under California and Texas rules.
Evaluate AI Tools for Confidentiality Protections
Before using any AI tool for client work, ask these questions:
- Where is my data stored? If the answer is “on the provider’s servers indefinitely,” that’s a red flag for Rule 1.6 and Rule 1.05 compliance.
- Is my data used for training? If yes, you’re revealing confidential information to unauthorized third parties.
- What security measures protect my data? Look for specific technical measures like encryption, access controls, and confidential computing.
- Can I delete my data? If the provider retains your data even after you request deletion, you’ve lost control of client confidential information.
If you cannot get satisfactory answers to these questions, you cannot use the tool for client work without informed client consent.
Obtain Informed Consent When Necessary
California and Texas both allow disclosure of confidential information with informed client consent. If you want to use an AI tool that doesn’t meet confidentiality standards, you can obtain written consent from your client explaining:
- What AI tool you’ll use
- What data will be shared with the AI provider
- What security measures the provider uses
- What risks exist (data breaches, unauthorized access, training on client data)
- What alternatives exist (confidential AI, manual work)
Most clients will decline consent once they understand the risks. But obtaining informed consent shifts the ethical responsibility and demonstrates your compliance with professional rules.
Never Trust AI Output Without Verification
Even with confidential, document-grounded AI, you must verify the output. Rule 1.01 (Texas) and Rule 1.1 (California) require competence, which includes understanding the technology you use and exercising independent professional judgment.
This means:
- Check every citation before including it in a brief
- Verify factual claims against primary sources
- Review AI-drafted documents for accuracy and appropriateness
- Exercise professional judgment about legal strategy and analysis
AI is a tool, not a replacement for lawyer judgment. You remain responsible for the work product, even if AI helped create it.
Stay Current on Ethics Guidance
Both California and Texas continue to issue guidance on AI use. The California State Bar’s Generative AI Practical Guidance was released in 2024[14]. Texas Bar Opinion 705 addresses AI and confidentiality[15]. Additional guidance will emerge as AI technology evolves.
Competent practice requires staying informed about these developments. Subscribe to bar association updates, attend CLE programs on legal technology ethics, and participate in professional discussions about AI use.
The Future of Legal Practice: Compliance and Productivity Together
The tension between compliance and productivity is not unique to AI. Lawyers have always faced pressure to work efficiently while maintaining ethical standards. What’s different about AI is the magnitude of both the opportunity and the risk.
The opportunity is unprecedented productivity gains. AI can handle document review, legal research, drafting, and analysis tasks that currently consume hundreds of hours per year. Lawyers who use AI effectively can serve more clients, provide faster service, and focus on high-value strategic work.
The risk is equally significant. Violating client confidentiality can end careers. Getting sanctioned for fake citations can destroy reputations. Using insecure technology can expose clients to data breaches and liability.
Confidential AI resolves this tension by making compliance a feature rather than a constraint. You don’t choose between productivity and ethics—you get both through technology designed specifically for professionals who cannot compromise on confidentiality.
For California and Texas lawyers, this means you can embrace AI without fear of disciplinary complaints. You can work faster without working recklessly. You can compete with tech-forward firms without violating professional responsibility rules.
The choice is no longer between compliance and productivity. The choice is between AI tools that ignore your professional obligations and AI tools that respect them.
Conclusion: The Path Forward
If you’re a lawyer reading this article, you’re likely in one of three positions:
Position 1: You’re already using ChatGPT for client work. You know you’re probably violating Rule 1.6 or Rule 1.05, but you haven’t been caught yet, and the productivity gains are too valuable to give up. You rationalize that “everyone does it” and hope the bar association doesn’t find out.
Position 2: You’re avoiding AI entirely. You’ve read about the Mata v. Avianca sanctions and the confidentiality risks, and you’ve decided it’s not worth it. You’re working longer hours than your competitors, but you’re sleeping well at night knowing you’re compliant.
Position 3: You’re looking for a better option. You understand both the opportunity and the risk, and you’re searching for AI tools that let you have both productivity and compliance.
If you’re in Position 1, you’re playing Russian roulette with your license. Eventually, a disciplinary complaint will happen—either from a data breach, a client complaint, or a sanctions hearing like Mata v. Avianca. The question isn’t whether you’ll face consequences, but when.
If you’re in Position 2, you’re making the ethical choice, but you’re sacrificing competitive advantage. Your competitors are working faster, serving more clients, and building practices that will be hard to compete with in five years.
If you’re in Position 3, confidential AI is your answer. Tools like RedPill, built on Phala Network’s confidential computing infrastructure, let you embrace AI productivity without compromising on California Rule 1.6 or Texas Rule 1.05 compliance.
The future of legal practice includes AI. The question is whether you’ll use AI that respects your professional obligations or AI that puts your license at risk.
For lawyers who refuse to choose between compliance and productivity, confidential AI is not just a better option—it’s the only option.
Learn More
RedPill is built on Phala’s confidential computing infrastructure and offers lawyers a secure alternative to ChatGPT and Google NotebookLM. With zero-retention architecture, document-grounded responses, and Trusted Execution Environments, RedPill eliminates the compliance risks that make traditional AI tools unsuitable for legal work.
Try RedPill free for 30 days: redpill.ai/lawyers
Download the compliance guide: Texas Lawyer’s Guide to AI Compliance
Learn about Phala Network’s confidential computing: phala.network
References
[1] Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) - https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/59/
[2] “After Mata v. Avianca: More Lawyers Sanctioned for ChatGPT Misuse in 2025,” Legal Technology News, 2025
[3] California State Bar, Rule 1.6 Executive Summary - https://www.calbar.ca.gov/Portals/0/documents/rules/Rule_1.6-Exec_Summary-Redline.pdf
[4] California Rules of Professional Conduct, Rule 1.6(a) - https://www.calbar.ca.gov/legal-professionals/rules/rules-professional-conduct/current-rules-professional-conduct/chapter-1-lawyer-client-relationship
[5] California State Bar, “Generative AI Practical Guidance” (2024) - https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf
[6] Daily Journal, “AI chats aren’t privileged: What California lawyers need to do now” (August 22, 2025) - https://dailyjournal.com/articles/387206
[7] Texas Disciplinary Rules of Professional Conduct, Rule 1.05(a) - https://www.texasbar.com/tdrpc/
[8] Texas Bar Opinion 705 - https://www.texasbar.com/ethics-opinions/
[9] Texas Bar Blog, “Ethical AI Integration for Texas Attorneys” (August 22, 2025) - https://blog.texasbar.com/2025/08/articles/guest-blog/ethical-ai-integration-for-texas-attorneys
[10] OpenAI Terms of Service, Section 3(c) - https://openai.com/policies/terms-of-use
[11] Mata v. Avianca, Order on Sanctions (June 22, 2023)
[12] Legal Technology News, “2025 AI Sanctions Roundup” (December 2025)
[13] Anthropic, “Using Cowork Safely” - https://support.claude.com/en/articles/13364135-using-cowork-safely
[14] California State Bar, “Generative AI Practical Guidance” (2024)
[15] Texas Bar Opinion 705 (2024)