Client confidentiality is among the oldest and most fundamental obligations in law. It is also the obligation with which AI is in the most immediate practical tension, because almost all mainstream AI tools work by processing your inputs on third-party infrastructure. This article works through what that means for UK solicitors in 2026: what the obligation actually requires, where AI creates risk, and what compliant AI use looks like.
This is not a theoretical concern. Law firms using cloud AI tools for client work are already taking on confidentiality risk that many have not fully evaluated. The SRA has been clear that it expects firms to have thought through exactly these questions. The answer to "can we use AI for client work?" is yes, but it is a yes with conditions attached.
The Confidentiality Obligation in Plain Terms
Paragraph 6.3 of the SRA Code of Conduct for Solicitors, RELs and RFLs requires you to keep the affairs of current and former clients confidential unless disclosure is required or permitted by law, or the client consents. The obligation is broad, enduring, and has no technology exception.
The duty of confidentiality in legal practice is broader than data protection law. It covers all information about a client's affairs, not just personal data within the meaning of the UK GDPR. It extends to any information provided in circumstances giving rise to a reasonable expectation of confidence, which encompasses substantially everything a client shares with their solicitor.
This means that the question "does GDPR permit us to do this?" is necessary but not sufficient. The professional obligation is an additional layer that operates independently of data protection compliance.
How Cloud AI Creates Confidentiality Risk
When a fee earner pastes a client document, summary, or correspondence into a cloud AI tool, that information is transmitted to the AI provider's servers for processing. The data leaves the firm's controlled environment and is processed by a third party.
This creates confidentiality risk in two principal ways: direct disclosure to the provider, and potential loss of legal professional privilege.
Direct disclosure risk
The AI provider receives and processes confidential client information. Whether this constitutes a breach of paragraph 6.3 depends on whether the disclosure is "required or permitted by law" or the client has consented. A data processing agreement with the AI vendor addresses the DPA 2018 / UK GDPR dimension but does not constitute client consent to the disclosure of their confidential information to a third party. Most clients have not been told that their matter information is being processed by AI tools, let alone consented to it.
The consent question
The clearest way to address the cloud AI confidentiality risk is client consent — explaining to clients how AI is used in their matter and obtaining their agreement. Some firms are beginning to include AI disclosure clauses in retainer letters. This is good practice and creates a documented consent trail, but it is not yet universal.
Legal professional privilege risk
Legal professional privilege (LPP) protects confidential communications between a solicitor and client made for the purpose of giving or receiving legal advice, and confidential documents created for the dominant purpose of litigation. Privilege depends on confidentiality: it can be waived where a privileged communication is voluntarily disclosed to a third party without good reason.
The question of whether submitting privileged material to a cloud AI service constitutes voluntary disclosure to a third party sufficient to waive privilege has not been definitively determined by UK courts. The orthodox professional view — supported by the SRA's approach to confidentiality — is that it should be treated as a risk. The conservative position is to ensure that privileged material is not processed through systems where disclosure to a third-party processor could be characterised as a waiver.
What "Third Party" Means in This Context
There is sometimes confusion about whether an AI vendor operating as a data processor under GDPR is truly a "third party" in the confidentiality and privilege sense. The distinction matters.
Under data protection law, a data processor acts on the data controller's instructions and is subject to GDPR's processor obligations. This is a carefully defined legal relationship that limits what the processor can do with the data.
Under confidentiality law, "third party" is a broader concept. The question is whether the information has moved outside the confidential relationship between solicitor and client. When client matter information is processed by an AI vendor's systems — even under a robust DPA — it has left the solicitor-client relationship and entered a third-party system. The DPA controls how that third party handles the data; it does not undo the fact that the data has been disclosed to them.
The Aggregation Risk
There is a second, less commonly discussed risk: aggregation. When multiple client matters are processed through the same AI tool, and when AI tools log, retain, or analyse inputs (as some do by default), there is a risk that information from different client matters is present in the same system simultaneously.
For most large AI providers with enterprise agreements, robust data segregation reduces this risk. But for firms using consumer or small-business AI tiers — which often have weaker data isolation — the risk that one client's information could, through the system's logging or training processes, be exposed to contexts involving another client is not trivial.
What Compliant AI Use Looks Like
The confidentiality obligation does not prevent AI use. It shapes how AI should be used. The following approaches manage the risk to a defensible level:
Option 1: On-premises or air-gapped AI
An AI system that runs entirely within the firm's infrastructure eliminates the third-party disclosure risk: no data leaves the firm's environment, so there is no disclosure to manage. This is the most defensible position for high-sensitivity work such as M&A advisory, litigation involving sensitive evidence, criminal defence, and particularly sensitive family matters.
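By way of illustration only, here is a minimal sketch of what "no data leaves the firm" looks like in practice, assuming a self-hosted Ollama server running on the firm's own hardware. The endpoint, model name, and prompt are illustrative assumptions, not a recommendation of any particular tool:

```python
# Minimal sketch: querying a locally hosted model so matter text never
# leaves the firm's own infrastructure. Assumes a self-hosted Ollama
# server at localhost:11434 with a model already pulled; the endpoint,
# model name, and prompt below are illustrative assumptions.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # local server, not a cloud API

def summarise_locally(matter_text: str) -> str:
    """Send matter text to the in-house model and return its summary."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3",  # whichever model the firm has deployed
            "prompt": f"Summarise the following document:\n\n{matter_text}",
            "stream": False,    # return a single JSON response
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(summarise_locally("Example, non-confidential placeholder text."))
```

The significant property is the network boundary, not the code: the request resolves to the firm's own machine, so confidential input is never transmitted to an external processor.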
Option 2: Explicit client consent
For firms using cloud AI, the clearest risk mitigation is informed client consent. Retainer letters or engagement terms should explain that AI tools are used in the delivery of services, what data is processed by those tools, which vendors are involved, and what safeguards are in place. Clients should have an opportunity to object or request that AI is not used on their matter.
Option 3: Task-level segregation
For firms that use cloud AI only for tasks that do not involve client-specific confidential information — general legal research, precedent drafting, administrative tasks — the confidentiality risk profile is materially lower. Clear internal policies that specify which tasks AI may be used for, and which require client data to remain within the firm's systems, are a practical middle ground.
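As a sketch of how such a policy could be made operational in tooling, assuming hypothetical task categories and routing rules of the firm's own choosing (a real policy would be defined by the firm's risk function, not in code alone):

```python
# Minimal sketch of enforcing task-level segregation in tooling.
# Task categories and rules below are hypothetical examples.

# Tasks approved for cloud AI: none may involve client-identifying data.
CLOUD_APPROVED_TASKS = {"general_research", "precedent_drafting", "admin"}

# Tasks that must stay on firm-controlled systems.
INTERNAL_ONLY_TASKS = {"matter_summary", "disclosure_review", "client_correspondence"}

def route_task(task: str) -> str:
    """Return which environment a task may run in under the firm's policy."""
    if task in CLOUD_APPROVED_TASKS:
        return "cloud_ai_permitted"
    if task in INTERNAL_ONLY_TASKS:
        return "internal_systems_only"
    # Unlisted tasks default to the restrictive option pending review.
    return "internal_systems_only"

assert route_task("precedent_drafting") == "cloud_ai_permitted"
assert route_task("disclosure_review") == "internal_systems_only"
```

The design point worth noting is the default: a task not yet classified falls to the restrictive route, so new uses of AI are reviewed before client data can reach a cloud tool.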
Option 4: Anonymisation before processing
Anonymising client data before submitting it to a cloud AI can reduce (though not eliminate) the confidentiality risk. Genuinely anonymous data is no longer personal data under GDPR, but anonymisation must be robust: simple name redaction is not sufficient if the remaining content identifies the matter. And the professional confidentiality obligation attaches to all client information, not just personal data — so anonymisation is a risk reduction measure, not a complete solution.
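To make that limitation concrete, here is a minimal sketch of naive name redaction; the names and matter details are invented for illustration. Note how the "anonymised" output still identifies the matter:

```python
# Minimal sketch of naive name redaction, and why it is not enough.
# Names and matter details below are invented for illustration.
import re

KNOWN_NAMES = ["Jane Smith", "Acme Holdings Ltd"]

def redact_names(text: str) -> str:
    """Replace each known name with a placeholder token."""
    for i, name in enumerate(KNOWN_NAMES, start=1):
        text = re.sub(re.escape(name), f"[PARTY_{i}]", text)
    return text

original = (
    "Jane Smith is selling Acme Holdings Ltd, the only widget factory "
    "in Grimsby, for around £40m."
)
print(redact_names(original))
# -> "[PARTY_1] is selling [PARTY_2], the only widget factory in
#     Grimsby, for around £40m."
# The names are gone, but "the only widget factory in Grimsby" still
# identifies the matter. Robust anonymisation must address context,
# not just names.
```

This is why anonymisation needs to be assessed against the whole document: the test is whether the remaining content identifies the client or matter, not whether names have been removed.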
What Your AI Policy Should Address
Every firm using AI should have a written policy that addresses the following minimum points relevant to confidentiality:
- Which AI tools are approved for use and which are not
- Which categories of data may and may not be processed using each tool
- Whether and how client consent is obtained and documented
- The process for handling client requests not to use AI on their matter
- How AI outputs are reviewed and documented in the matter file
- The supervision chain for AI-assisted work product
- How the policy is communicated to all fee earners and updated as tools and regulations evolve
For detailed guidance on drafting an AI policy for a UK law firm, see our AI policy template and guide.