The SRA has not banned AI. It has not endorsed any particular tool. What it has done, through its October 2024 guidance, is something more useful: it has clarified that every professional obligation that applied to solicitors before AI existed applies equally to AI-assisted work. No new rules. No special exemptions. The same standards, applied to new technology.
That sounds straightforward. In practice, it creates real questions that law firms are working through right now. Which tools can be used for which tasks? What happens when client data enters a cloud system? Who is responsible when AI gets something wrong? This article works through each of those questions with specific reference to the SRA Codes and the regulatory guidance that matters.
What the SRA Actually Says About AI
The SRA published its AI guidance in October 2024 following significant industry discussion. The key message was deliberate in its restraint: the SRA is not creating a new regulatory regime for AI. Existing obligations under the SRA Code of Conduct for Solicitors and the SRA Code of Conduct for Firms apply.
This matters because it removes a line of reasoning that some firms had been implicitly relying on: the idea that, because there was no specific AI rule, the existing rules might not fully apply to AI-assisted work. The October 2024 guidance closes that gap explicitly.
The SRA also noted that it would continue to monitor AI developments and may publish further guidance as the technology and its use in legal practice evolve. Firms should not treat the October 2024 statement as the final word: it is a floor, not a ceiling.
The Four SRA Obligations That AI Touches Most
Principle 5: Acting With Integrity
Integrity means your work product is honest and accurate. If AI generates a document with errors, misstatements, or hallucinated references and you submit it without proper review, that is an integrity issue, not merely a technical one. This is not theoretical. The 2023 Mata v. Avianca case in the United States, where a lawyer submitted AI-generated case citations that did not exist, is the cautionary tale the profession now references globally. The cases were invented: they read as plausible, but they were not real. The lesson for UK solicitors: AI generates confident-sounding output that may be entirely wrong.
Principle 4: Acting With Honesty
This applies when communicating with courts, regulators, and clients. If AI drafts a witness summary, a pleading, or a letter of advice, the solicitor signing it is representing that its contents are accurate. There is no AI exemption to the duty not to mislead.
Principle 7: Acting in the Best Interests of Clients
Acting in the client's best interests includes protecting their confidential information. A tool that exposes client data to third-party servers, foreign law enforcement compulsion powers, or model training processes is not acting in the client's best interests — regardless of how useful its output might be.
The Competence Obligation: Code 3.4
Code 3.4 requires solicitors to maintain competence in their areas of practice. The SRA guidance is clear: competence now includes understanding the AI tools you use, their limitations, and the risk of over-relying on their output. A solicitor who uses AI but cannot explain what it did or verify that it did it correctly is not meeting the competence standard.
The Data Sovereignty Problem Most Firms Miss
When a fee earner uploads a client document to a cloud AI tool — ChatGPT, Microsoft Copilot, a legal AI SaaS platform — that document travels to servers outside the firm's control. In many cases, those servers are in the United States. This creates two distinct problems that operate independently of each other.
The first is the UK GDPR problem. Under Article 32 of the UK GDPR, controllers must implement appropriate technical and organisational measures to ensure data security. Using a third-party cloud AI tool to process personal data contained in client documents requires a Data Processing Agreement under Article 28, a lawful basis for processing under Article 6, and adequate security assurances from the processor. Many firms using ad hoc AI tools have none of these in place.
The second is the US CLOUD Act problem. If client data is processed on US-based infrastructure by a US-headquartered company, the Clarifying Lawful Overseas Use of Data (CLOUD) Act of 2018 allows US law enforcement to compel disclosure of that data, regardless of where it physically sits or what UK law says about it. For matters involving sensitive commercial negotiations, litigation strategy, or regulatory investigations, this is a material risk that most fee earners have not considered when reaching for their ChatGPT subscription.
SRA Code 6.3 requires solicitors to keep the affairs of current and former clients confidential unless disclosure is required or permitted by law or the client consents. Uploading client documents to a system where third-party access is technically possible, whether through terms of service, model training, or foreign law enforcement compulsion, sits uncomfortably with this obligation.
Supervision Obligations: Partners Remain Responsible
The supervision framework under the SRA does not change when AI is involved. Partners and supervisors are responsible for the quality and accuracy of work produced under their supervision — whether that work was drafted by a junior associate, a paralegal, or an AI system.
In practice, this means that every AI output used in a client matter must be reviewed by a qualified person who takes professional responsibility for it. The review must be meaningful — not a rubber stamp. A solicitor who approves an AI-drafted letter without reading it properly has not fulfilled their supervision obligation; they have merely created a false paper trail.
Firms should document AI use in matter files: when AI was used, for what task, which tool, and who reviewed the output. This documentation serves two purposes: it creates a defensible record if the work is ever challenged, and it builds institutional knowledge of where AI works well and where it does not in your particular practice area.
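As a concrete illustration, the record can be very simple. The sketch below is one hypothetical shape for such an entry; the field names and example values are illustrative, not drawn from any SRA template, and most firms will capture this in their practice management system rather than in code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    """One matter-file entry recording a single use of an AI tool."""
    matter_ref: str           # firm's internal matter reference
    tool: str                 # e.g. "Lexis+ AI" or "Microsoft 365 Copilot"
    task: str                 # what the tool was asked to do
    reviewing_solicitor: str  # person taking professional responsibility
    review_notes: str = ""    # corrections made, errors caught
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry (all values hypothetical)
record = AIUseRecord(
    matter_ref="M-2025-0142",
    tool="Lexis+ AI",
    task="First-pass chronology from the disclosure bundle",
    reviewing_solicitor="A. Example",
    review_notes="Two dates corrected against source documents",
)
```

Whatever the format, the essential point is that every entry names a qualified reviewer, so the record itself evidences meaningful supervision.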
Three Models for Compliant AI Deployment
The risk profile of AI deployment varies significantly depending on the architecture. Understanding the three main models helps firms match their AI approach to their regulatory obligations.
Model 1: SaaS AI with Data Processing Agreements
Tools like Microsoft 365 Copilot, Harvey AI, or Lexis+ AI operate as cloud services. When properly configured — with UK data residency, a signed Data Processing Agreement, and appropriate access controls — they can be used for certain work. However, they remain subject to the terms of service of a third party, and the firm retains responsibility for ensuring that configuration remains appropriate. They are not appropriate for the most sensitive client matters without careful assessment.
Model 2: Private Cloud with Data Residency
Some providers offer dedicated cloud environments that keep data within a firm's own tenant or within UK/EEA data centres. This reduces but does not eliminate the third-party risk. The question of who can access the infrastructure at the cloud provider level, and under what legal compulsion, remains relevant.
Model 3: On-Premises Air-Gapped AI
The highest-compliance deployment model is on-premises AI with no external network connectivity. All processing happens on hardware within the firm's own building. No data leaves the network. No third-party access is possible. For firms handling the most sensitive client matters — litigation involving commercial secrets, high-value M&A, regulatory investigations — this is the architecture that eliminates the data sovereignty risk entirely.
Nerdster Vault is built on this model. It runs entirely on-premises, with zero internet connectivity required. We demonstrate this at every deployment by unplugging the Ethernet cable mid-demonstration: the system keeps running. For firms where client confidentiality is non-negotiable, that architecture is what compliance actually looks like.
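The "unplug test" can also be run in software. Below is a minimal, hypothetical egress check, not any vendor's actual verification tooling: it attempts outbound TCP connections from the AI host and should find nothing reachable on a properly air-gapped machine. The probed endpoints are illustrative; a real verification would probe far more broadly.

```python
import socket

# Illustrative probe targets only; adapt to your own verification plan.
PROBES = [
    ("1.1.1.1", 443),         # a well-known public IP
    ("8.8.8.8", 53),          # a public DNS resolver
    ("api.openai.com", 443),  # an example external AI endpoint
]

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Timeout, connection refused, no route, or failed DNS lookup
        # all count as unreachable.
        return False

if __name__ == "__main__":
    reachable = [(h, p) for h, p in PROBES if can_reach(h, p)]
    if reachable:
        print("Egress detected; host is NOT air-gapped:", reachable)
    else:
        print("No outbound connectivity on any probed endpoint.")
```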
Pre-Deployment Checklist for Law Firms
Before deploying any AI tool for client-facing work, firms should be able to answer yes to each of the following:
- A data processing agreement is in place with the AI provider (Article 28 UK GDPR)
- Data residency is confirmed as UK or EEA — not just claimed, but contractually guaranteed
- A confidentiality risk assessment has been conducted for the specific use cases planned
- The supervision policy for AI-assisted work is documented and communicated to fee earners
- Fee earners using AI tools have been trained on their obligations and the tool's limitations
- AI use is documented in matter files (tool used, task, reviewing solicitor)
- The tool has been assessed for compliance with SRA Code 6.3 confidentiality obligations
- A review cadence is in place to reassess compliance as tools and regulatory guidance evolve
This is a starting point, not an exhaustive list. Regulated practice areas — financial services, immigration, family — carry additional sector-specific obligations that layer on top of the SRA baseline.
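For firms assessing several tools at once, the checklist translates naturally into a per-tool gating record. The sketch below is one hypothetical way to encode it; the keys and wording are illustrative, not an official SRA or ICO artefact, and should be adapted to the firm's own review process.

```python
# Hypothetical per-tool compliance tracker mirroring the checklist above.
CHECKLIST = {
    "dpa_signed": "Data processing agreement in place (Art. 28 UK GDPR)",
    "residency_contractual": "UK/EEA data residency contractually guaranteed",
    "risk_assessment": "Confidentiality risk assessment for planned use cases",
    "supervision_policy": "AI supervision policy documented and communicated",
    "training_done": "Fee earners trained on obligations and tool limitations",
    "matter_file_logging": "AI use documented in matter files",
    "code_6_3_assessed": "Assessed against SRA Code 6.3 confidentiality duties",
    "review_cadence": "Reassessment scheduled as tools and guidance evolve",
}

def deployment_ready(answers: dict[str, bool]) -> bool:
    """Ready only when every item is a 'yes'; prints what is outstanding."""
    outstanding = [key for key in CHECKLIST if not answers.get(key, False)]
    for key in outstanding:
        print(f"Outstanding: {CHECKLIST[key]}")
    return not outstanding

# Example: a tool with one gap is not ready for client-facing work.
answers = {key: True for key in CHECKLIST}
answers["residency_contractual"] = False
assert deployment_ready(answers) is False
```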