Six months of working with UK professional services firms on AI implementation has surfaced patterns clear enough to be worth writing down. What works. What does not. What we predicted correctly and what we got wrong. This is not a vendor success story; it is an honest account of what AI adoption looks like in practice when it is done thoughtfully in a regulated environment.

The honest version of this account includes the things that did not work as expected, and that part is more useful than the success stories. Anyone can report successes. The failures and surprises are where the actual learning lives.

What Has Actually Worked

Starting narrow and specific

Firms that defined a specific, bounded first use case and implemented it well before expanding performed significantly better than firms that attempted broad AI transformation. A law firm that implemented AI-assisted NDA review for one practice group, measured the results, and then expanded to other document types had 3x the adoption rate at 6 months compared to firms that tried to roll out AI across all practice areas simultaneously. Narrow and deep beats broad and shallow in the first 90 days.

Training people on how to review AI output

The single highest-leverage investment in AI implementation is training people specifically on how to review AI outputs — what to look for, what to question, what can be trusted and what cannot. Firms that invested in this training saw adoption rates 2x higher than those that relied on self-directed learning. The training does not need to be long: 90 minutes of structured guidance on what good AI review looks like for the specific use case and role produced measurable behaviour change.

Solving the governance question before deployment

Every firm that resolved the data governance question (which tools are approved, for what data, under what terms) before go-live had a smoother implementation than those that addressed it afterwards. This was partly because it avoided mid-deployment stops and rework, but also because a clear, documented governance position gave fee earners and staff the confidence to actually use the tools: it removed the uncertainty about whether they were doing the right thing.

Measuring and communicating results

Firms that measured time saved in the first use case deployment and communicated those results internally had significantly higher buy-in for subsequent phases. Seeing that colleagues were saving 2 hours a week on document review was more persuasive than any vendor case study. Internal proof points beat external ones in every case we observed.

AI champions within teams

Every successful implementation had at least one person within the team who was genuinely enthusiastic about AI and willing to learn the tools deeply before the wider rollout. These AI champions became the practical support resource for colleagues — answering the "how do I do X" questions that slow adoption when they go unanswered. Finding and empowering these individuals is more valuable than any formal training programme.

What Has Not Worked

Top-down mandates without enablement

Leadership announcements that "we are now an AI firm" or "all fee earners should use AI tools" without accompanying training, governance frameworks, and practical support reliably produced shadow AI use (people using unapproved tools on personal devices) rather than structured adoption. Mandates without enablement create the worst possible outcome: everyone trying to use AI, nobody using the right tools in the right way.

Treating AI output as final

Across every firm, the implementation errors that created real problems had the same cause: AI output relied upon without adequate review. In one case, an AI-generated legal summary contained a factual error about a statutory deadline that was not caught before it went to a client. The error was not the AI's fault; it was a workflow design failure that allowed the output to bypass review. Every instance we observed was preventable with correct workflow design.

Underestimating change management

The most consistently underestimated element of AI implementation planning is change management. The technology works; adoption is the hard part. Professionals who have been excellent at their jobs for years do not change their working practices simply because a new tool is available. They change when the new tool is demonstrably better, when they have been trained to use it effectively, and when the organisational environment supports the change. A plan that allocates 80% of effort to technology and 20% to change management has the ratio the wrong way round.

Ignoring the sceptics

In every team, there are people who are sceptical of AI. Implementations that ignored these individuals (hoping enthusiasm would outweigh scepticism) consistently underperformed those that engaged sceptics directly. Sceptics often have legitimate concerns — about accuracy, about jobs, about professional standards — that, when addressed, convert them from opponents to engaged users. The sceptics are often the most thorough reviewers of AI output once they adopt, which is exactly the professional culture you want.

What We Got Wrong

Honesty about our own errors is the more uncomfortable part of this account. Two predictions we made at the start of the year turned out to be wrong in instructive ways.

We underestimated how long the data governance conversation would take. In almost every engagement, the data governance question — which tools are approved, for what data, under what terms — took significantly longer to resolve than we anticipated. Not because it was technically complex, but because it required coordination between IT, compliance, legal, and senior leadership who had not previously had this conversation together. The right answer to "how long does data governance take?" is apparently "longer than you think, schedule extra time."

We overestimated how quickly adoption would spread after initial success. The assumption that a successful pilot would generate organic adoption was too optimistic. Successful pilots generated enthusiasm at leadership level. Turning that enthusiasm into consistent daily use across the team required active enablement: follow-up training sessions, regular check-ins, internal case study communication, and continued AI champion support. Success at pilot stage is the beginning of the adoption journey, not the end of it.

The Five Things That Consistently Differentiate Successful Implementations

  1. A defined first use case with clear success metrics established before go-live. Not "use AI more" — a specific task, a specific team, a specific measure of success.
  2. Data governance resolved before any client data touches an AI tool. The conversation might be uncomfortable. Have it before deployment, not after an incident.
  3. At least one trained AI champion per team. Not a theoretical champion. Someone who has been through the tool in depth and is visibly using it.
  4. A structured review process for AI output that is documented. Not assumed. Not informal. Written down, communicated, and followed.
  5. A measurement and communication rhythm in the first 90 days. What did we save last month? How many documents reviewed? What did we learn? Keep the results visible.

None of these are technologically complex. All of them require organisational commitment and follow-through. That is, in the end, what AI implementation comes down to: not whether the AI works — it does — but whether the organisation is ready to change how it works.

If you are building a business case for AI adoption, our analysis of the cost of not using AI in UK professional services provides the productivity numbers and worked examples to support your proposal. For firms in regulated industries, starting with an AI audit helps identify the right starting point and governance requirements before implementation begins.