The FCA has published guidance, discussion papers, and regulatory statements touching on artificial intelligence. What it has not done is produce a single, comprehensive AI rulebook. That gap creates genuine uncertainty for compliance officers, risk functions, and technology teams at FCA-regulated firms who need to make AI decisions now. This article summarises what the FCA has actually said, where obligations are clear, where they are ambiguous, and what a compliance-conscious AI deployment looks like for a UK financial services firm in 2026.

The most important thing to understand is the FCA's stated regulatory philosophy on AI: it is outcomes-based, not technology-prescriptive. The FCA cares about what AI does — what outcomes it produces for clients and what risks it creates for the system — not the specific technology choices firms make. This is good news and bad news. Good: there is no single prohibited AI tool or required AI architecture. Bad: you cannot point to a specific rule and say "we comply" — you have to demonstrate that your outcomes are consistent with regulatory expectations.

The FCA's AI Publications: A Summary

Discussion Paper DP24/2: AI and Machine Learning in Financial Services (2024)

The FCA's primary substantive AI publication to date. A consultation document setting out the FCA's current thinking on model governance, explainability, fairness, data governance, human oversight, and third-party AI dependencies. Not binding rules, but the clearest signal of regulatory direction. The FCA invited feedback and indicated further publications would follow. Key takeaways: firms should manage AI under existing operational risk frameworks (SYSC), outcomes for consumers should be fair and explainable, and human oversight of AI decisions is expected.

AI and Machine Learning: Discussion Paper (joint FCA, Bank of England, and PRA)

A joint paper from the UK's main financial regulators. It reinforces the outcomes-based approach and notes the FCA's view that AI adoption should not come at the cost of consumer protection or market integrity. It emphasises the need for firms to understand the AI systems they deploy, including third-party AI models, and to maintain meaningful human oversight.

Consumer Duty (ongoing)

The Consumer Duty (PS22/9), in force since July 2023 and fully effective since July 2024 when it was extended to closed products, is increasingly relevant to AI use in retail financial services. The Duty requires firms to deliver good outcomes for retail customers — including outcomes from AI-assisted advice, AI-generated communications, AI-driven product recommendations, and AI-supported claims handling. Any AI that produces poor consumer outcomes is a Consumer Duty failure, regardless of the technology involved.

What Existing Rules Apply to AI Right Now

While the FCA develops specific AI rules, existing regulatory requirements apply fully to AI-assisted activities. The following are the most relevant for most FCA-regulated firms.

SYSC: Systems and Controls

SYSC requires firms to have robust management systems, internal controls, and risk management processes. AI introduces specific operational risks that SYSC's existing frameworks must address: model risk (the risk that AI produces incorrect outputs), data risk (the risk that training or input data is biased, incomplete, or compromised), dependency risk (the risk that third-party AI providers fail or change their systems), and cyber risk (the risk that AI systems are compromised).

The practical implication: AI systems used in regulated activities must be within the scope of your risk management framework. Ad hoc AI tool adoption outside any governance structure is a SYSC problem.
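To make that concrete, one way to bring ad hoc AI adoption into a governance structure is a simple system register. The sketch below is illustrative only: the record fields, risk ratings, and the "ExampleAI Ltd" vendor are assumptions, not FCA-mandated categories.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative AI system register entry. The fields are assumptions about
# what a SYSC-aligned inventory might capture, not an FCA template.
@dataclass
class AISystemRecord:
    name: str
    business_owner: str          # accountable function / senior manager
    use_case: str                # which regulated activity it touches
    vendor: str | None           # None for in-house models
    risk_rating: str             # e.g. "high" if client outcomes depend on it
    last_validated: date
    human_oversight: str         # description of the oversight control

registry: list[AISystemRecord] = [
    AISystemRecord(
        name="complaints-triage-llm",
        business_owner="Head of Operations",
        use_case="Initial categorisation of customer complaints",
        vendor="ExampleAI Ltd",            # hypothetical vendor
        risk_rating="high",
        last_validated=date(2025, 11, 1),
        human_oversight="All categorisations reviewed before response sent",
    ),
]

# Governance check: flag anything high-risk that has not been revalidated
# within the (illustrative) one-year policy window.
for record in registry:
    stale = (date.today() - record.last_validated).days > 365
    if record.risk_rating == "high" and stale:
        print(f"REVIEW OVERDUE: {record.name}")
```

Even a register this simple answers the first question a supervisor will ask: what AI do you use, where, and who owns it.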

COBS: Conduct of Business

COBS rules govern how firms communicate with and provide services to clients. AI used in client communications, advice generation, or client-facing processes must produce communications and advice that meet COBS requirements: fair, clear and not misleading (COBS 4.2), suitable for the client (COBS 9), and in the client's best interests (COBS 2.1). The technology producing the output is irrelevant to whether the output meets these standards.

Senior Managers and Certification Regime (SM&CR)

Senior managers are personally accountable for the activities under their prescribed responsibility. Where AI is used in activities within a senior manager's area, that senior manager remains accountable for outcomes. Delegating decision-making to an AI system does not transfer accountability away from the responsible individual. This has significant implications for firms using AI in credit decisions, investment recommendations, or compliance monitoring.

The Five Compliance Priorities for FCA-Regulated Firms

1. Model Risk Governance

AI models should be subject to the same validation, testing, and ongoing monitoring as any other quantitative model used in regulated activities. This means: pre-deployment validation against representative data, ongoing monitoring for performance degradation, documented understanding of model limitations and failure modes, and a clear process for when a model's output is overridden by human judgement.
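As a hedged sketch of the ongoing-monitoring step, assuming a simple accuracy metric: compare live performance against the pre-deployment baseline and escalate when it degrades beyond a policy tolerance. The metric, threshold, and escalation route below are assumptions your own model risk policy would set.

```python
# Minimal performance-degradation check: compare a live metric against the
# baseline recorded at pre-deployment validation. Threshold is illustrative.

BASELINE_ACCURACY = 0.91   # recorded during pre-deployment validation
TOLERANCE = 0.05           # illustrative policy threshold

def check_model_health(live_predictions: list[int], actual_outcomes: list[int]) -> None:
    correct = sum(p == a for p, a in zip(live_predictions, actual_outcomes))
    live_accuracy = correct / len(actual_outcomes)
    if live_accuracy < BASELINE_ACCURACY - TOLERANCE:
        # In practice this would raise an incident in your risk system rather
        # than print -- the point is that degradation has a defined owner.
        print(f"ALERT: live accuracy {live_accuracy:.2f} below tolerance; "
              "route to model risk review and consider human fallback")

check_model_health([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```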

2. Data Governance for AI

AI systems are only as good as their training and input data. Data governance for AI includes: understanding what data the model was trained on and whether it is representative of your client population, monitoring for data bias that could lead to unfair outcomes, ensuring input data is complete, accurate, and appropriately processed, and meeting UK GDPR requirements for personal data used in AI decision-making.
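As an illustration of bias monitoring, the sketch below compares outcome rates across client segments and flags large disparities for investigation. The segments, the rate-ratio test, and the 0.8 cut-off are assumptions chosen for demonstration, not a regulatory standard.

```python
# Illustrative fairness check: flag when the worst-performing segment's
# approval rate falls below 80% of the best-performing segment's rate.
from collections import defaultdict

decisions = [
    {"segment": "A", "approved": True},
    {"segment": "A", "approved": True},
    {"segment": "A", "approved": False},
    {"segment": "B", "approved": True},
    {"segment": "B", "approved": False},
    {"segment": "B", "approved": False},
]

totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [approved, total]
for d in decisions:
    totals[d["segment"]][0] += d["approved"]
    totals[d["segment"]][1] += 1

rates = {seg: approved / total for seg, (approved, total) in totals.items()}
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print(f"Potential disparity in outcome rates: {rates} -- investigate")
```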

3. Human Oversight Design

The FCA expects meaningful human oversight of AI decisions — particularly those with significant consequences for clients. "Meaningful" is the operative word: a human who rubber-stamps AI outputs without understanding or challenging them is not providing oversight in the regulatory sense. Human oversight needs to be designed into the workflow, not added as a checkbox.
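One way to design oversight into the workflow rather than bolt it on: gate high-impact AI outputs so they cannot be finalised without a named reviewer and a substantive rationale. The thresholds and field names below are illustrative assumptions.

```python
# Sketch of oversight built into the workflow: high-impact AI outputs are
# held until a named reviewer records a decision AND a rationale, so
# sign-off without engagement is structurally impossible.
from dataclasses import dataclass

@dataclass
class AIDecision:
    client_id: str
    recommendation: str
    confidence: float
    impact: str  # "low" or "high" -- illustrative impact classification

def finalise(decision: AIDecision, reviewer: str | None = None,
             rationale: str | None = None) -> str:
    if decision.impact == "high" or decision.confidence < 0.7:
        if not reviewer or not rationale or len(rationale.strip()) < 20:
            raise ValueError("High-impact decision requires a reviewer "
                             "and a substantive rationale")
    return f"{decision.recommendation} (reviewed by {reviewer or 'n/a'})"

d = AIDecision("C-1042", "decline credit", confidence=0.64, impact="high")

# A rubber-stamp attempt fails; a documented review succeeds.
try:
    finalise(d, reviewer="J. Smith", rationale="ok")
except ValueError as e:
    print(e)
print(finalise(d, reviewer="J. Smith",
               rationale="Checked affordability data; output consistent with policy"))
```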

4. Third-Party AI Dependencies

Many firms use AI through third-party providers. The FCA's existing operational resilience and outsourcing rules apply to AI providers as they do to any critical third party. This means: due diligence on AI vendors, contractual rights to audit and access data, contingency planning for vendor failure or service change, and understanding of what the vendor does with your data.
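A minimal contingency sketch, assuming a hypothetical vendor client: when the third-party model fails, the request degrades gracefully to a human work queue rather than silently stopping.

```python
# Contingency planning for vendor failure: the wrapper catches provider
# outages and routes work to a manual queue. Both functions are
# hypothetical stand-ins for a real vendor client and case-management system.

def call_vendor_model(request: str) -> str:
    raise TimeoutError("vendor unavailable")   # simulate an outage

def route_to_human_queue(request: str) -> str:
    return f"queued for manual handling: {request}"

def classify_with_fallback(request: str) -> str:
    try:
        return call_vendor_model(request)
    except (TimeoutError, ConnectionError):
        # In practice, also log this as an operational resilience event.
        return route_to_human_queue(request)

print(classify_with_fallback("complaint #8841"))
```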

5. Documentation and Auditability

The FCA expects to be able to understand, audit, and challenge AI systems used in regulated activities. For the AI systems you deploy: document what they do, how they were validated, what their known limitations are, what human oversight processes exist, and what the decision audit trail looks like. If a regulator asks how an AI-influenced decision was reached, you need a credible answer.
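A sketch of what a decision audit trail can look like in practice, assuming an append-only JSON-lines log and an illustrative schema: enough to reconstruct later which model version produced which output from which inputs, and who signed it off.

```python
# Minimal decision audit record. The schema is an illustrative assumption;
# append-only JSON lines keep the trail tamper-evident and easy to query.
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, *, model: str, model_version: str,
                    input_summary: str, output: str, reviewer: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": model_version,   # pin the exact version in use
        "input_summary": input_summary,   # avoid logging raw personal data
        "output": output,
        "reviewer": reviewer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_decision(
    "ai_decisions.jsonl",
    model="suitability-assistant", model_version="2.3.1",
    input_summary="client risk profile + portfolio snapshot",
    output="recommendation: rebalance to moderate-risk allocation",
    reviewer="adviser J. Smith",
)
```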


Further Reading

For a practical comparison of AI deployment models that address data sovereignty requirements, see our guide to private AI vs ChatGPT for business. If your firm handles particularly sensitive client data, air-gapped AI for regulated industries explains the most secure deployment option available. And for a broader view of the regulatory landscape, our breakdown of the EU AI Act and what it means for UK businesses covers how European regulation interacts with FCA obligations.