On 13 March 2026, the EU Council agreed to push back enforcement of high-risk AI system requirements to December 2027 — and August 2028 for AI embedded in regulated products. If you have been tracking EU AI Act compliance timelines, those dates just shifted. Here is what the change actually means, what is still in force today, and what UK businesses should be doing between now and Q4 2026.

The delay is real, but it is not a reprieve. The prohibited practices provisions are already enforceable. General-purpose AI model obligations have applied since August 2025. And organisations that leave documentation and audit work until 2027 will find themselves in exactly the same scramble that GDPR laggards faced in May 2018. The window is open. The question is whether you use it.

Dec 2027: New enforcement date for standalone high-risk AI systems (EU Council, March 2026)
Feb 2025: Prohibited practices already in force, including social scoring and real-time biometric surveillance
Aug 2025: General-purpose AI model transparency and copyright obligations now apply

What the EU Council Actually Changed on 13 March 2026

The EU Council's streamlining agreement on 13 March 2026 has two significant elements that businesses need to understand separately.

The first is the timeline extension for high-risk AI systems listed in Annex III of the Act — systems used in employment decisions, credit scoring, critical infrastructure management, education, law enforcement, and similar areas. The original compliance deadline for these systems was August 2026. Under the amended schedule, standalone high-risk AI applications now have until December 2027. AI components embedded within existing regulated products (medical devices, machinery, vehicles) have until August 2028. This recognises that retrofitting compliance requirements into already-certified hardware products is a longer process than updating a software application.

The second element is a new explicit prohibition. The Council added AI-generated non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) to the list of banned AI applications. This is an addition to the prohibited practices category, which is already in force. These bans apply immediately alongside the existing February 2025 prohibitions.

What this does NOT change

The delay does not affect prohibited practices (in force since February 2025), general-purpose AI model obligations (in force since August 2025), or the requirement to have an AI literacy programme in place for staff working with AI systems. If your organisation has any exposure to these provisions, the delay is irrelevant to your immediate compliance position.

The Full EU AI Act Timeline: Where We Are Now

For UK businesses assessing their exposure, the phased structure of the Act matters more than any single date. Here is the complete picture as of March 2026:

Aug 2024
Act entered into force

The EU AI Act was published in the Official Journal and became law. The 24-month main implementation clock began.

Feb 2025
Prohibited practices enforceable

Bans on social scoring, subliminal manipulation, exploitation of vulnerable groups, most real-time biometric surveillance, and (from March 2026) AI-generated NCII and CSAM. Violations can result in fines up to €35 million or 7% of global annual turnover.
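For context on how that ceiling works: under the Act's penalty structure, the applicable maximum is the higher of the two figures, so for large firms the turnover-based cap dominates. A minimal sketch of the calculation (illustrative only, not legal advice; the function name is ours):

```python
def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for EU AI Act prohibited-practice fines:
    the HIGHER of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A firm with EUR 2bn turnover: 7% (EUR 140m) exceeds the EUR 35m floor.
print(f"{max_prohibited_practice_fine(2_000_000_000):,.0f}")  # 140,000,000
```

For any business with worldwide turnover above roughly €500 million, the 7% figure is the binding limit.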

Aug 2025
GPAI model obligations apply

Providers of general-purpose AI models (large language models, foundation models) must comply with transparency requirements, copyright policy documentation, and — for models with systemic risk — adversarial testing and incident reporting.

Dec 2027
High-risk AI systems (Annex III standalone)

Full compliance required for standalone AI applications in employment, credit, education, critical infrastructure, and law enforcement. This is the deadline extended by the March 2026 Council agreement.

Aug 2028
High-risk AI in regulated products

AI components embedded in existing regulated products (medical devices, vehicles, industrial machinery) must meet Act requirements. Extended to account for existing product certification cycles.
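For internal compliance trackers, the phased milestones above can be encoded as a simple date lookup. A sketch using the dates in this article (where the article gives only a month, the day-of-month values below are illustrative placeholders, not official enforcement dates):

```python
from datetime import date

# Milestone dates from the article; day-of-month values are placeholders
# where only a month is given.
MILESTONES = {
    date(2024, 8, 1): "Act entered into force",
    date(2025, 2, 1): "Prohibited practices enforceable",
    date(2025, 8, 1): "GPAI model obligations apply",
    date(2027, 12, 1): "High-risk AI systems (Annex III standalone)",
    date(2028, 8, 1): "High-risk AI in regulated products",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestones that have already taken effect, in date order."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= as_of]

print(obligations_in_force(date(2026, 3, 13)))
```

As of the March 2026 Council agreement, only the first three milestones have taken effect; the high-risk deadlines are still ahead.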

Does This Apply to UK Businesses?

Brexit did not insulate UK companies from the EU AI Act. The Act's extra-territorial provisions follow the same logic as GDPR: if your AI system is placed on the EU market, used by EU-based operators, or produces outputs that affect people in the EU, the Act applies to you regardless of where your company is incorporated.

This means the Act is directly relevant to any UK business that:

- places AI systems on the EU market;
- provides AI systems used by operators based in the EU; or
- produces AI outputs that affect people in the EU.

For UK-only businesses with no EU-facing AI activity, the Act does not apply directly — but aligning voluntarily is increasingly sensible as UK regulation develops its own shape, and as procurement requirements from larger UK organisations begin to reference EU AI Act compliance standards. We covered the foundational risk landscape in our earlier guide to the EU AI Act for UK businesses, which provides useful background on risk categories.

EU vs UK AI Regulation — Where They Diverge

UK businesses operating across both jurisdictions need to understand that these are materially different regulatory frameworks, not different implementations of the same standard. Compliance with one does not mean compliance with the other.

| Dimension | EU AI Act | UK approach |
|---|---|---|
| Legal framework | Single binding regulation with mandatory requirements | Principles-based, sector-led; no standalone AI Act as of March 2026 |
| Enforcement body | National market surveillance authorities plus the EU AI Office | Existing regulators (ICO, FCA, CQC, SRA) applying AI guidance within existing powers |
| High-risk categories | Defined in Annex III (exhaustive list) | No equivalent list; sector regulators define risk within their remit |
| GPAI model rules | Binding obligations on model providers (Article 53 onwards) | Voluntary frameworks; Frontier AI Safety Institute focuses on the most capable models |
| Fines | Up to €35m or 7% of global turnover for prohibited practices | No dedicated AI fines regime; enforcement via the ICO (GDPR), FCA, and sector regulators |
| Automated decisions | Covered under high-risk provisions for employment and credit | Data (Use and Access) Act 2025 updates UK GDPR automated decision-making rights |
| Copyright and AI | GPAI providers must document a copyright compliance policy | Reports due 18 March 2026 under the Data (Use and Access) Act will shape UK policy |

The UK's principles-based approach offers more flexibility, but it also offers less predictability. Sector regulators are each developing their own AI expectations, which means a firm regulated by the FCA and the SRA simultaneously faces two separate AI compliance frameworks, neither of which is fully harmonised with the other or with the EU. We have covered FCA-specific AI expectations in detail in our guide to FCA AI compliance for financial services.

The UK Copyright and AI Question

One dimension of UK AI regulation that comes to a head in March 2026 specifically is copyright. Under the Data (Use and Access) Act 2025, the UK government is required to publish two reports by 18 March 2026: one on the impact of AI on copyright holders, and one on the transparency of AI training data.

These reports will significantly shape UK policy on whether AI training on copyrighted material is permissible, under what conditions, and what transparency obligations should apply to model providers. For UK businesses that either train AI models or rely on third-party models trained on web-scraped data, the direction these reports take will matter.

The EU AI Act has already addressed this at the model level: GPAI providers must publish a copyright compliance policy and maintain a register of training data. UK rules have not yet reached the same specificity, but the March 2026 reports are likely to signal where they are heading. If you use or deploy AI models commercially, tracking this development is not optional.


Know where your AI sits in the compliance landscape

Our AI Readiness Assessment maps your current AI systems against both EU and UK regulatory frameworks — and tells you what needs attention before 2027.

5 Things to Audit by Q4 2026

The extension to December 2027 is useful runway, but 21 months disappear quickly once you factor in procurement cycles, legal review, technical documentation, and staff training. Organisations that want to be genuinely ready, rather than scrambling in late 2027, should have the following five items addressed before the end of Q4 2026.

Using the Delay Strategically, Not as an Excuse

The history of EU regulatory deadlines is not encouraging. With GDPR, most large organisations spent the first year after the May 2018 enforcement date still conducting their data mapping exercises. The cost of catching up under deadline pressure — in external legal fees, rushed technical work, and emergency board-level attention — was substantially higher than a measured two-year preparation would have been.

The EU AI Act delay to December 2027 gives well-organised businesses a genuine opportunity to do this properly. The organisations in the strongest position at the end of 2027 will be those that used 2026 to get their AI inventory documented, their GPAI supply chain audited, and their high-risk system documentation at least to draft stage, rather than treating the extended deadline as permission not to start.

"A deadline extension is not a compliance holiday. It is an invitation to do the work without the cost of a crisis."

For businesses in regulated UK sectors, the EU AI Act compliance process is also a useful vehicle for getting AI governance in order ahead of tightening domestic requirements. The ICO, FCA, and SRA are each moving in the same direction as the EU Act — towards documented risk assessments, human oversight requirements, and transparency obligations — even if the specific rules differ. A compliance framework built for the EU AI Act will cover most of what UK sector regulators will eventually require.

What to Do This Week

Three actions have the highest immediate value relative to the time they require:

The regulatory landscape for AI is evolving faster than most compliance teams can comfortably track. But the EU AI Act's phased structure means there are defined milestones to plan against. The December 2027 deadline for high-risk AI systems is the most significant, and the time to begin preparing is now — not when the enforcement date is six months away.