On 13 March 2026, the EU Council agreed to push back enforcement of high-risk AI system requirements to December 2027 — and August 2028 for AI embedded in regulated products. If you have been tracking EU AI Act compliance timelines, those dates just shifted. Here is what the change actually means, what is still in force today, and what UK businesses should be doing between now and Q4 2026.
The delay is real, but it is not a reprieve. The prohibited practices provisions are already enforceable. General-purpose AI model obligations have applied since August 2025. And organisations that leave documentation and audit work until 2027 will find themselves in exactly the same scramble that GDPR laggards faced in May 2018. The window is open. The question is whether you use it.
## What the EU Council Actually Changed on 13 March 2026
The EU Council's streamlining agreement on 13 March 2026 has two significant elements that businesses need to understand separately.
The first is the timeline extension for high-risk AI systems listed in Annex III of the Act — systems used in employment decisions, credit scoring, critical infrastructure management, education, law enforcement, and similar areas. The original compliance deadline for these systems was August 2026. Under the amended schedule, standalone high-risk AI applications now have until December 2027. AI components embedded within existing regulated products (medical devices, machinery, vehicles) have until August 2028. This recognises that retrofitting compliance requirements into already-certified hardware products is a longer process than updating a software application.
The second element is a new explicit prohibition. The Council added AI-generated non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) to the list of banned AI applications. This is an addition to the prohibited practices category, which is already in force. These bans apply immediately alongside the existing February 2025 prohibitions.
### What this does NOT change
The delay does not affect prohibited practices (in force since February 2025), general-purpose AI model obligations (in force since August 2025), or the requirement to have an AI literacy programme in place for staff working with AI systems. If your organisation has any exposure to these provisions, the delay is irrelevant to your immediate compliance position.
## The Full EU AI Act Timeline: Where We Are Now
For UK businesses assessing their exposure, the phased structure of the Act matters more than any single date. Here is the complete picture as of March 2026:
- **July–August 2024 — Publication and entry into force.** The EU AI Act was published in the Official Journal and became law. The 24-month main implementation clock began.
- **February 2025 — Prohibited practices in force.** Bans on social scoring, subliminal manipulation, exploitation of vulnerable groups, most real-time biometric surveillance, and (from March 2026) AI-generated NCII and CSAM. Violations can result in fines up to €35 million or 7% of global annual turnover.
- **August 2025 — GPAI obligations in force.** Providers of general-purpose AI models (large language models, foundation models) must comply with transparency requirements, copyright policy documentation, and, for models with systemic risk, adversarial testing and incident reporting.
- **December 2027 — High-risk systems (standalone, Annex III).** Full compliance required for standalone AI applications in employment, credit, education, critical infrastructure, and law enforcement. This is the deadline extended by the March 2026 Council agreement, originally set for August 2026.
- **August 2028 — AI embedded in regulated products.** AI components embedded in existing regulated products (medical devices, vehicles, industrial machinery) must meet Act requirements. Extended to account for existing product certification cycles.
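For planning purposes, the phased deadlines above are easy to keep in a small tracker. Here is a minimal Python sketch; note that the exact days for the December 2027 and August 2028 deadlines are assumptions, since the Council agreement fixes only the months:

```python
from datetime import date

# EU AI Act milestones as described in this article. The exact days for the
# 2027 and 2028 deadlines are assumed (only the month is fixed).
MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_standalone": date(2027, 12, 31),  # assumed day within December 2027
    "high_risk_embedded": date(2028, 8, 1),      # assumed day within August 2028
}

def obligations_in_force(today: date) -> list[str]:
    """Return milestone names whose compliance date has already passed."""
    return [name
            for name, deadline in sorted(MILESTONES.items(), key=lambda kv: kv[1])
            if deadline <= today]

print(obligations_in_force(date(2026, 3, 13)))
# As of March 2026, the prohibited-practices and GPAI obligations are live.
```

A tracker like this makes the core point concrete: two of the four compliance regimes are already enforceable today.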
## Does This Apply to UK Businesses?
Brexit did not insulate UK companies from the EU AI Act. The Act's extra-territorial provisions follow the same logic as GDPR: if your AI system is placed on the EU market, used by EU-based operators, or produces outputs that affect people in the EU, the Act applies to you regardless of where your company is incorporated.
This means the Act is directly relevant to any UK business that:
- Sells software, SaaS, or AI-powered products to EU customers
- Provides professional services to EU clients using AI tools (legal, financial, HR, recruitment)
- Uses AI systems whose decisions affect EU employees or contractors
- Deploys AI in EU facilities or operations
- Sources AI models or tools from EU-regulated providers who require deployer compliance documentation
For UK-only businesses with no EU-facing AI activity, the Act does not apply directly. Voluntary alignment is increasingly sensible, however, both as UK regulation develops its own shape and as procurement requirements from larger UK organisations begin to reference EU AI Act compliance standards. We covered the foundational risk landscape in our earlier guide to the EU AI Act for UK businesses, which provides useful background on risk categories.
## EU vs UK AI Regulation — Where They Diverge
UK businesses operating across both jurisdictions need to understand that these are materially different regulatory frameworks, not different implementations of the same standard. Compliance with one does not mean compliance with the other.
| Dimension | EU AI Act | UK Approach |
|---|---|---|
| Legal framework | Single binding regulation with mandatory requirements | Principles-based, sector-led. No standalone AI Act as of March 2026. |
| Enforcement body | National Market Surveillance Authorities + EU AI Office | Existing regulators: ICO, FCA, CQC, SRA applying AI guidance within existing powers |
| High-risk categories | Defined in Annex III (exhaustive list) | No equivalent list. Sector regulators define risk within their remit. |
| GPAI model rules | Binding obligations on model providers (Article 53+) | Voluntary frameworks. The AI Security Institute (formerly the AI Safety Institute) focuses on the most capable models. |
| Fines | Up to €35m or 7% global turnover for prohibited practices | No dedicated AI fines regime. Enforcement via ICO (GDPR), FCA, sector regulators. |
| Automated decisions | Covered under high-risk provisions for employment, credit | Data (Use and Access) Act 2025 updates UK GDPR automated decision-making rights |
| Copyright & AI | GPAI providers must document copyright compliance policy | Reports due 18 March 2026 under Data (Use and Access) Act will shape UK policy |
The UK's principles-based approach offers more flexibility, but it also offers less predictability. Sector regulators are each developing their own AI expectations, which means a firm regulated by the FCA and the SRA simultaneously faces two separate AI compliance frameworks, neither of which is fully harmonised with the other or with the EU. We have covered FCA-specific AI expectations in detail in our guide to FCA AI compliance for financial services.
## The UK Copyright and AI Question
One dimension of UK AI regulation that comes to a head in March 2026 specifically is copyright. Under the Data (Use and Access) Act 2025, the UK government is required to publish two reports by 18 March 2026: one on the impact of AI on copyright holders, and one on the transparency of AI training data.
These reports will significantly shape UK policy on whether AI training on copyrighted material is permissible, under what conditions, and what transparency obligations should apply to model providers. For UK businesses that either train AI models or rely on third-party models trained on web-scraped data, the direction these reports take will matter.
The EU AI Act has already addressed this at the model level: GPAI providers must put in place a policy to comply with EU copyright law and publish a sufficiently detailed summary of the content used for training. UK rules have not yet reached the same specificity, but the March 2026 reports are likely to signal where they are heading. If you use or deploy AI models commercially, tracking this development is not optional.
## 5 Things to Audit by Q4 2026
The extension to December 2027 is useful runway, but 21 months disappears quickly when you factor in procurement cycles, legal review, technical documentation, and staff training. Organisations that want to be genuinely ready — rather than scrambling in late 2027 — should have the following five items addressed before the end of Q4 2026.
1. **Map and classify your AI systems.** Create a complete inventory of every AI system your organisation uses or deploys. For each one, determine its risk classification under the EU AI Act: prohibited, high-risk (Annex III), GPAI-based, or limited/minimal risk. If you provide AI services to EU customers, also assess your role: are you a provider, deployer, importer, or distributor? Each role carries different obligations. See our AI audit guide for a step-by-step approach.
2. **Confirm you are not already in breach on prohibited practices.** The February 2025 prohibited practices bans are enforceable now. Review whether any AI system you use or provide involves real-time biometric identification in public spaces, social scoring, manipulation of vulnerable groups, or any practice now covered by the March 2026 addition (NCII, CSAM). If you are uncertain whether a system qualifies, that uncertainty is itself a risk that requires legal review, not a deferral.
3. **Audit your GPAI model supply chain.** If your products or services are built on a third-party foundation model or LLM, your compliance position depends partly on that provider's Article 53 compliance. Request documentation confirming the provider's copyright compliance policy, training data transparency, and, for frontier models, systemic risk assessments. This is now standard due diligence for any EU-facing AI product. Providers who cannot produce this documentation are a liability.
4. **Begin your technical documentation and risk assessment files.** High-risk AI systems require a technical file covering: system description, intended purpose and foreseeable misuse, training data governance, performance metrics, human oversight mechanisms, and the conformity assessment process. Starting this documentation now, even if you are only at draft stage, means you will not be building it from scratch under deadline pressure in late 2027. Use the delay to get the foundations right. Our AI policy template provides a starting framework for internal governance documentation.
5. **Implement an AI staff literacy programme.** Article 4 of the Act requires operators to ensure that staff working with AI systems have sufficient AI literacy: an understanding of what the system does, its limitations, and how to identify and escalate issues. This obligation is not phased; it applies to all organisations using AI now. A basic internal training programme that documents who has received it creates a compliance record and reduces the risk of misuse that could trigger enforcement attention under other provisions.
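The triage logic in step 1 can even be sketched as code. The sketch below is illustrative only: the category keywords, the `AISystem` record, and the classification labels are this article's simplified shorthand, not legal definitions from the Act, and a real classification decision needs legal review.

```python
from dataclasses import dataclass

# Simplified subsets for illustration -- not exhaustive legal lists.
ANNEX_III_AREAS = {"employment", "credit_scoring", "education",
                   "critical_infrastructure", "law_enforcement"}
PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "realtime_public_biometrics", "ncii_generation", "csam_generation"}

@dataclass
class AISystem:
    name: str
    use_case: str   # one keyword per system, for illustration
    role: str       # "provider", "deployer", "importer", or "distributor"
    eu_facing: bool

def classify(system: AISystem) -> str:
    """Provisional triage of a system into the article's risk buckets."""
    if not system.eu_facing:
        return "out_of_scope"        # no EU market or output nexus
    if system.use_case in PROHIBITED:
        return "prohibited"          # already enforceable today
    if system.use_case in ANNEX_III_AREAS:
        return "high_risk"           # December 2027 / August 2028 deadlines
    return "limited_or_minimal"

cv_screener = AISystem("CV screener", "employment", "deployer", eu_facing=True)
print(classify(cv_screener))  # high_risk
```

Even a rough first pass like this forces the two questions that matter: is the system EU-facing, and which bucket does its use case fall into?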
## Using the Delay Strategically, Not as an Excuse
The history of EU regulatory deadlines is not encouraging. With GDPR, most large organisations spent the first year after the May 2018 enforcement date still conducting their data mapping exercises. The cost of catching up under deadline pressure — in external legal fees, rushed technical work, and emergency board-level attention — was substantially higher than a measured two-year preparation would have been.
The EU AI Act delay to December 2027 gives well-organised businesses a genuine opportunity to do this properly. The organisations that will be in the strongest position at the end of 2027 are the ones who used 2026 to get their AI inventory documented, their GPAI supply chain audited, and their high-risk system documentation at least at draft stage — rather than treating the extended deadline as permission not to start.
> "A deadline extension is not a compliance holiday. It is an invitation to do the work without the cost of a crisis."
For businesses in regulated UK sectors, the EU AI Act compliance process is also a useful vehicle for getting AI governance in order ahead of tightening domestic requirements. The ICO, FCA, and SRA are each moving in the same direction as the EU Act — towards documented risk assessments, human oversight requirements, and transparency obligations — even if the specific rules differ. A compliance framework built for the EU AI Act will cover most of what UK sector regulators will eventually require.
## What to Do This Week
Three actions have the highest immediate value relative to the time they require:
- Check your prohibited practices exposure today. Do not assume the February 2025 bans do not apply because you are not a tech company. Any organisation using AI for employment screening, customer segmentation, or real-time identification should have a legal opinion on whether their system crosses any of the prohibited lines.
- Request GPAI documentation from your AI model vendors. If you are using OpenAI, Anthropic, Google, or any other GPAI provider in an EU-facing product, email your account manager requesting their Article 53 compliance summary. Competent providers have this ready. If yours cannot produce it, escalate the question.
- Start the AI inventory. Even a simple spreadsheet listing every AI system in use, its purpose, its provider, and the data it processes is a better foundation than nothing. This list will be the starting point for every subsequent compliance step.
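Even the spreadsheet step can be scripted. Here is a minimal starter for the inventory described above, using Python's standard `csv` module; the column names, the example row, and the vendor name are suggestions for illustration, not an official template.

```python
import csv

# Suggested columns for a first-pass AI inventory (not an official template).
COLUMNS = ["system_name", "purpose", "provider", "data_processed",
           "eu_facing", "provisional_risk_class", "owner"]

rows = [
    {"system_name": "CV screening tool",
     "purpose": "shortlist applicants",
     "provider": "ExampleVendor Ltd",   # hypothetical vendor
     "data_processed": "applicant CVs",
     "eu_facing": "yes",
     "provisional_risk_class": "high-risk (Annex III, employment)",
     "owner": "HR"},
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

A file like this is crude, but it gives legal, procurement, and technical teams a single shared artefact to refine as the compliance work progresses.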
The regulatory landscape for AI is evolving faster than most compliance teams can comfortably track. But the EU AI Act's phased structure means there are defined milestones to plan against. The December 2027 deadline for high-risk AI systems is the most significant, and the time to begin preparing is now — not when the enforcement date is six months away.