
EU AI Act Explained: How to Get Ready 

The European Union has passed its landmark EU Artificial Intelligence Act, the first comprehensive regulation aimed at governing how AI systems are developed, deployed and monitored across its member states.  

Although the UK is no longer part of the EU, many UK businesses, especially those operating AI or offering AI-enhanced services to EU customers, will be impacted by this law.  

The stakes are high. Failure to comply could result in significant fines, reputational damage and constraints on your ability to serve EU markets. 

In this guide, we’ll walk you through everything you need to know to navigate the EU AI Act, from definitions, obligations, deadlines and applicability to how your current compliance frameworks help you, a clear roadmap of readiness steps to follow, and common pitfalls to avoid. By the end, you’ll have a practical blueprint to get your organisation prepared, not just reactive. 

Key Takeaways 

  • The EU Artificial Intelligence Act is the world’s first comprehensive AI law, imposing obligations based on risk level. 
  • It applies to UK organisations offering AI systems or services that reach or impact people within the EU. 
  • Compliance will be phased from 2025 through 2027, so early preparation is essential. 
  • Organisations certified under ISO 27001, ISO 42001, NIS2 or GDPR are already partly aligned with key obligations. 
  • This guide explains what to do now to prepare your organisation and avoid compliance gaps. 

Introduction – A New Era of AI Regulation 

Artificial Intelligence (AI) is rapidly becoming integral to business operations, from automated decisioning and predictive analytics, to chatbots, fraud detection and security automation.  

The EU Artificial Intelligence Act – commonly referred to as the EU AI Act – represents the European Commission’s effort to impose a consistent, enforceable regime that ensures AI systems are safe, transparent, trustworthy and respectful of fundamental rights. 

So why should UK businesses care, post-Brexit? Because the Act includes extra-territorial reach. If your AI systems are placed on the EU market, make decisions affecting EU citizens, or are operated in that context, you will likely fall within the scope of this new regulation.  

The phased enforcement of the new Act means that obligations are gradually kicking in, making it critical to embed AI governance and compliance readiness now rather than later.  

Globally, other jurisdictions are taking note. The EU AI regulation framework is becoming a de facto benchmark, much as GDPR reshaped global privacy practice following its adoption in 2016. 


What is the EU Artificial Intelligence Act? 

The EU Artificial Intelligence Act is the first legislation of its kind, designed to regulate how AI is developed and used within the European Union.  

It introduces a comprehensive, risk-based framework aimed at ensuring AI systems are safe, transparent and aligned with ethical and legal standards. For UK organisations interacting with EU markets, understanding its structure and obligations is essential to remain compliant and competitive. 

Purpose and Scope 

The EU Artificial Intelligence Act is a regulation, not mere guidance. Once in force, it’s directly applicable in EU member states. Its primary purpose is to ensure that AI systems used or offered in the EU are safe, respect fundamental rights, and are subject to adequate governance and oversight mechanisms. 

“AI” under the Act is broadly defined. It covers machine learning (ML), logic- and knowledge-based approaches, and statistical and probabilistic methods.  

The Act applies not only to AI developers (providers), but also to deployers (users), importers, distributors and organisations that integrate AI into products or services. 

For UK organisations, that means if you provide AI or AI-enabled services to EU customers, use AI that affects EU citizens, or operate AI systems whose outcomes reach the EU, you can’t ignore the Act. 

The Risk-Based Classification System 

One of the hallmarks of the EU AI Act is its risk-based approach. AI systems are categorised into four tiers: 

  1. Prohibited (Unacceptable Risk)
    Some AI uses are outright banned. Examples include: social scoring by governments, behaviour manipulation that undermines autonomy, real-time remote biometric identification in publicly accessible spaces (with limited exceptions), and systems exploiting vulnerable populations. These prohibitions take effect from 2 February 2025.  
  2. High Risk
    These systems are permitted, but subject to strict obligations. They include sectors such as critical infrastructure, biometric identification, education, employment, credit scoring, law enforcement, medical devices, and other systems listed in Annex III. 
  3. Limited Risk
    Systems with moderate risk might require transparency obligations, for example informing users they are interacting with AI, but are less heavily regulated. 
  4. Minimal Risk
    This is the default category for most AI systems. These are largely unrestricted under the Act, beyond good general practice and existing laws. 

For example, a UK business providing a fraud detection AI tool to EU clients would likely fall into the high-risk category, while a simple chatbot for general FAQs may be limited or minimal risk, depending on its design and impact. 

High Risk AI Obligations Explained 

When an AI system falls into the high-risk category, technical and operational obligations kick in.  

These include: 

  • Risk management system: You must have a methodology to identify, assess, mitigate, monitor and reevaluate risks through the AI lifecycle. 
  • Data governance and quality: You must ensure datasets used for training, validation and testing are relevant, representative, free of bias, and properly documented. 
  • Technical documentation & record-keeping: Maintain a detailed technical file (model rationale, performance, limitations, misuse risk, change logs). 
  • Logging & traceability: Comprehensive logs of data input, model version, prompts or queries, changes over time, and decisions made. 
  • Transparency and user information: Inform users of how the AI system operates, limitations, and instructions for use. 
  • Human oversight: Mechanisms allowing human review, override or intervention, especially in critical decisions. 
  • Robustness, security, accuracy: Systems must be resilient to attacks, for example adversarial inputs, ensure consistency and reliability, and guard against data/model poisoning. 
  • Post-market monitoring & incident reporting: You must monitor real-world performance, detect drift or anomalies, and report serious incidents to competent authorities. 
  • Conformity assessment and CE marking: High-risk systems must undergo conformity procedures (internal or via a notified body) before they are placed on the EU market. 

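To make the logging and traceability obligation more concrete, here is a minimal sketch of the kind of structured, append-only record you might keep for each AI-assisted decision. The field names and the JSON Lines format are illustrative assumptions for this guide, not a schema prescribed by the Act.

```python
# Illustrative decision-log record for AI traceability.
# Field names are assumptions for this sketch; the Act does not prescribe a schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AIDecisionLog:
    system_id: str                 # internal identifier of the AI system
    model_version: str             # exact model/build version that produced the output
    timestamp: str                 # when the decision was made (UTC, ISO 8601)
    input_reference: str           # pointer to the stored input data or prompt
    output_summary: str            # the decision or score produced
    human_reviewer: Optional[str]  # who reviewed or overrode the output, if anyone

def record_decision(log: AIDecisionLog, path: str = "ai_decision_log.jsonl") -> None:
    """Append one decision record to an append-only JSON Lines log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(log)) + "\n")

record_decision(AIDecisionLog(
    system_id="fraud-detection-v2",
    model_version="2.4.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_reference="s3://logs/inputs/txn-10422.json",   # hypothetical storage path
    output_summary="flagged: high fraud probability (0.93)",
    human_reviewer=None,
))
```

An append-only store like this makes it straightforward to evidence which model version produced which output, and whether a human reviewed it.
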
Enforcement is national. Member states must designate authorities to oversee compliance. Penalties for non-compliance are severe: up to €35 million or 7% of global annual turnover for prohibited practices, and up to €15 million or 3% for other violations.  

Key Compliance Dates and Phased Deadlines 

One of the more complex aspects of the EU AI Act is its staged implementation. Here’s a timeline of the dates you need to watch closely: 

Milestone | Obligation | Effective Date 
Act enters into force | The regulation is officially on the books | 1 August 2024 
Prohibited practices begin | Banned AI uses (unacceptable risk) | 2 February 2025 
Governance & GPAI rules apply | Transparency, general purpose AI obligations, and structural rules | 2 August 2025 
High-risk AI obligations | Full obligations for most high-risk systems | 2 August 2026 
Annex I systems (embedded in regulated products) | Extended transition for certain safety systems | 2 August 2027 

It’s critical to note that you shouldn’t wait until 2026 to start building compliance readiness. Some of the most consequential obligations, including prohibited uses and general governance, are already active. 

The European Commission has confirmed it will hold firms to the existing timeline.  

The EU AI Office has published codes of practice for General Purpose AI (GPAI) models to guide compliance. These codes are non-binding, though alignment may lead to a reduced administrative burden.  

Does the EU AI Act Apply to UK Companies? 

One of the biggest misconceptions is that, because the UK is now outside of the EU, UK companies are exempt. That’s not the case in many situations, as we’ll outline below. 

Extra-territorial Reach 

The EU AI regulation applies to providers and deployers located outside of the EU, if any of the following applies: 

  • Their AI system is placed on the EU market. 
  • Their AI system produces outputs used in the EU (i.e. it affects people in the EU). 
  • They are involved in the value chain of an AI system offered in the EU. 

What UK companies are in scope for the EU AI Act? 

Here’s a list of real-world examples that illustrate who would be in scope to comply with the Act:  

  • A UK AI SaaS provider with EU customers. 
  • A UK company supplying fraud detection or risk scoring AI to EU banks. 
  • A managed services provider embedding AI modules in platforms sold to EU users. 
  • An analytics vendor whose model is used by EU clients to make decisions. 
  • AI models with “cross-border effects” (e.g. translation, content moderation) that affect EU individuals. 

UK Regulatory Parallel 

The UK Government has expressed a preference for a principles-based approach rather than duplicating the EU’s prescriptive regulation. At present, there’s no UK AI Act with equivalent force.  

But many UK firms will face dual obligations, needing to adhere to the European Union Artificial Intelligence Act for EU reach, and also to align with UK principles/regulations. 

Getting ahead on EU compliance will also put you in a strong position for any future UK regulation. 


Overlapping Frameworks and Partial Compliance 

There are some existing compliance frameworks that will give you a head start on the EU AI Act. If you comply with the following frameworks, you’re ahead of the game, but we’ll also share where you’ll need additional controls. 

How ISO 27001 and ISO 42001 Help 

  • ISO 27001 — An ISO 27001-certified Information Security Management System (ISMS) already provides robust governance, logging, supplier management, change control, access control and continuity. All of these are foundational for AI risk operations. 
  • ISO 42001 (AI Management System, AIMS) — This newer standard is designed to map strongly to the requirements of the AI Act (lifecycle management, audit trails, model change control, oversight). Industry reports confirm that ISO 42001 can be a vehicle to meet technical and organisational AI Act demands.  

Once harmonised standards (currently being developed through CEN/CENELEC) are published and adopted by the EU, compliance with ISO 42001 may give a presumption of conformity under the Act.  

GDPR and Data Governance Alignment 

  • Your GDPR processes already enforce a lawful basis, transparency, data minimisation, DPIAs and data subject rights. These are vital to AI data handling. 
  • The AI Act goes further than GDPR, demanding dataset representativeness, bias testing, provenance tracking, data cleansing, and documentation of model training/validation/test splits. These requirements are new, but your GDPR controls give you a head start. 

Leveraging Other Frameworks: NIS2, DORA, SOC 2 

  • Sector-specific regulations such as NIS2 and DORA include incident reporting, logging, supplier resilience, cyber risk management and oversight. These correlate strongly with the AI Act’s post-market monitoring and incident duties. 
  • SOC 2 – The control domains (security, availability, processing integrity, confidentiality) help you build and document technical validation, change control, traceability and audit evidence for AI systems. 

Framework Mapping 

AI Act Obligation | Overlapping Frameworks | Coverage Level 
Risk management (Art. 9) | ISO 27001, ISO 23894, NIST AI RMF | High 
Data governance (Art. 10) | GDPR, ISO 27701, ISO 42001 | Partial 
Logging & traceability | ISO 27001, NIS2, DORA | High 
Technical documentation & model files | ISO 42001, SOC 2 | Partial 
Human oversight | ISO 42001 | Partial 
Incident reporting / post-market | ISO 27001, DORA | High 
Conformity assessment / CE mark | ISO 42001, harmonised standards | Minimal initially 

If your organisation is already ISO 27001 certified or GDPR compliant, you’re well on your way. However, you will need to extend existing controls to cover AI-specific aspects like model versioning, bias testing, transparency disclosures and drift monitoring. 

In priority order, the frameworks you should align with are: 

  • ISO/IEC 27001 — your ISMS backbone 
  • ISO/IEC 42001 — AI management system for regulation alignment 
  • ISO/IEC 23894 — AI risk management guidance 
  • NIST AI RMF 1.0 — to operationalise “govern, map, measure, manage” 
  • GDPR — especially for data handling in AI 
  • NIS2 / DORA — for residual cyber risk, resilience and incident alignment 

How to Get Ready Now — Practical Steps for Compliance 

Here is a readiness roadmap to move from theory to action. Use this as your internal working plan: 

Step 1 – Identify AI Systems in Use 

Begin with a comprehensive inventory (a minimal record format is sketched after the list below): 

  • List AI/ML systems (in-house, vendor, embedded). 
  • Note the functions, user impact, data types, and whether they engage EU users. 
  • Classify by likely risk category (unacceptable, high, limited, minimal). 
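
As a starting point, the inventory can be as simple as a structured record per system. The sketch below uses a Python dataclass; the fields are assumptions based on the points above and should be adapted to your own register.

```python
# Illustrative inventory entry for an AI system register (field names are assumptions).
from dataclasses import dataclass
from typing import List

@dataclass
class AISystemRecord:
    name: str               # e.g. "fraud scoring model"
    owner: str              # accountable team or role
    source: str             # "in-house", "vendor" or "embedded"
    function: str           # what the system does and whom it affects
    data_types: List[str]   # categories of data processed
    eu_exposure: bool       # does it reach or affect people in the EU?
    likely_risk_tier: str   # "unacceptable", "high", "limited" or "minimal"

inventory: List[AISystemRecord] = [
    AISystemRecord(
        name="customer support chatbot",
        owner="Digital Services",
        source="vendor",
        function="answers FAQs on the public website",
        data_types=["free-text queries"],
        eu_exposure=True,
        likely_risk_tier="limited",
    ),
]
```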

Step 2 – Assess Risk Level and Applicability 

  • Map each system against Annex III of the EU AI Act (a rough screening sketch follows this list). 
  • For systems that are likely high-risk, document their use cases, decision criticality and potential harms. 
  • Prioritise for early assessment those most likely to face scrutiny. 
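
If you have a large inventory, a rough first-pass screen can help you prioritise which systems need proper legal review. The sketch below uses an abbreviated, assumed list of Annex III areas and simple keyword matching; it flags candidates for assessment, it does not decide classification.

```python
# Rough first-pass screen against (an abbreviated list of) Annex III areas.
# Deliberately simplistic: it surfaces systems for legal review only.
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "credit scoring",
    "law enforcement",
}

def likely_high_risk(use_case_description: str) -> bool:
    """Return True if the described use case mentions an Annex III area."""
    text = use_case_description.lower()
    return any(area in text for area in ANNEX_III_AREAS)

print(likely_high_risk("AI-assisted CV screening for employment decisions"))  # True
print(likely_high_risk("Chatbot answering product FAQs"))                     # False
```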

Step 3 – Strengthen AI Governance 

  • Extend your ISMS to include AI governance — integrate ISO 42001 or NIST AI RMF practices. 
  • Define roles and responsibilities: AI risk owner, data steward, compliance liaison. 
  • Establish change control, versioning policies and guardrails for AI modifications. 

Step 4 – Embed Data Quality and Transparency Controls 

  • Document training, validation and test dataset provenance, attributes and distribution. 
  • Perform bias and fairness assessments and log mitigation steps. 
  • Produce “model cards” or system fact sheets describing limits, performance, misuse risks and intended use. 
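
A model card can be a simple structured document kept alongside the technical file. The sketch below shows one possible structure as a Python dictionary; the fields and values are illustrative assumptions, loosely following common model card practice rather than a format mandated by the Act.

```python
# Illustrative "model card" / system fact sheet, kept with the technical documentation.
model_card = {
    "system_name": "fraud-detection-v2",
    "intended_use": "Flag potentially fraudulent card transactions for human review",
    "out_of_scope_uses": ["automated account closure without human review"],
    "training_data": {
        "source": "12 months of anonymised transaction records",
        "known_gaps": "under-representation of low-volume merchant categories",
    },
    "performance": {"precision": 0.91, "recall": 0.84, "evaluation_date": "2025-06-30"},
    "bias_assessment": "Tested for disparate flag rates across customer age bands",
    "limitations": "Accuracy degrades on transaction types unseen in training",
    "human_oversight": "All flags reviewed by the fraud operations team before action",
}
```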

Step 5 – Update Policies, Contracts and Vendor Controls 

  • Add AI compliance clauses in contracts: model access, audit rights, versioning transparency, data provenance warranties. 
  • Require vendors and subcontractors to commit to the EU AI Act obligations or support your compliance. 
  • Update internal policies (change, incident, procurement, risk) to include AI systems. 

Step 6 – Plan for Post-Market Monitoring and Incident Reporting 

  • Integrate AI failure, drift or anomaly detection into your existing incident response framework (a minimal sketch follows this list). 
  • Define serious incident triggers (e.g. a model malfunction causing harm). 
  • Prepare to notify competent authorities in the EU as required. 
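
Post-market monitoring can start small. The sketch below compares a live performance metric against the baseline recorded at conformity assessment and escalates when it degrades beyond a tolerance; the metric, threshold and notify_incident_team helper are assumptions standing in for your own monitoring and incident tooling.

```python
# Minimal drift check: escalate when a live metric falls materially below the
# documented baseline. Threshold, metric and helper are assumptions for this sketch.
BASELINE_PRECISION = 0.91   # value recorded in the technical documentation
TOLERANCE = 0.05            # acceptable degradation before escalation

def notify_incident_team(message: str) -> None:
    """Stand-in for your existing incident response tooling (ticketing, SIEM, etc.)."""
    print(f"[AI INCIDENT] {message}")

def check_for_drift(live_precision: float) -> None:
    """Escalate if live precision falls materially below the documented baseline."""
    if BASELINE_PRECISION - live_precision > TOLERANCE:
        notify_incident_team(
            f"Model precision dropped to {live_precision:.2f} "
            f"(baseline {BASELINE_PRECISION:.2f}); assess whether this is reportable."
        )

check_for_drift(live_precision=0.83)  # triggers escalation in this example
```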

Step 7 – Prepare for GPAI Requirements (from 2025) 

  • If you use or offer General Purpose AI models, start preparing transparency disclosures and risk mitigation plans now. 
  • Review the codes of practice published by the EU AI Office and align to them. 
  • While code adherence is voluntary, following them may reduce administrative burden.  


Common Compliance Pitfalls to Avoid 

As you execute this roadmap, watch out for these traps: 

  • Assuming the Act does not apply to UK-based systems — many UK firms are in scope. 
  • Treating AI governance purely as a technical exercise — it must integrate legal, ethical, business and technical functions. 
  • Not embedding AI risk into your existing ISMS — siloing AI work makes audits and compliance harder. 
  • Ignoring dataset bias, representativeness or provenance — these are assessed closely. 
  • Waiting for regulators’ guidance before acting — many obligations are legally active already. 

How DigitalXRAID Can Help 

Navigating a regulation as complex and new as the EU AI Act demands deep technical, compliance and governance expertise. At DigitalXRAID, we offer: 

  • AI readiness assessments, AI compliance consultancy and gap analysis mapped to the EU AI Act, ISO 27001, ISO 42001, NIS2 and DORA. 
  • Advisory support in framework alignment, documentation design, contract upgrading and control implementation. 
  • 24/7 SOC & compliance experts to help integrate AI security controls, monitoring, incident response and resilience. 
  • A compliance partner you can trust — we already deliver many cybersecurity and regulatory services, so AI compliance is a natural extension. 

Get in touch with our compliance specialists to get started on your EU AI Act readiness. 

Final Thoughts: Compliance with the EU AI Act 

The EU Artificial Intelligence Act ushers in a new regulatory reality for AI in Europe — and UK organisations are increasingly part of that landscape. By leveraging your existing compliance foundation and acting early, you can transform regulation from a burden into a differentiator. 

Start with system inventory and risk classification, embed AI governance into your ISMS, document your data and model logic rigorously, and plan for monitoring and incident reporting. The phased deadlines demand urgency, and gaps left unaddressed will only become more expensive over time. 

Adopting ISO 42001 alongside your existing ISO 27001 and GDPR programmes will set you up not only to satisfy the EU AI Act, but also to anticipate future regulatory demands.  

Take the first step now – and position your organisation ahead of compliance, not scrambling behind it. 


FAQs: EU AI Act 

When will the EU AI Act be enforced? 

  • The regulation itself took effect on 1 August 2024.  
  • Prohibited AI uses begin 2 February 2025.  
  • Governance and GPAI obligations apply 2 August 2025.  
  • High-risk AI obligations in full take effect 2 August 2026.  
  • For AI systems embedded in regulated products (Annex I), compliance extends until 2 August 2027.  

Does it apply to UK-based AI providers? 

Yes — if your AI systems are placed on the EU market, have outputs affecting EU citizens, or you are part of the AI value chain serving Europe, you are in scope. 

What are the penalties for non-compliance? 

Penalties include up to €35 million or 7% of global turnover for prohibited practices and up to €15 million or 3% for other violations.  

How does it differ from GDPR or UK AI policy? 

Where GDPR regulates personal data, the AI Act regulates algorithms, models, datasets, transparency, oversight and operational safety, whether or not personal data is involved. The UK has signalled a more flexible, principles-led approach rather than strict prescriptive rules, but UK firms dealing with the EU will still need to meet the AI Act. 
