EU AI Act Summary: Obligations and Exceptions for Businesses

by Dr. Phil Winder, CEO

The European Union (EU) Artificial Intelligence (AI) Act has been signed into law. In previous articles and webinars I investigated the potential impact of the Act on those that use AI systems. The Act has changed significantly since the draft version was released, but now that it has been signed I can provide a summary based on the final text.

The information provided here summarizes the key elements of the EU AI Act and is not a substitute for legal advice, which you should seek if you are unsure how the EU AI Act applies to your business.

If you need any help with the technical implementation, please consider our AI consulting services.

This article is written from the perspective of an engineer using AI for business purposes. I ignore any part of the regulation that relates to national or internal security (e.g. military or police use). Additionally, I omit discussions on the more controversial aspects, such as impacts on civil liberties and potential over-regulation.

Throughout these notes I have referenced the section of the Act that I am summarizing. For example, Art. 5(1)(f) means Article 5, point 1, sub-point f. Direct links to the relevant section are provided where possible.

Video Of This Post

The following video is a recording of a webinar based on this article. It doesn’t go into quite as much detail, but you should find it useful if you prefer to listen rather than read.


Goal of the EU AI Act

The EU AI Act provides a set of AI regulations that aim to make the consumer use of AI more trustworthy, safe, and fair. The Act introduces a risk-based approach with proportionate rules. The Act is based on the 2019 Ethics guidelines for trustworthy AI developed by the independent AI High-Level Expert Group.

Many of the requirements set out below are in line with engineering best practice and are likely to be familiar to engineers who work in regulated industries. However, the AI law does introduce new burdens, especially for those that use AI systems in high-risk applications.

Brief Glossary

Before diving into the details, it is important to understand some key definitions used in the Act. Please refer to the introductory notes and Art. 3 for full definitions. I use familiar approximate language where appropriate, such as GenAI instead of “general-purpose AI models”.

  • AI System: The definition of AI is broad, meaning any autonomous system capable of generating outputs from inputs. (Art. 3(1))
  • Provider/Deployer/Importer/Distributor/Operator: Anyone placing an AI system on the market. The requirements differ subtly depending on how you are exposing the system; e.g. distributors have to ensure systems have been appropriately registered. (Art. 3(3-8))
  • making available on the market: the “supply of … in the course of a commercial activity, whether in return for payment or free of charge” – this is a crucial definition because it might suggest that all internal applications are exempt. (Art. 3(10))
  • Performance: the ability to achieve its intended purpose. (Art. 3(18))
  • Biometric categorisation system: Using biometric data (e.g. sex, age, tattoos, behavioral traits) in classification. (Art. 3(40))
  • Serious incident: death, disruption to critical infrastructure, infringement of fundamental rights, serious harm to property or the environment. (Art. 3(49))
  • Profiling: use of personal data to analyze or predict personal aspects. (EU 2016/679 Art. 4(4))
  • real-world testing plan: a document. (Art. 3(53))
  • testing in real-world conditions: temporarily putting an AI system into service to prove compliance, provided that Art. 57 and 60 are fulfilled. (Art. 3(57))
  • general-purpose AI model/system: any AI model that is capable of competently performing a wide range of distinct tasks. Despite it not being precisely correct, I refer to these models as GenAI. (Art. 3(63) and Art. 51)

Applicability of the EU AI Act

The EU AI Act applies to any party that exposes an AI system to be used in the EU. (Art. 2(1)(a))

So it doesn’t matter whether you’re based in the EU or not: if you’re exposing an AI system for use in the EU market, you must comply.

General Exceptions

Models released under a “free and open-source license” are exempt from all obligations. (Art. 53(2))

This means that if you release your AI system under an open-source license you are exempt from all the requirements of the EU AI Act. This is a significant exception and may be a good reason to consider open-sourcing your AI systems.

Exceptions to the General Exceptions

These exceptions do not apply to general-purpose AI models with “systemic risks”. (Art. 51)

Unfortunately, if you release a GenAI model with “high impact capabilities” you must comply with the regulation, irrespective of whether it is open source or not. See Obligations for Providers of GenAI Models for more details.

Obligations For All Use Cases

This section applies to all AI systems, irrespective of the use case or risk level.

The provider of any AI system must register that system in the EU database referred to in Art. 71. (Art. 49(2))

AI systems that interact directly with natural persons must ensure that the users are aware they are interacting with an AI system. (Art. 50(1))

AI systems that generate synthetic content must ensure that the outputs are marked as such in a machine-readable format. (Art. 50(2))
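To make this concrete, here is a minimal sketch of what a machine-readable marking could look like: the generated content is wrapped in an envelope that carries provenance metadata. The envelope format and field names are my own invention for illustration; for images, audio, and video, emerging provenance standards such as C2PA content credentials are more likely to become the accepted approach.

```python
import json
from datetime import datetime, timezone

def wrap_with_provenance(content: str, generator: str, version: str) -> str:
    """Attach a machine-readable marker stating that the content is AI-generated.

    This is a hypothetical envelope format, not a format mandated by the Act.
    """
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,  # the disclosure required by Art. 50(2)
            "generator": generator,  # e.g. the product or model name
            "generator_version": version,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

print(wrap_with_provenance("Example synthetic text.", "acme-genai", "1.2.0"))
```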

Obligations for Specific Use Cases

The following sections describe the use cases or applications where the regulation applies. As this article targets business users, I omit exceptions and alterations for policing, military, or border control uses to simplify the analysis.

Prohibited Applications

The following use cases are prohibited:

  • any AI system that causes an individual to make a decision that they otherwise would not have taken and that harms themselves or others. (Art. 5(1)(a))
  • any AI system that exploits an individual’s vulnerabilities, causing them to make a decision that harms themselves or others. (Art. 5(1)(b))
  • any AI system that results in detrimental treatment that is unjustified or that occurs in social contexts unrelated to those in which the data was originally collected. (Art. 5(1)(c))
  • any AI system used to infer emotions in the workplace or educational institutes. (Art. 5(1)(f))
  • any biometric categorisation system. (Art. 5(1)(g))

High Risk Applications

The following use cases are deemed “high risk”:

  • any AI system that is, or is used in, an application that ensures safety. (Annex III(1))

  • biometric identification systems. (Annex III(1)(a))

  • any other biometric categorisation or emotion recognition applications that are not prohibited. (Annex III(1)(b-c))

  • any AI system used in critical infrastructure. (Annex III(2))

    Note: The wording here is particularly poor and at first glance suggests safety components only. But the recitals at the start make it clear that the intention is any AI system used in the supply of water, gas, heating, and electricity.

  • any AI system used in education that: (Annex III(3))

    • determines access to education. (Annex III(3)(a))
    • evaluates outcomes. (Annex III(3)(b-c))
    • monitors or detects “prohibited behavior during tests.” (Annex III(3)(d))
  • any AI system used in employment that: (Annex III(4))

    • is used for recruitment selection. (Annex III(4)(a))
    • is used to make decisions affecting contracts. (Annex III(4)(b))
  • any AI system that is used to determine access to essential services: (Annex III(5))

    • in healthcare or public benefits. (Annex III(5)(a))
    • credit scoring, except for detecting fraud. (Annex III(5)(b))
    • life and health insurance. (Annex III(5)(c))
    • dispatching emergency services. (Annex III(5)(d))
  • any AI system that is used to administer justice or democratic processes: (Annex III(8))

    • used by the judiciary. (Annex III(8)(a))
    • influencing the outcome of an election. (Annex III(8)(b))

High-Risk AI System Exceptions

Art. 6(3) makes exceptions to the above where:

  • the task is “narrow” (Art. 6(3)(a))
  • the AI system improves the result of a previously completed human activity (Art. 6(3)(b-c))
  • the AI system performs a “preparatory task” (Art. 6(3)(d))

No exceptions for profiling. (Art. 6(3))

If the provider believes they are exempted, they must document the assessment. (Art. 6(4))

Example Implementations

Art. 6(5) says that the EU Commission will provide examples of high-risk applications and implementations “no later than 2 February 2026.”

Surprisingly, this is 6 months after GenAI models must comply (see Timelines).

Future Changes

The EU Commission may change the definitions of prohibited and high-risk applications over time. (Art. 7) They may also change the exceptions. (Art. 6(6-8))

Art. 7(2) provides a solid framework for how the EU Commission evaluates the risk of an application.

Requirements for High-Risk AI Systems

First, all products that use an AI system must also comply with all other applicable laws, known as “Union harmonisation legislation” (Art. 8(2)); see Annex I.

Once high-risk systems are compliant, they must include a “CE” mark followed by the identification number of the notified body responsible.

Risk Management System

Art. 9 states that a risk management system must be used throughout the lifecycle of the AI system. It must identify and evaluate risks. (Art. 9(2)) Risks must then be mitigated through design, development or provision of adequate information. (Art. 9(3-5))

AI systems and risk management measures must be tested. (Art. 9(6,8)) Tests may include testing in real-world conditions. (Art. 9(7))

Special consideration should be given if there is a risk to persons under the age of 18 or other vulnerable groups. (Art. 9(9))

A fundamental rights impact assessment must be performed, see the article for more details. (Art. 27)
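Art. 9 doesn’t prescribe any tooling, but in engineering terms the natural starting point is a risk register that is reviewed throughout the lifecycle. Below is a minimal, hypothetical sketch of one; the fields and the likelihood/severity scoring scale are my own assumptions rather than terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    affected_persons: str   # e.g. "loan applicants", "persons under 18"
    likelihood: int         # 1 (rare) to 5 (almost certain) -- assumed scale
    severity: int           # 1 (negligible) to 5 (serious incident) -- assumed scale
    mitigation: str         # design change, testing, or information for deployers
    residual_accepted: bool = False

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def open_risks(self, threshold: int = 9) -> list:
        """Risks above the threshold that still need mitigation before release."""
        return [r for r in self.risks if r.score >= threshold and not r.residual_accepted]

register = RiskRegister()
register.add(Risk(
    description="Model under-predicts creditworthiness for a protected group",
    affected_persons="loan applicants",
    likelihood=3,
    severity=4,
    mitigation="Re-balance training data and add per-group performance tests",
))
print(register.open_risks())
```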

Quality Management System and Record Keeping

Providers of high-risk AI systems must have a quality management system. (Art. 17) The system must comprise a range of resources that include strategies and procedures for complying with the regulation, see the article for more details. (Art. 17(1))

The implementation of the quality management system should be proportionate to the size of the provider’s organization. (Art. 17(2))

The quality management system requirements may overlap with other harmonized standards, especially if your business is a financial institution. (Art. 17(4))

All documents must be retained for at least 10 years after the high-risk AI system has been placed on the market. (Art. 18)

Data Governance

Art. 10 sets out the general requirements for a data governance framework, but doesn’t explain how to implement one.

Training, validation, and testing datasets should all have governance practices that include the design choices, the data collection techniques, the data cleaning techniques, an assessment of the suitability, an examination of possible biases, appropriate measures to detect and prevent biases, and an identification of issues and how those can be addressed. (Art. 10(2-4))

Providers may exceptionally process personal data for the purposes of bias detection. (Art. 10(5)) There are strict safeguards around this.
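Art. 10 leaves the “appropriate measures” up to you. As one hypothetical example of an examination of possible biases, the sketch below computes the rate of positive outcomes per group for a protected attribute and flags large gaps. The attribute names, data layout, and threshold are illustrative assumptions, not requirements taken from the Act.

```python
from collections import defaultdict

def outcome_rates_by_group(records, group_key="sex", outcome_key="approved"):
    """Share of positive outcomes per group in a labelled training dataset."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(bool(row[outcome_key]))
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparity(rates, max_gap=0.1):
    """Flag the dataset for review if the gap between groups exceeds max_gap (assumed threshold)."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

dataset = [
    {"sex": "female", "approved": 1}, {"sex": "female", "approved": 0},
    {"sex": "male", "approved": 1}, {"sex": "male", "approved": 1},
]
rates = outcome_rates_by_group(dataset)
print(rates, flag_disparity(rates))
```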

Technical Documentation

Technical documentation about the high-risk AI system is required and must be kept up-to-date. (Art. 11)

The technical documentation must include the following:

  • Descriptions of the AI system and its intended use, from the perspective of a user. (Annex IV(1))
  • Descriptions of the technical approach of the AI system including key design decisions. (Annex IV(2))
  • Details of the monitoring capabilities of the system and ongoing operating procedures. (Annex IV(3))
  • Details about the metrics used to prove performance. (Annex IV(4))
  • Details about the risk management system. (Annex IV(5))
  • Administrative information and details about the quality management system. (Annex IV(6-8), Annex V-VII)
  • Details about the post-market monitoring plan. (Annex IV(9) and Chapter IX)

Technical Documentation Exceptions

SMEs and start-ups may provide the technical documentation in a simplified manner. (Art. 11(1)) Examples and the format of the simplified technical documentation are yet to be decided.

Logging

The high-risk AI system should automatically log events. (Art. 12(1-2))

Logging systems should, at a minimum, be capable of recording timestamps, inputs and outputs, and the identification of the natural persons involved. (Art. 12(3))

Logs should be retained for at least 6 months, unless another law requires otherwise (e.g. GDPR deletion requirements). (Art. 19)
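Neither article mandates a log format, but an append-only structured event log is a natural fit. The sketch below writes one JSON line per inference with a timestamp, hashed copies of the input and output, and an identifier for the person involved; the schema and the choice to hash are my own assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(log_path: str, system_id: str, user_id: str,
                  model_input: str, model_output: str) -> None:
    """Append a machine-readable record of a single inference event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "user_id": user_id,  # the natural person involved
        # Hashing keeps the log compact and limits stored personal data,
        # while still letting a specific input/output be matched later.
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(model_output.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_inference("cv-ranker.log", "cv-ranker-v2", "recruiter-042",
              "candidate CV text...", "score: 0.83")
```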

Monitoring

Providers must establish and document a monitoring system. Chapter IX introduces the need for such a system, whose goal is to systematically analyze data to ensure compliance with the rest of the regulation. But there aren’t any concrete requirements on what data the monitoring system should collect, other than a template for a plan that is due 18 months after the Act comes into force.

If providers establish a reasonably likely causal link between the AI system and a serious incident (as defined in the glossary), they must inform the surveillance authorities within 15 days. For the most urgent cases (see Art. 73), such as a widespread infringement or a serious incident affecting critical infrastructure, the report must be made within 2 days; in the event of a death, within 10 days.

Technical Documentation for Deployers

High-risk AI system providers must provide technical usage documentation to deployers. (Art. 13)

This must include instructions for use and the capabilities and limitations of the system. (Art. 13(a-b)) This information is quite similar to a model card.
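The Act doesn’t prescribe a format for this documentation, but since it resembles a model card, a minimal sketch of the deployer-facing information is shown below. Every field name and value is hypothetical and follows common model card practice rather than the exact wording of Art. 13.

```python
# Hypothetical example values; the structure mirrors a model card, not the Act's wording.
deployer_documentation = {
    "system": "cv-ranker-v2",
    "provider": "Example Corp",
    "intended_purpose": "Rank job applications for human review; not an automated hiring decision",
    "instructions_for_use": [
        "A recruiter must review every ranked shortlist before contacting candidates.",
        "Do not use for roles outside the job families the system was evaluated on.",
    ],
    "capabilities": {"languages": ["en", "de"], "max_cv_length_tokens": 4096},
    "limitations": [
        "Accuracy degrades on CVs with non-standard layouts.",
        "Not evaluated on candidates under 18.",
    ],
    "performance": {"precision_at_10": 0.81, "evaluation_date": "2024-06-01"},
    "human_oversight": "Confidence scores and feature attributions are exposed in the review UI.",
    "logging": "All inferences are logged as described in the technical documentation.",
}
```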

Human Oversight

Art. 14 states that people must be able to oversee AI systems to ensure ongoing risks are monitored and minimized.

If the AI system is provided to a deployer, then the oversight system must be usable by the deployer. (Art. 14(4))

Accuracy, Robustness, and Security

Art. 15 states that providers must ensure that AI systems are accurate and robust. The definitions of accuracy and robustness depend on the application, but the relevant metrics must be reported. (Art. 15(2-4))

Obligations for Providers of GenAI Models

Those that are publishing GenAI models in the EU must comply with the following requirements.

Providers of GenAI models must:

  • provide technical documentation (see below) (Art. 53(1)(a))
  • provide deployer documentation (see below) (Art. 53(1)(b))

If the GenAI model is deemed to have systemic risk (see below), providers must (Art. 55(1)):

  • evaluate the model
  • assess and mitigate risks
  • track, document, and report the use of GenAI models with systemic risk
  • ensure adequate cybersecurity measures
  • follow codes of practice as they are produced; these are due 9 months after the Act comes into force (Art. 56)

Definition of Systemic Risk

After a model is registered, the EU Commission will decide whether it presents a systemic risk using the following criteria (Annex XIII):

  • number of parameters
  • “quality” or size of the dataset
  • amount of computation used to train the model (greater than 10^25 FLOPs; see the sketch after this list)
  • input and output modalities of the model
  • benchmarks
  • market reach (at least 10,000 registered “business users”)
  • number of registered users
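The compute criterion is the only hard number in that list. A common community rule of thumb estimates training compute at roughly 6 FLOPs per parameter per training token, so a rough check against the 10^25 FLOPs presumption could look like the sketch below; the heuristic and the example model size are assumptions, not figures from the Act.

```python
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token for a dense transformer."""
    return 6 * parameters * training_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # compute presumption for "high impact capabilities"

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, presumed systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD}")
```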

Technical Documentation Requirements for GenAI Models

Providers required to submit technical documentation (Art. 53(1)(a)) must provide the following information at a minimum (Annex XI(1)):

  • A general description of the model, which includes information about intended usage, modality, the license, etc.
  • A detailed technical description of:
    • the integration requirements, model specification and training methodologies
    • the data used for training and testing and any potential biases
    • the computational resources required to train the model
    • the estimated energy consumption of the model

If the general-purpose AI model is found to have systemic risk, then the technical documentation must also include (Annex XI(2)):

  • Evaluation methodology, results and justifications
  • Measures for improving safety and robustness
  • Description of system architecture

Deployer Documentation Requirements for GenAI Models

Providers must provide documentation to deployers that includes at least the following (Annex XII):

  • intended use, acceptable use, distribution details, interaction patterns, software versions, architecture details, modalities, licensing.
  • developer documentation for integration, modalities, and information on training data.

Note that this overlaps with the previous technical documentation.

Obligations of Importers or Distributors

Third parties that are importing or distributing high-risk AI systems must verify that the system bears the required CE marking and is accompanied by a copy of the EU declaration of conformity. (Art. 23-24)

Sandboxes

Chapter VI refers to “regulatory sandboxes” that “support innovation”, which member states must have operational no later than 2 August 2026.

Interestingly, this is 12 months after GenAI models fall within the regulation (see Timelines).

Chapter VI does provide detail on how a sandbox should work, but since they aren’t due to enter use for at least 2 years I leave this chapter as future work.

EU AI Act Timeline

Art. 111 states that any high-risk AI system that has been on the market or “put into service” before 2 August 2026 shall be brought into compliance by 31 December 2030.

The above only applies if there are no “significant changes” in design. (Art. 111(2))

In summary, this provides a large cushion for those that already have high-risk AI systems in place. It is reasonable to assume that the majority of AI systems will be replaced or significantly updated within 6 years. But this could be a burden if you don’t have a central database recording all AI systems in use.

Application of the Regulation

Art. 113(a) states that the prohibited use cases will come into force on 2 February 2025.

Art. 113(b) states that the following shall apply from 2 August 2025:

  • The establishment of notifying authorities in member countries (Chapter III, Section 4)
  • GenAI models (Chapter V)
  • The establishment of the AI office (Chapter VII)
  • The establishment of penalties (Chapter XII)
  • Confidentiality (Art. 78)

Everything else, except high-risk applications, applies from 2 August 2026. For example, this is when the EU AI system database is due to be live. (Art. 71)

And finally, Art. 113(c) states that high-risk applications must abide by the regulation from 2 August 2027.

Enforcement and Penalties

The EU will delegate enforcement activities to surveillance authorities. Summarising Art. 74, the surveillance authorities responsible for assessing conformity will have the power to:

  • access source code
  • verify testing or auditing procedures
  • view all documentation
  • have full access to data

These are all treated with confidentiality (Art. 78).

Art. 75 extends the same powers to the AI Office.

If the surveillance authority finds an AI system on the market that does not comply with the regulation, the provider has 15 days to remove the product or bring it into compliance.

Chapter IX, Section 4, introduces slightly different powers in respect of providers of GenAI models, but they amount to roughly the same level of access.

EU AI Act Penalties

There are a range of penalties depending on the severity of the non-compliance. (Chapter XII, Art. 99)

The table below lists each violation and its maximum penalty. Fines are capped at the stated amount or at the percentage of annual worldwide turnover, whichever is higher.

Violation | Monetary Fine | Annual Worldwide Turnover
Prohibited use | €35 million | 7%
High-Risk use | €15 million | 3%
GenAI use | €15 million | 3%
Misleading information | €7.5 million | 1%

For SMEs, the cap is whichever of the two is lower.
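To make the “whichever is higher” (or, for SMEs, “whichever is lower”) rule concrete, the short sketch below computes the maximum possible fine for a given violation and annual worldwide turnover. The caps come from the table above; the example turnover is made up.

```python
CAPS = {  # (fixed cap in euros, share of annual worldwide turnover), from the table above
    "prohibited_use": (35_000_000, 0.07),
    "high_risk_use": (15_000_000, 0.03),
    "genai_use": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover: float, is_sme: bool = False) -> float:
    fixed_cap, turnover_share = CAPS[violation]
    candidates = (fixed_cap, turnover_share * annual_turnover)
    return min(candidates) if is_sme else max(candidates)

# Hypothetical company with EUR 1 billion annual worldwide turnover.
print(max_fine("prohibited_use", 1_000_000_000))               # 70,000,000 (7% > EUR 35m)
print(max_fine("prohibited_use", 1_000_000_000, is_sme=True))  # 35,000,000
```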

There are also a few notes about administrative fees.

Summary

In summary: the EU AI Act mandates compliance from all businesses exposing AI systems in the EU. There is a significant onboarding period that will soften the blow for those that already have AI systems in place. However, the Act does introduce clear new boundaries for those that use AI in high-risk applications.

But aside from the administrative burden, the requirements of the Act are in line with best engineering practices. The Act requires that AI systems are well documented, tested, and monitored. These are all things that businesses should be doing anyway. And there are also reduced burdens for small businesses and those that open-source their AI systems.

Overall, the regulation appears fair and reasonable. Being compliant with the regulation will make your AI systems more trustworthy and transparent. And that can only be a good thing for your business.

There are a few articles that were not relevant to this analysis. These are:

Art. 28-39 deal with the formation of the assessment bodies and the notification procedures.

Art. 40-43 discuss the interactions with other harmonized standards.

Art. 44-49 talk about markings and registration.

Chapter VII talks about the governance of the EU AI Act at a European level. This is relevant if you are interested in learning how the EU AI Act can change over time.

Chapter X talks about some aspects of the codes of conduct and guidelines that the AI office is to create.

Chapter XI discusses how the EU Commission delegates power.
