Introduction to the EU AI Act

by Dr. Phil Winder, CEO

This is a video of a presentation introducing the EU AI Act by explaining what it is, how it impacts you, and what you need to do. In subsequent webinars I will delve into the details and provide specific examples. We will also be speaking to other industry experts to provide their insight.

Presentation

Download Slides

Abstract

The EU AI Act is an extraordinary new piece of legislation that places expectations on companies that leverage AI models. The industry has never before had to deal with such constraints and so it is important that everyone understands these new expectations.

In this webinar I will introduce the legislation by explaining what it is, how it impacts you, and what you need to do. In subsequent webinars I will delve into the details and provide specific examples. We will also be speaking to other industry experts to provide their insight.

At the end of this talk you will understand the basic themes of the legislation and have a good understanding of how it might affect you.

Introduction

As an AI consultancy, we’re very interested in what’s coming around the corner in terms of regulation.

Technical industries have typically flown under the radar when it comes to legislation. But since technology plays such an important part in people’s lives, it’s getting harder to suggest that the industry is capable of regulating itself. Governments around the world are now considering enacting legislation that places basic requirements on vendors.

This presentation introduces the EU AI Act. I’ve split it into eight sections, which represent what I believe are the most important questions about the act.

Context

I’m investigating this legislation from the perspective of an engineer. I’m interested in the practical steps that I need to take to fulfil my obligations as a vendor.

I’m going to take the position of a company that operates in a space that is not controversial. I think this represents most people. But be aware that my interpretation may not be applicable if you work in the defence industry or with children.

I also want to be clear that I consider a lot of the implementation details to fall under the banner of technical AI Governance. These are procedures and systems to ensure AI products do what they are meant to do. This is not the same as corporate governance.

And finally, this is not legal advice. I am not a lawyer. Please take everything I say with a pinch of salt.

Status

Let’s start by talking about the current status of the act. In December 2023 the three branches of the EU legislature got together to debate the final wording of the act. We are now waiting for the publication of the final act, which will subsequently be voted into law in the EU Parliament.

So let’s be clear. The legislation is not finalised. It is not agreed, it is not in effect. The final legislation has not been published yet. This presentation is based upon the version that was produced in June 2023.

The media will have you believe that an agreement has been reached, but that is only in principle. The EU expects this to become law sometime in early 2024, and it should apply from around 2026. So we’ve still got a good few years to get our heads around what we need to do.

Also note that this is not the first AI law (despite everyone saying so). China’s similar AI law came into effect on 13 July 2023; the scope isn’t as broad, but it was the first AI law. Singapore also already has similar legislation.

Definitions

The AI Act has a big list of definitions for some of the more technical terms that it uses in the document. There are a few in particular that I want to highlight:

  • “AI system”: “data inputs” -> “machine learning and/or logic- and knowledge based approaches” -> “system-generated outputs”
  • “general purpose AI system”: “generally applicable functions”
  • “safety component”: “malfunctioning which endangers health and safety of persons or property”
  • “performance”: “achieve its intended purpose”
  • “testing in real world conditions”: “shall not be considered as placing the AI system on the market”

Another point of interest is the definition of a provider. This is a definition that clarifies who these rules apply to. You are a provider if your name is on the product, or you have substantially modified somebody else’s product, or you have substantially modified the intended purpose of somebody else’s product, or if your service becomes part of a general purpose AI system.

This is interesting because foundation models are being enhanced with the capability to call out to external systems. I wonder whether these external systems would fall within the scope of this legislation?

Scope

Right at the start, the act clarifies the scope of the legislation. The most interesting element in this section is that it includes any AI tool consumed within the EU. So it doesn’t matter if you’re based in the US; these laws will still apply to you if you have customers in the EU.

It also includes any general purpose AI. To recap, this includes large language models, image and audio generation models, or any other model that could be deemed generative. This inclusion overrides any subsequent discussions about the applications of the AI later on. If you’re producing a language model, for example, and you have users in the EU, then you must comply.

The legislation excludes military or national security applications.

And in a recent addition it also excludes research and development projects and non-professional projects. These weren’t excluded from the original scope of the legislation and the open source community quite rightly threw their hands up in the air. But of course, once a business leverages an open-source project, it may become liable.

One thing that is unclear here is whether this scope includes internal applications. If you only serve internal users (i.e. not the public), then it could be considered to fall under the R&D exemption.

Structure

Now let’s talk about the underlying structure of the legislation. The act is based upon an underlying appraisal of risk for specific applications. There are four levels of risk: minimal risk, limited risk, high risk and prohibited.

Minimal Risk

The first and lowest level is minimal risk. It is intended to represent situations where it would be very unlikely to cause harm. This level comprises all of the applications of AI that are not explicitly specified in subsequent sections of the act.

Limited Risk

Next we have the limited level of risk. This includes all generative AI technologies, which carry a basic requirement to clearly explain that the individual is communicating with a system and not with a real person.

However, forward-looking businesses might want to investigate how they can improve their transparency obligations beyond what is expected. For example, you might take a leaf out of the UK financial services playbook and begin to explain to the user why decisions have been made and what they can do if they would like to contest it.

High Risk

In the third level, you will find the high risk set of applications. I will detail what this category includes shortly. Much of the legislation is spent introducing legislative procedures for high-risk applications.

Prohibited

And finally, the last element in the risk-based approach represents prohibited AI applications. These are deemed to be unacceptable in terms of the likelihood of harm. Again, I will list these shortly.

Penalties

So what happens if you break any of these rules?

The legislation outlines fiscal penalties for any company that does not comply, and says that they should be “proportionate and dissuasive.”

The highest penalty goes to those who use AI in applications that are designated as prohibited. Fines can extend up to a maximum of 6% of global revenue or €30 million, whichever is higher.

For a high risk violation fines can be as high as 4% of global revenue or €20 million, whichever is higher.

Finally, if a business is found to be misleading the relevant authorities, it can receive a fine of up to 2% of global revenue or €10 million, whichever is higher.

So these fines are significant and are in line with other legislation such as GDPR. It’s interesting to note, however, that no criminal penalties for violations are included, unlike the UK’s Online Safety Bill, for example.
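
To make the “whichever is higher” rule concrete, here is a minimal sketch in Python of how those caps work out. The revenue figure and helper function are purely illustrative; the percentages and fixed amounts are the ones quoted above from the draft.

```python
def fine_cap(global_revenue_eur: float, pct: float, fixed_eur: float) -> float:
    """The maximum fine is the higher of a percentage of global revenue
    or a fixed amount ("whichever is higher")."""
    return max(global_revenue_eur * pct, fixed_eur)

# Illustrative company with €1bn global annual revenue.
revenue = 1_000_000_000

print(fine_cap(revenue, 0.06, 30_000_000))  # prohibited use: 60,000,000.0
print(fine_cap(revenue, 0.04, 20_000_000))  # high-risk violation: 40,000,000.0
print(fine_cap(revenue, 0.02, 10_000_000))  # misleading the authorities: 20,000,000.0
```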

Prohibited AI Applications

Now let’s move on to the section that everybody is interested in hearing about: what classifies as prohibited.

There have been a few interesting developments around the prohibited section. It’s got more complicated over time. The French, in particular, required exceptions for national security use.

The current legislation has three distinct groups of entities that specific laws apply to. The first group is everyone; every single user. The legislation prohibits the use of AI to distort the behaviour of EU citizens to the point where it causes harm.

The second group concerns specific demographic groups. The legislation prohibits AI systems that take advantage of the vulnerabilities of specific demographic groups.

Finally, there’s a third group that prohibits specific AI applications in law enforcement. In particular, it will be illegal for law enforcement agencies to use biometric identification techniques. This is a very interesting proposition that has its opponents. Let’s see if it makes it into the final draft.

High-Risk AI Applications

The bulk of the legislation introduces procedures to deal with high-risk AI applications.

An application is classed as high risk if it is deemed high risk by other legislation, or if it is part of a safety critical system, or if the application uses or is used in the following domains:

  • biometrics,
  • critical infrastructure,
  • education,
  • employment,
  • essential services,
  • law enforcement,
  • borders and migration,
  • justice…

The following use cases were added in the latest draft:

  • credit scoring,
  • use in life or health insurance.

There’s also a stipulation in there that announces that these high-risk applications may change over time. So just because it’s not there now doesn’t mean it’s not going to be there in the future.

Related to the list of high-risk applications, the legislation also includes several tidbits that are easy to miss. Here, I’ve included them as quotes because they’re sometimes quite hard to interpret.

  • “Distributors shall verify conformity markings and declarations.” – Article 27
  • Providers can develop applications in “regulatory sandboxes” and test in “real world conditions” – no definition of what a sandbox is – Article 53-54
  • “SMEs” get “priority access” to “regulatory sandboxes”, fees are commensurate to size – Article 55
  • “The Commission shall… set up and maintain an EU database… [of] high-risk AI systems” – Article 60
  • “Serious incidents shall be made… not later than 15 days after the providers became aware” – no definition of a serious incident – Article 62.1

High-Risk AI Application Requirements

If you are developing an AI Application that operates within these high-risk use cases then you must follow a set of procedures to ensure that you have de-risked your high-risk application.

All of these high-risk requirements are evaluated by a compliance review that is yet to be determined. There are some specifications as to what must be included in a compliance review but it’s not yet clear how it is going to be performed, or by whom.

I have taken all of the requirements and categorised them in engineering-friendly language.

I have called the first set of requirements risk management because this includes procedures that attempt to identify and control risks. They specify that you should perform tests to ensure effectiveness, and they introduce special requirements for any high-risk AI application that is going to be used by under-18s.

The second requirement falls under data governance. Here there are recommendations that you should have good quality data and specific review processes to ensure that your data is fit for purpose.
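
The act does not prescribe what such a review process should check, so the following is only a sketch of the kind of automated data-quality checks a team might run; the thresholds and checks are my own assumptions, not requirements from the legislation.

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame, required_columns: list[str]) -> list[str]:
    """Run a few illustrative data-quality checks and return any findings."""
    findings = []

    # Schema check: are all of the columns we expect actually present?
    missing = set(required_columns) - set(df.columns)
    if missing:
        findings.append(f"missing columns: {sorted(missing)}")

    # Completeness check: flag columns with a high proportion of nulls.
    for col, rate in df.isna().mean().items():
        if rate > 0.05:  # the 5% threshold is an arbitrary example
            findings.append(f"column '{col}' is {rate:.0%} null")

    # Duplicate check: repeated rows often indicate an ingestion problem.
    if df.duplicated().any():
        findings.append(f"{int(df.duplicated().sum())} duplicate rows")

    return findings
```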

I call the third category quality management. This includes a wide range of documentation, including design documentation, product documentation, operational documentation, risk documentation and proof of conformity. They also state that these documents should be kept for at least 10 years after publication.

The fourth category is MLOps. The legislation states that the AI systems must be overseen by “natural persons”, not another AI system. And there should be systems in place to ensure that your application is robust, monitored and logged. Interestingly, they include specific requirements for logging and state that all logs and metrics should be stored for at least 6 months. Six months’ worth of logging and monitoring data is a lot of data!
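
As a sketch of what per-prediction logging could look like, here is a small standard-library Python example. The field names are assumptions, and the six-month retention itself would normally be enforced by a lifecycle policy in whatever log store you ship these records to.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

def log_prediction(model_version: str, inputs: dict, output, latency_ms: float) -> None:
    """Emit one structured, timestamped record per prediction so it can be
    retained (and later expired) according to your retention policy."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,       # or a hash/reference if the inputs are sensitive
        "output": output,
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(record))

# Example usage with made-up values.
log_prediction("credit-model-1.2.0", {"age": 42, "income": 55000}, "approve", 12.3)
```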

Finally we have a section called MLSecOps. The legislation states that the AI application must be secure. They’ve included technical language like: feedback loops, data poisoning and adversarial attacks. Again it’s interesting that they’ve included specific technical language here but I would interpret that to mean that you should attempt to mitigate any potential attack vector or any potential cause of instability.
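
The act names the threats rather than the defences, so here is just one illustrative mitigation rather than a complete answer: a simple guard that rejects inputs falling outside the per-feature ranges seen during training, which helps against malformed or adversarially crafted requests. The function and example values are assumptions for illustration only.

```python
import numpy as np

def validate_input(x: np.ndarray, feature_mins: np.ndarray, feature_maxs: np.ndarray) -> None:
    """Reject inputs that fall outside the per-feature ranges observed in
    the training data before they ever reach the model."""
    if x.shape != feature_mins.shape:
        raise ValueError(f"unexpected input shape {x.shape}")
    if np.any(x < feature_mins) or np.any(x > feature_maxs):
        raise ValueError("input outside the expected training range")

# Example: ranges recorded at training time, checked at inference time.
mins, maxs = np.array([0.0, 0.0]), np.array([120.0, 1_000_000.0])
validate_input(np.array([42.0, 55_000.0]), mins, maxs)  # passes silently
```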

Low-Risk Applications

In the lowest-risk category, there used to be a definition of how developers should classify low-risk applications. They’ve removed that language and replaced it with “certain AI systems”.

This list now includes:

  • biometric categorisation systems,
  • emotion recognition systems,
  • any AI application that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places, other entities or events.

If you fall under that list, then you have one very simple requirement.

The requirement is that the user is informed that they are interacting with an AI system. I would suggest that all businesses using AI should be transparent from the beginning. It’s quite likely that the list of low-risk applications is going to increase. So be ahead of the curve and be as transparent as you can now.
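
As a minimal sketch of what that disclosure could look like in a chat-style API, assuming a Flask service and a hypothetical generate_reply function standing in for your model:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

AI_DISCLOSURE = "You are interacting with an automated AI system, not a human."

def generate_reply(message: str) -> str:
    # Placeholder for a real model call.
    return f"Echo: {message}"

@app.post("/chat")
def chat():
    data = request.get_json(silent=True) or {}
    return jsonify({
        "reply": generate_reply(data.get("message", "")),
        "disclosure": AI_DISCLOSURE,  # surfaced to the user in the UI
    })
```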

Final Thoughts

I’ve tried to condense the legislation into a handful of slides. I hope I’ve achieved that, but of course this is just my interpretation. If you have any questions about the legislation then I recommend that you either:

  1. get in touch with Winder.AI, and we can help walk you through the legislation, or
  2. read the legislation yourself.

For each slide I’ve included citations to the specific article in the legislation where you can find the wording. I’ve also included links to the legislation and other interesting documents at the end.

In summary, the EU AI Act is a groundbreaking piece of legislation that places wide-reaching burdens on the developers of AI systems. But the burdens are not insurmountable. In general, they make sense. It’s likely that you’re doing this already in your organisation. It’s also important to know that this legislation is at least two years away from becoming a legal requirement. So you have time.

In two weeks’ time I’m presenting another webinar, which is an interview with someone who was actually involved in the process of developing the legislation: Javier Campos. If you found this presentation interesting then I’d definitely recommend that you attend that as well.
