Interview: How The EU AI Act Was Born With Javier Campos

by Dr. Phil Winder, CEO

In this webinar, our CEO Phil Winder sat down with Javier Campos to discuss the EU AI Act. Javier is Chief Innovation Officer at Fenestra and the author of “Grow Your Business with AI: A First Principles Approach for Scaling Artificial Intelligence in the Enterprise”, published by Apress.

Javier was involved in the development of the EU AI Act, and earlier in the development of the EU Cookie Law in the early 2010s.

The following transcript has been lightly edited for clarity.

What is the AI Act? What does it attempt to do and why is it being done now?

There are a few countries that have been ahead of the pack with regards to AI legislation, in particular Singapore. I strongly recommend that people take a look at how they’ve done it—I think they’ve done a very good job. Pretty much all countries are now starting to look at this. The reason is, a few years ago, AI started to make more and more headlines, and this got the people in Brussels thinking, “This is going to be big, therefore we need to be a bit proactive.” Very quickly I could hear them talking in the corridors saying, “We need to do a GDPR of AI”—that was the internal chat—meaning they felt we needed quite a comprehensive law to protect consumers’ interests.

To be fair to EU regulation, when I talk about my views, I think some things are good and some things could be improved. But if you think about GDPR, it is true that the European Union set the global standard. What they did was set the tone, and then other countries followed. Now even California’s Privacy Act is pretty much a copy and paste of GDPR, and the same is happening in Brazil and Australia. So I think from the beginning Brussels realized, “AI is going to be big. We want to protect citizens, let’s do the GDPR of AI.”

The ambitions were quite high, of course. The law has been quite a few years in the making. There have been different versions; it’s been shaped and changed from the first draft. But I think ultimately the objective of the regulation is: how can all European citizens use AI in a safe way—safe for the consumer and safe for the country? So, how do you protect the rights of citizens? You have the human rights, the basic ones on privacy, dignity and so on. It actually draws a lot of the first principles from other legislation and applies them specifically to AI, and it does a number of things to ensure that. The fundamental first principle, which is what my book talks about all the time, is protecting the consumer. And that’s one thing you have to say in favor of the European Union: it is one of the jurisdictions that is really focused on the citizen. Others tend to be a bit more focused on the commercial side. It’s always a fine balance.

How does previous legislation compare to what the EU are trying to do here?

In banking, for example, there is legislation which regulates the automation of the algorithms; this is another level on top of standard regulation. Singapore was looking at tools to check the fairness of AI models. But the main difference really is that the EU is a bit more balanced in terms of its focus on the citizen versus the corporation.

The act takes a risk-based approach as its main structure. Was this the best way to structure it, or were there other directions they could have gone in?

Whenever you want to regulate something, there are always two ways to do it—do you regulate the principles, or do you regulate how it is actually implemented? In other words, the ‘what’ or the ‘how’.

Depending on the area, I prefer to regulate usage because I think it is the most effective way of achieving what you want. So I believe that tackling the ‘what’ is better, but it’s more complicated and more difficult to implement. Imagine I want to regulate how long you stay in your room. The ‘what’ will say, “You shouldn’t leave your room.” If you take a ‘how’ approach, I will have a rule saying, “If you go to the door, or if you go to the window…” I’m giving a very simple example, but you can see it gets very complicated.

Regulating the ‘what’, for me, achieves the principle behind the regulation, but the implementation becomes more difficult, especially in the edge cases. The ‘how’ is very easy to implement, but you can bypass it, and bad actors do.

If you look at the UK, the Financial Conduct Authority tends to do more of the ‘what’, the principle-based regulation. For example, they say to banks, “You have to be fair to consumers,” which is good for consumers, but it becomes very complicated for banks, because what do you mean by fair? I believe the financial industry in the UK is quite well regulated; other regulators tend to be more rule-based.

The EU AI Act is a bit of a mixture. It is very principle-based. For example, it will say you shouldn’t discriminate against people, but this is very difficult to accomplish in terms of the ‘how’. I did a lot of work on this in my last company—I actually have a patent on fairness. It is really complex, because it’s subjective. And also technically, how do you do it? If you do a technical test, is it accurate? The number you’re going to get depends on the framework you’re testing against. There isn’t an absolute framework for fairness, and you need to decide how you define the values. That’s where you introduce the complexity.
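To illustrate why the number depends on the framework you test against, here is a minimal, hypothetical sketch: the same predictions are scored against two common fairness definitions (demographic parity and equal opportunity) and produce different gaps. The data and group labels are invented purely for illustration and nothing here is prescribed by the Act.

```python
# Hypothetical illustration: the "fairness score" you get depends on which
# definition you test against. Group labels and predictions are made up.
import numpy as np

# 1 = positive outcome (e.g. loan approved), one entry per applicant
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def demographic_parity_gap(y_pred, group):
    """Difference in positive prediction rates between the two groups."""
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr("A") - tpr("B"))

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))
# The two numbers differ because the underlying definitions of "fair" differ.
```

On this toy data the demographic parity gap is 0.2 while the equal opportunity gap is 0, so whether the model looks "fair" depends entirely on which definition you chose.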

With the EU Act, I do think they’ve done a fairly good job. It’s a good balance between setting the principles and outlining exactly what to do, with some methods, which is unusual; you tend to get either one or the other. Another positive thing is that discussions are still ongoing. For me, anything that tries to regulate the technology itself is wrong. Instead, you should regulate principles and usage. In AI you should regulate by saying, for example, you shouldn’t use AI for bioengineering or for tests with biological material, because that is an unacceptable usage. I don’t think going after the technology itself is very helpful; you cannot enforce it, and other countries will do it anyway. I think the regulation is a good balance between setting the principles and protecting citizens against things like discrimination and mass surveillance.

Mass surveillance is particularly complex because many countries don’t want private companies to do it, but want loopholes for their security forces to, they would argue, keep their citizens safe. I know, for example, the French government shaped quite a bit of the early drafts on behalf of its security forces. The first ever draft, which I don’t think was ever published, had very, very strict rules to limit even governments from any mass surveillance. Some big security agencies in countries like France, and other big countries in Europe, pushed back saying, “No, no, we need these powers.”

Phil Winder:

That’s fascinating, because going back to what you said previously about the differences between regulating specific applications versus specific implementations, there are some applications that are explicitly listed and excluded. Military use, for example, is excluded from the legislation. Presumably that’s their influence.

Interestingly, domestic policing is almost the other side of the same coin: it is included for many applications. There are still some exceptions in there, but it’s treated much more restrictively than military use, for example.

What do you think about their attempts to legislate for generic AI solutions?

Javier Campos:

That part was introduced later on as a patch, and I would argue that it doesn’t work very well. One of the first things with any law, any contract, whatever, is to agree on the definitions. It was extremely painful defining what AI is. You can roughly talk about it and say, “Look, AI is anything where a machine can mimic human behavior.” That definition as a broad theory is OK. But when you get into the details, it is really difficult.

One of the big debates was distinguishing machine learning from traditional mathematical or statistical models. For example, is linear regression machine learning? You could spend hours debating it—some people say yes, some people say no—and both have a bit of a point, but personally, I don’t think it is. With the first definition that the EU coined, they were trying to be too broad, effectively saying that everything is ML.

You might ask, “Why would I care?”, but it has big implications. Because if you say any mathematical algorithm is AI, all of a sudden this law applies to everything. Pretty much any application you have running today, from CRM systems onwards, will have some sort of basic algorithm to calculate, for example, price optimization. When you call your insurance company, as soon as they pull your data they will calculate your price. That’s an algorithm. Is that AI? The problem is that, depending on the definition, everything can fall within scope. The same happened with foundational models. The definition now is quite generic and has loopholes.

You saw when ChatGPT was in early release, Italy very quickly banned it, and then they went back on that. My view is, this is the problem with this type of legislation, because AI is accelerating at such a speed. If you think about the evolution and the history of AI, even something like generative AI was, five or six years ago, just research in a lab. And now it is everywhere.

By the same token, there are a lot of algorithms which are very different, and they’re going to be in production this year, next year, in the next few years. You can’t really rely on tools built for what I call version one of ML algorithms, like XGBoost and things like that; they are good, but they’re also quite limited in what they do. Even generative AI starts blurring the lines. What counts as copyrighted, and what doesn’t? What is original content, and what is not? So, for me, it seems very open-ended.

Phil Winder:

Yeah, I agree. And it almost looks a little bit hasty that they arbitrarily included the generic AI systems, because it captures a huge segment of the market, and the downstream applications may have nothing to do with the listed risks; there may be zero risk of any sort of harm. They did seem to leave a clause in the act that basically allows them to update that list as they see fit. I do think that it will continue to change moving into the future.

Do you think that this legislation will change how businesses do business in the future?

Javier Campos:

Absolutely. I think this legislation could drive some of the AI innovation out of Europe because of the way it has been written. Since Brexit, the UK has not been within that market. It remains to be seen whether the UK adopts this legislation or not. It’s a bit like it was with GDPR, and I do think this is a potential opportunity for the UK.

For Europe, if you have operations in Germany, France, Italy, Spain and so on, the first thing to know is that there is a list of use cases. You need to check whether whatever model your company is using falls under one of these buckets. Of the four or five buckets, there is one that is forbidden, so if you are in there, you simply can’t use that AI in Europe—it is very simple.

If you’re in another area, you need to look at what your company does and how many models it has. But you do need to be careful because, as I said, the law is quite open-ended in terms of the definition, so what you might think is not AI, the law might define as AI. So I would double-check, just to be sure. The way to do it is to check what use case you are working on, then check the list to see where that use case falls. If it’s forbidden, it’s very simple. If it’s very low risk, there is not much to do. The question is when it is in the middle; then you might need to do things. If it’s a category which is important, then there is documentation you need, and there are extra checks you need to do. And that’s why, for the technical teams, if your use case falls under these categories, you need to comply with the legislation in the countries where you operate.
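As a rough illustration of that triage process, the sketch below maps a use case to an assumed risk bucket and the kind of follow-up work it implies. The bucket names, example use cases and obligations are simplified assumptions for illustration, not the Act's exact legal taxonomy or wording.

```python
# A minimal sketch of the triage described above: map each AI use case to a
# risk bucket, then to the obligations to investigate. The buckets and the
# example use cases are illustrative assumptions, not legal definitions.
PROHIBITED = {"social scoring", "untargeted facial scraping"}
HIGH_RISK = {"credit scoring", "recruitment screening", "insurance pricing"}
LIMITED_RISK = {"chatbot", "content recommendation"}

def triage(use_case: str) -> str:
    if use_case in PROHIBITED:
        return "Forbidden: cannot be deployed in the EU."
    if use_case in HIGH_RISK:
        return ("High risk: documentation, risk management, human oversight "
                "and conformity checks to investigate; verify per-country guidance.")
    if use_case in LIMITED_RISK:
        return "Limited risk: transparency obligations (e.g. disclose AI use)."
    return "Minimal risk: little to do, but re-check how the law defines AI."

for case in ["credit scoring", "chatbot", "social scoring", "demand forecast"]:
    print(f"{case}: {triage(case)}")
```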

The other thing to know, and this is typical with European laws, is that it is the responsibility of every member country to implement them. Which means that implementation actually varies slightly. This happens today: with GDPR most people think, oh, it’s just one European law. But when you look at the details, the regulators in, say, the Nordics tend to be quite a lot more strict in the way they interpret and implement it than those in Spain or Italy. Meaning some borderline cases are okay in Spain, but are not okay under Norwegian law. This only applies to the borderline cases though, which is why the law works: for the core cases, pretty much every member state is the same.

So when it comes to the more borderline cases, it is worth seeing where you are operating, then looking at that particular country and how they’re going to implement it. For example, Germany has published how they’re going to implement the regulation, France is going to do things differently, Spain differently again, and the UK will do its own thing.

And then the other variable is, which regulators are involved? Every country will have different regulators. Every country has some sort of data protection authority, many of which were set up because of GDPR. In the UK data is protected by the ICO, in Germany it will be their data regulator, and so on. I think companies need to familiarize themselves with how each country is going to implement it and the timelines they will work to, because there will be differences.

Phil Winder:

Yeah, I think that’s interesting, and I’m not an expert on this by any means, but I did read in the text that the main inclusion criterion is that it covers any output that is consumed within the EU. If we imagine that we’re a company based in the UK or the US or somewhere like that, and the output of our model is consumed by someone in the EU, then that company does fall under this regulation.

How do you think that we can satisfy the requirements when potentially there are different implementations in different countries?

Javier Campos:

That’s a good point. If I’m operating in Germany and I use a product which uses AI, even if that product is built in Singapore or the U.S. or China, if the output of that product touches citizens in the EU, it puts pressure on me to validate and do due diligence on the third parties I use. I guess you have to do the testing on behalf of the third party. But the interesting thing is that the party responsible for ensuring that the AI is compliant is actually the company that deployed that AI in the European countries. Because, of course, if you create a global product, you don’t know how your consumers are going to use your technology.

So if I buy a product that has been developed in China, it is my responsibility to ensure that it’s compliant. And if it’s not, I’m going to be fined. The other question is the supply chain, and I know some people are redoing their supply chain contracts in terms of liability. So if I’m using a product from China, and the supplier does something I wasn’t aware of and I’m liable, they should be liable. The reality is you need to understand which countries you are in. But it’s a bit like GDPR: you go to the lowest common denominator. In my last company, for example, I deployed financial banking products across Europe, and for some products the Nordic version was very different from the Spanish version because of the implementation of GDPR; this is not uncommon.

I think with AI, because it’s so new, we don’t know how it’s going to work yet, as some countries still haven’t clarified how they’re going to implement it. We haven’t seen any cases. It’s like GDPR: the legislation is there, the regulators get on board, more or less, and then as soon as a couple of cases go through the courts, that’s when we see how these things work. The penalties of this law are very much aligned with GDPR. With GDPR there was a lot of uncertainty at the beginning. Companies reacted, and if you look, many companies at the beginning of GDPR were very, very restrictive.

Then as they saw how things developed, they opened up a bit. Here it’s the same principle: be careful at the beginning, protect yourself until you see a few cases going through the courts, and then you’ll see what they want to do and what the attitude of regulators is—both at the European level and at the country level.

Phil Winder:

Yeah, okay. I have spoken to some really big companies that are already tailoring their AI solutions and products, hyper-tailoring them to specific jurisdictions. One example they gave was the fact that in India it is illegal to show a picture of the Indian flag in the wrong orientation. It’s got to be the correct orientation; it’s technically illegal not to do that. And so, for some of their products, they had to build models that were effectively capable of understanding that fact and making sure that they didn’t output any images of the Indian flag the wrong way around. I can certainly see that there will be different AI products being built for different jurisdictions here as well.

Okay, so let’s start to wrap it up now then. I think the main thing that our listeners will be interested in is really how much should I care about this? What should I start thinking about now? Should I start worrying now? And maybe you could talk about what you’re thinking about doing at Fenestra? I know you have AI products that you sell as well. What are you personally thinking about at the moment?

How much should I care about this?

Javier Campos:

It’s all about making sure you check the products your company produces: what use case do they fall under in this classification? Then check the requirements for that classification. How much you need to do depends on the classification, but there are two areas where people should start thinking now. One is transparency: how do you make decisions based on algorithms? The other is when you need to include a human in the loop. Even when the algorithms can do everything, you might have to do a bit of process analysis.

Banks do this regularly when you apply for a financial product like a credit card. For a relatively small amount, an algorithm will do a check, and if the case is very, very clear, let’s say you have an amazing credit score, a very good financial history, and it’s a relatively low amount compared to what you have, the machine will approve it; or if it’s really bad, it will decline automatically. But when things are a bit complicated, they refer it to a human. I think people should be thinking along those lines, so if you’re a bank doing something critical like a credit check, or selling an insurance product, you’ll need to have extra documentation and transparency on the algorithms.
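A minimal sketch of that human-in-the-loop pattern might look like the following: clear-cut applications are decided automatically, while borderline ones are referred to a person. The thresholds, fields and function names are invented for illustration.

```python
# Sketch of the human-in-the-loop pattern described above: clear-cut cases
# are decided automatically, borderline cases are escalated to a reviewer.
# Thresholds and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int      # e.g. 0-1000
    amount: float          # requested credit
    annual_income: float

def decide(app: Application) -> str:
    small_relative_to_income = app.amount < 0.1 * app.annual_income
    if app.credit_score >= 800 and small_relative_to_income:
        return "auto-approve"
    if app.credit_score < 400:
        return "auto-decline"
    # Anything in between is ambiguous: escalate to a human reviewer and
    # record the inputs so the decision can be explained later.
    return "refer-to-human"

print(decide(Application(credit_score=850, amount=2_000, annual_income=60_000)))
print(decide(Application(credit_score=350, amount=5_000, annual_income=20_000)))
print(decide(Application(credit_score=620, amount=15_000, annual_income=40_000)))
```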

And the other big one is, you need to check the algorithm’s implications for fairness and discrimination. This is a complex area for two reasons. The first one is that fairness is subjective. This is something which people sometimes struggle to understand, but believe it or not, in every single case you have to bring some values. I always give a couple of examples. Culturally, we have different values depending on where we live. Even within Europe, countries have slightly different values, which will impact your perception of what is fair or not fair.

A good example of different values is the trolley-type of experiment. Imagine a self-driving car which has a big problem: it loses all control, it’s doing 100 miles per hour, the brakes don’t work, there’s a zebra crossing, and on the left there’s a baby and on the right an elderly person. Then you ask the question: “Look, this has happened, it’s really bad, I cannot brake. Unfortunately, I have to decide whether I run over the baby or run over the elderly person, which is likely to have very bad consequences for either of them. Which one do I choose?”

When you run this with people of different values, if you do it in the west, in Europe, the UK or the U.S., most will say: well, the baby has all its life ahead, and sadly the elderly person is nearly done, so run over the elderly person. But funnily enough, when you run the same thing in Asian cultures, it’s the opposite. There is so much respect for the elderly: the elderly person has a life; the baby doesn’t even know yet. It gives you an idea of how massively varied the interpretation is. And by the way, with self-driving cars, I believe this is an ethical bombshell, because at some point you have to configure the software. People don’t want to think about it.

But in a self-driving vehicle, in the case of an accident, you might have another dilemma. This example is a bit more subtle and more cynical. If there is an accident and the car cannot avoid it, you can configure it as a setting: do I prioritize the life inside the car or outside the car? This is fascinating because it’s an ethical bind. More so in the west, but more or less globally, if it’s your car, you will say no, no, of course prioritize the life inside the car, because it’s my car. But actually, if you look at the equation from society’s point of view, maybe outside the car there are babies, or whoever; maybe that’s who you should prioritize. It’s a fascinating issue that we’re going to face very soon.

I think, as a summary: check the controls and start thinking about the overall process, model transparency, how you capture your data, how you train it. And transparency is a big thing. At any point in time, if somebody asks, “Why did the system make that decision?”, you should be able to give a transparent answer. The last thing I will say about this is that all these points are not just about AI. In the UK, there is a big controversy over the Post Office case that began a few years ago. It’s not AI, but it was an IT system; it was software that had some bugs, it provided the wrong information, and as a result people went to jail and some people even committed suicide, and there was a cover-up. I think the main challenge there is not that the machine made the mistake—it’s the whole human system around it.

The point is, these things are not just about AI. Any time you automate, any time there are humans controlling machines in a system, there are a lot of things which in the past we haven’t done—like transparency, looking at discrimination, how the decisions impact different protected groups. That happened in the past, but nobody was looking at it. Now, because we are looking, we notice it, which means it’s an opportunity to improve. But my point is these things are not happening just because of AI. My advice for the audience is: look at your models, transparency, how you get the data. And at some point, can you really explain what you did, what the model did? If you can answer that, you will be in a good place. If you cannot, or don’t want to, you should review your processes.
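One practical way to prepare for the “can you explain what the model did?” question is to log every automated decision with its inputs and model version at the time it was made. The sketch below is a hypothetical example of such an audit log; the function, file format and field names are assumptions, not anything mandated by the Act.

```python
# A minimal sketch of a decision audit log: record inputs, model version and
# output at decision time so that "why did the system make that decision?"
# can be answered later. All names here are illustrative assumptions.
import json, hashlib
from datetime import datetime, timezone

def log_decision(model_version: str, features: dict, prediction, reason: str,
                 path: str = "decision_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        # Hash of the inputs lets you show later that the record is untampered.
        "features_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
        "reason": reason,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    model_version="credit-model-1.3.0",
    features={"credit_score": 620, "amount": 15000, "annual_income": 40000},
    prediction="refer-to-human",
    reason="score between auto-approve and auto-decline thresholds",
)
```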
