The Professor of Ethics and Technology and AI expert shares her experience of helping to shape the landmark regulation.
On 1 August 2024, the European Union’s AI Act came into force. The act establishes a governance framework for artificial intelligence, aiming to mitigate risks to citizens and ensure that fundamental rights are respected.
While some lawmakers have praised the act for introducing much-needed regulation, others have warned that it could stifle the growth of AI in Europe.
Joanna Bryson, Professor of Ethics and Technology at the Hertie School, has been involved in discussions around the act from its earliest stages, contributing to public consultations, advising policymakers, and writing widely about its significance.
We sat down with Joanna Bryson to discuss her pivotal role in shaping the AI Act, the focus of her chapter in the new book Artificial Intelligence and Fundamental Rights. We explored how the act came about, what makes it “beautifully boring”, and why it matters that Europe is leading the way.
You’ve been involved with the AI Act since its early stages. How did it come about, and how were you involved?
Bryson: I’ve been involved in the process from very early on, even before the text was drafted, when discussions were still happening at the OECD. Sometimes I was working hard to contribute, and sometimes I was just in the right place at the right time.
One of the questions I explore in my chapter is how academics can even become part of something like this. People often imagine it’s all closed-door policymaking, but actually, there were many points for public and expert input along the way.
And what’s important to understand is that the process doesn’t end once the law is written. In reality, how it’s interpreted and applied matters enormously, and that work is happening now.
At a meeting I attended after the act was finalised, it was fascinating to watch lawyers from across Europe work together to clarify how to implement it, even finding ways around last-minute clauses pushed in by lobbying from United States companies. It showed me how engaged and collaborative the legal and fundamental rights communities are in making this work.
There’s been debate over whether the act is too restrictive or too weak. How do you see it?
There’s been a surprising amount of disinformation about the act. People are saying it’s a failure or that it’s somehow unworkable. But when I went to that meeting, almost everyone agreed that it’s actually a very solid piece of legislation that achieves what it set out to do.
Of course, you can always nitpick details, but the bigger picture is that this is a law that most people in the field think is good. There was even a moment when lawyers explained to someone from the European Commission that something she thought was a flaw wasn’t actually a problem at all; the legal community had already figured out how to handle it.
So I don’t see the AI Act as too weak or too strict. It’s a serious attempt to give clear structure to an area that was previously full of confusion and hype. And this clarity is exactly what we need.
You’ve called the act “beautifully boring”. What did you mean by that?
Well, if you look at other big EU laws like the Digital Markets Act and the Digital Services Act, they are addressing really visible issues – market concentration, online harms – and they get a lot of attention because they feel dramatic and urgent.
By contrast, the AI Act deliberately strips away the science-fiction hype. People tend to anthropomorphise AI, talking about it like it’s a person or a friend, and some really wild ideas have been proposed. But most of that was kept out of the final text.
Instead, the act just treats AI as what it actually is: a product. That might sound boring, but it’s exactly what was needed. It puts AI back into the framework of product law and product safety. This helps people stop thinking of AI systems as independent actors and start seeing them as corporate products, which is the only way you can regulate them effectively.
It’s not flashy, but that’s its strength. We’ve created something enforceable and grounded that can now be communicated more widely as the starting point for public understanding and future policy.
Is the AI Act influencing other countries or regions beyond the EU?
Oh yes, and in ways people might not expect.
For example, both Brazil and China published draft AI regulations that looked remarkably similar to the AI Act even before the EU had fully agreed on its version. But they could move faster due to the structure of their political systems at the time.
And it’s definitely not just the EU that’s regulating AI; that’s a common misconception. Countries across South America, Australia, India and many parts of Southeast Asia and the Pacific Islands are all developing frameworks. China actually has some of the strictest AI rules in the world right now: when DeepSeek launched, it was probably the most heavily regulated large language model ever.
China has even introduced rules going beyond the AI Act, for example, requiring that all AI-generated content be clearly labelled, which has sent Chinese tech firms scrambling to comply.
So yes, there’s regulation happening everywhere, but my hope is that the EU’s approach has more democratic legitimacy because it’s been participatory. That could make it more robust in the long term.
You can read Joanna Bryson’s chapter, “From Definition to Regulation: Is the European Union Getting AI Right?”, in the book Artificial Intelligence and Fundamental Rights here: https://irdt-schriften.uni-trier.de/index.php/irdt/catalog/book/6
Contact

Joanna Bryson, Professor of Ethics and Technology
Nick Cosburn, Associate | Media Relations