What does the AI Act mean for digital rights in the EU?
A robot equipped with artificial intelligence is seen at the AI Xperience Center at the VUB (Vrije Universiteit Brussel) in Brussels, Belgium February 19, 2020. REUTERS/Yves Herman
What’s the context?
As tech experts raise concerns over the risks of artificial intelligence, the EU is inching closer to making the world's first extensive AI rules
BRUSSELS - From homes to transport, culture to policing - artificial intelligence (AI) technology is transforming our everyday lives.
Rights experts warn this rise has come at a significant cost: undermining our privacy, entrenching societal biases, and creating opaque systems that lack accountability.
European lawmakers came a step closer to passing new rules regulating AI tools such as ChatGPT, agreeing tougher draft legislation in a crunch vote on Wednesday. The AI Act could become the world's first sweeping set of rules governing artificial intelligence.
This now opens the door to tough negotiations between EU lawmakers and representatives from the governments of all EU states to find a compromise before year's end.
We spoke to five tech experts about what's at stake for digital rights:
SARAH CHANDER - senior policy adviser, European Digital Rights (EDRi)
"We're seeing AI systems used to surveil and identify people in public spaces, to assess them for 'risk' of committing crime and welfare fraud, to facilitate illegal push backs at borders, and facilitating discriminatory decisions in access to education and employment.
We need bans on the most harmful uses of AI systems, including predictive policing, and a full ban on remote biometric identification in the public space (such as facial recognition).
There needs to be accountability and transparency when a high-risk AI system is used, so that the public is more aware of these uses, and how they will be affected, and people need to have legal mechanisms by which they can challenge harmful AI systems."
VICTORIA ADELMANT - director of the Digital Welfare State and Human Rights Project, New York University
"Carve-outs for law enforcement are problematic because that's where one of the really big and problematic uses of this technology happens. We know that facial recognition systems are a lot less accurate for people of colour. And so it has been leading to wrongful arrests and these kinds of things.
There is talk about (the EU) being the first and setting the global standard, which builds on its past efforts with the GDPR. The hope is that the EU AI Act will have a similar kind of global effect. But none of this is ever happening in a vacuum.
And so the EU might be the first to make an extremely comprehensive attempt at AI regulation. But really, Brazil's AI regulation has been happening concurrently, and China has already passed quite a few different rules in this domain as well."
VIRGINIA DIGNUM - professor in responsible artificial intelligence, Umea University, Sweden
"The main challenge is the difficulty of defining AI...Over-defining AI for regulatory purposes can lead organisations to replace some methods by others, just to be outside the scope of the Act...Under-defining AI has the risk of being similar to trying to regulate 'magic'.
If used properly, regulation is a stepping stone for innovation, not an impediment. It will provide a level playing field for different organisations, and most importantly will be a pointer to the type of innovation that really matters."
SEBASTIANO TOFFALETTI - secretary general at European DIGITAL SME Alliance
"There are two ways the EU AI Act could affect SMEs. On the one hand it could significantly enhance the trust of consumers towards AI systems and prompt the birth of a vibrant human-centric and European AI industry.
On the other hand, it could overburden SMEs with compliance costs, which would mean that the market would be dominated by large foreign companies, mainly from the US but also from China. That's why it is important to get it right.
EU legislation often takes a one-size-fits-all approach, but this is ineffective when regulating fast-paced technologies controlled by dominant market players."
MARK BRAKEL - policy director, Future of Life Institute
"The speed of change is definitely a big challenge. You have Annexe 3 (in the AI Act), which lists all the high-risk applications, and if something is listed on that annexe, plus a few caveats, then you have to abide by a number of high-risk requirements. And yet keeping that list up to date will definitely be a challenge.
When it comes to general purpose systems such as large language models (like ChatGPT), they will find their way into tens or hundreds of applications. Some of them are going to be on that high-risk list and some of them won't be, but you're going to want to capture that at the source."
ChatGPT - OpenAI's text generator
"Overall, the EU AI Act could have a positive impact on digital rights in Europe by establishing clear rules and standards for AI systems that prioritize transparency, accountability, and human oversight.
However, the implementation and enforcement of these rules will be crucial to ensuring that digital rights are protected in practice.
The EU AI Act is likely to have an impact beyond Europe, as companies operating in other regions may need to comply with the regulation if they offer AI products or services in the EU. This means that the regulation could indirectly influence the development and deployment of AI systems globally."
This article was updated on May 12, 2023, to include the latest vote on the AI Act.
(Reporting by Joanna Gill; Editing by Zoe Tabary.)
Context is powered by the Thomson Reuters Foundation Newsroom.