What does the EU's AI Act mean for human rights?

A security camera is seen at the main entrance of the European Union Commission headquarters in Brussels July 1, 2013. REUTERS/Francois Lenoir

What’s the context?

The rules put Europe at the forefront of global efforts to regulate artificial intelligence tech, but do they go far enough?

  • 'Historic' deal hailed as setting global benchmark
  • Rights groups sound alarm about policing, migration uses
  • Export of AI tech to non-EU countries raises concern

BRUSSELS - The EU reached a landmark deal on Friday on rules to govern the use of artificial intelligence (AI) - the world's first comprehensive regulations for tools that are already transforming everyday life, from workplaces to law enforcement.

Europe's AI Act was hailed as "historic" by European Union lawmakers, who said it set a global benchmark.

"We (delivered) a balance between protection and innovation, we have all the safeguards, provisions and the redress that we need," one of the lead negotiators, European lawmaker Dragoş Tudorache, told reporters.

But some digital rights campaigners say the rules do not go far enough to protect against discriminatory AI systems and mass surveillance.

Here's what the new AI rules could mean for human rights:

What is the AI Act?

The provisional deal on rules governing the use of AI aims to ensure that AI systems in the EU are safe and respect fundamental rights and EU values while boosting innovation. 

It establishes obligations for the use of AI based on potential risks, categorising AI systems according to their capacity to cause harm to society - the higher the risk, the stricter the rules, with some systems banned outright within the EU.

Negotiators agreed limited exceptions to these rules for law enforcement to use remote biometric surveillance in the case of national security threats.

Companies not complying with the rules could face fines ranging from 7.5 million euros or 1.5% of global turnover to 35 million euros ($37.7 million) or 7% of global turnover, depending on the violation and the size of the company.

The new rules will be subject to further technical discussions to hammer out the details in the next few weeks. They are due to enter into force early next year when the deal is officially ratified and will apply two years after that date.


How do the rules aim to protect human rights?

Under the deal, some AI systems are banned from the EU on the grounds that they pose an unacceptable risk to basic rights and freedoms.

They include emotion-recognition tools in workplaces and educational institutions, biometric categorisation systems that infer sensitive attributes such as sexual orientation, and some forms of predictive policing.

The use of real-time remote biometric identification, such as facial recognition, in public places is prohibited, except by law enforcement officials to prevent terrorism and to search for victims or perpetrators of serious crimes.

High-risk AI systems must undergo a fundamental rights impact assessment before being introduced to the EU market, and public entities using high-risk AI must register them on a database. Citizens will be able to demand explanations about AI systems' decisions that impact their rights.

What do rights groups say?

While rights groups said the provisional deal is a step forward, they warned that it lacks safeguards against the most dangerous uses of AI and said exemptions for law enforcement, border control and migration management were ripe for abuse.

"You only have to look at who governments around the world are claiming are terrorists - from human rights defenders and journalists to teenage climate activists," Ella Jakubowska of Brussels-based digital rights group EDRi told Context.

The Act would allow the use of high-risk AI systems such as biometric identification in border policing - a provision rights activists say creates a double standard, with one set of rules protecting EU citizens and another applying to migrants and asylum seekers.

There is also concern about the potential use of emotion-recognition AI systems - a sort of AI-based lie detector - by police and immigration authorities, as well as AI forecasting models to predict migration flows.

Another criticism is that the rules will apply only within the EU. AI systems developed inside the bloc may still be exported, with no requirement to consider how that technology might contribute to human rights abuses elsewhere.

"If Europe wants to be a standard-setter globally on a regulation that is human rights-centred, I don't think it's sending the right message," said Mher Hakobyan, Amnesty Tech's advisor on AI regulation.

What other legislation exists globally?

The speed at which AI is developing has complicated lawmakers' efforts to agree rules governing its use, meaning there are few regulations at national or international level.

At a meeting in Britain in November, 28 countries - including the United States and China - signed a declaration to encourage transparency and accountability from developers of AI technology to mitigate potential harms.

In October, U.S. President Joe Biden issued an executive order requiring developers of AI systems that pose a risk to national security, the economy or public health to share the results of safety tests with the government before those systems are released to the public.

China has established a series of interim measures to beef up security requirements for AI products. In October, it published a list of security requirements for services using generative AI, such as OpenAI's ChatGPT.

(Reporting by Joanna Gill; Editing by Helen Popper.)


Context is powered by the Thomson Reuters Foundation Newsroom.
