Unchecked AI will lead us to a police state
A staff member of the European Union's border agency Frontex operates an aerostat balloon system equipped with high-tech surveillance cameras in Alexandroupolis, Greece, August 10, 2021. REUTERS/Alexandros Avramidis
Despite the lack of public support, European governments are planning to push back against legal limits on AI in law enforcement
Sarah Chander, senior policy adviser, European Digital Rights (EDRi)
In the world's most substantial piece of legislation on artificial intelligence, the European Union is now deciding which rules will apply to police technology.
AI for the police (state)
Across Europe, police, migration and security authorities are seeking to develop and use AI in an ever-wider range of contexts. From the planned use of AI-based video surveillance at the 2024 Paris Olympics to the millions in EU funds invested in AI-based surveillance at Europe's borders, AI systems are becoming an ever larger part of the state's surveillance infrastructure.
We are also seeing AI deployed with the express purpose of targeting particular communities. Technologies like predictive policing, whilst presented as neutral tools in the fight against crime, rest on the assumption that certain groups, in particular racialised, migrant and working-class people, are more likely to commit crime.
In the Netherlands, we have seen the far-reaching consequences predictive policing systems have for Black and Brown young people. The Top-600, a system designed for the ‘preventive identification’ of ‘potential’ violent criminals, was found on investigation to disproportionately over-represent Moroccan and Surinamese suspects.
In migration, there is increased investment in AI tools to forecast migration and assess migration claims in new and absurd ways. EU agencies like Frontex, embroiled in allegations of facilitating pushbacks of asylum-seekers from Europe, are exploring how AI can be used to combat the ‘challenge’ of increasing migration. There is a severe danger that these technologies will be used to predict and prevent movement to Europe, a clear violation of the right to seek asylum.
The growing use of AI in policing and migration contexts has huge implications for racial discrimination and violence. Technologies like AI will only entrench this reality of structural racism, handing police more tools, more legal powers, and less accountability.
Regulate police AI
A growing movement is demanding limits on how the state uses technology to surveil, identify and make decisions about us.
Whilst governments claim the police need more tools to stop crime and maintain order, we ask: who protects us from the police? Who decides the limits on mass surveillance? And where do we draw the line when, for migrants and racialised people in particular, more AI means more police stops, a greater risk of arrest, and an ever-growing threat of violence in interactions with police and border guards?
Checks on state and police power are essential to a safe and functioning democracy. No institution deserves unchecked authority and trust, especially one with tools at its disposal to watch our every move. Further, the introduction of AI technologies brings the encroachment of the private sector into state functions, injecting profit motives into the conversation on public safety.
The call to regulate police AI is echoed in the European Parliament. In June this year, the EU’s democratic arm affirmed the need for legal limits on how police and migration authorities use AI. The European Parliament’s position included a full ban on the use of facial recognition in public spaces and on predictive policing, and expanded the list of ‘high-risk’ AI uses in migration control.
But now, in the final stages of negotiations (the “trilogues”) on the EU AI Act, European governments are planning a drastic scale-back of any limits on law enforcement’s use of AI.
This week, 115 civil society organisations demanded that the EU prioritise safety and human rights over unchecked police power. They called for legal limits on how police and migration authorities use AI, including bans on the most harmful systems: facial recognition in public spaces, predictive policing, and AI used to predict and prevent migration flows.
We need to know when and where the state uses AI to watch us, assess us and discriminate against us. The public must be able to set limits on how police use these technologies. Without such limits, unchecked AI will lead to a police state.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.