Journalism from the Thomson Reuters Foundation
Context

Know better. Do better.

tech and society

Dataveillance

AI, privacy and surveillance in a watched world

Photo of Samuel Woodhams

Hi, it’s Sam. Migrants and asylum seekers are facing unprecedented levels of digital repression. This week, we’ll look at some of the technology being used across Europe, and explore how the misuse of digital tools by border forces threatens us all. You can also check out Context's special series looking at how this tech is being deployed across the world, from the United States to Jordan to Myanmar.

Since 2017, the European Union has spent over 340 million euros ($489 million) on research into border AI technologies. From advanced surveillance drones to automated decision-making tools, virtually every stage of the migrant journey has become digitised.

Contrary to the EU’s techno-solutionist rhetoric of “smartening” borders, this technology is causing harm on an unparalleled scale. It plays an integral role in the securitisation of migration management, and is facilitating unlawful pushbacks, increasing discrimination, and repeatedly violating people’s fundamental rights. The tools developed in this unregulated digital testing ground threaten to reshape global immigration policies and to impact broader society.

Dutch and Bulgarian border police officers stand near a vehicle equipped with a thermal camera during a patrol a day after the official launch of the European Union's Border and Coast Guard Agency near Kapitan Andreevo border crossing on the Bulgarian-Turkish border, Bulgaria October 7, 2016. REUTERS/Stoyan Nenov


Automating discrimination

Predictive technologies and automated decision-making tools have become increasingly common in the context of managing migration. But they are also capable of radically entrenching racial discrimination. And some, it would seem, are discriminatory by default.

The EU’s EUMigra Tool is designed to predict migration flows and detect “risks of tensions related to migration” by analysing data from online news, TV and social media. However, an internal review of the project last year suggested it may lead authorities to make discriminatory decisions that harm already vulnerable groups. “Discrimination may even become structural,” it said.

In an open letter published in September, a group of civil society organisations called for the project to be scrapped, citing its clear societal risks and its potential to further criminalise migration. Unfortunately, this tool is far from unique in facing accusations of increasing migration-related discrimination.

The overreliance on algorithmic decision-making tools and predictive technologies is entrenching power imbalances between those who design and deploy the tools, and those who are subjected to them.

This AI divide not only exacerbates inequity at a local level, but may also have significant international ramifications, negatively shaping the future of border controls around the world.

A member of joint units of Serbian military and police observes the area on thermal camera near the Serbian-Bulgarian border near the town of Zajecar, Serbia, September 27, 2016. REUTERS/Marko Djurica


Securitisation of border controls

As the EU spends ever-increasing amounts of money on the research, training and development of new technology to deploy at its borders, it is providing a huge stimulus to defence companies with poor human rights records.

BAE Systems, for example, was found to have sold sophisticated surveillance technologies to governments across the Middle East. Now, it is responsible for transforming the UK’s digital border strategy. Meanwhile, surveillance drones purchased by the EU to locate people crossing the Mediterranean were previously used by the Israeli army during military assaults on the Gaza Strip.

Crucially, these drones appear to be making conditions worse for many people on the move, with growing evidence indicating that they are being used to covertly monitor people and forcibly push them back from the EU’s borders.

By contributing to the growth of international security and defence markets, the EU is not only misusing technology but also propagating it, ensuring these companies continue to thrive despite their detrimental impact on human rights.

A Greek security member holds a drone as he patrols near Turkey's Pazarkule border crossing, in Kastanies, Greece March 12, 2020. REUTERS/Florion Goga


Exporting surveillance technology abroad

Since the so-called migrant crisis of 2015, the EU has also become an active exporter of surveillance technology. Much of this has been channelled through the European Border and Coast Guard Agency, known as Frontex.

It is the best-funded agency in the union and, along with other agencies, has actively exported its vision of high-tech migration management internationally. In 2020, for example, Privacy International revealed that the agency had shown authorities in Libya how to secure digital evidence and acquire biometric data, including from “children and people with vulnerabilities.”

But there are indications this may change. Last month, the European Ombudsman found that the European Commission had failed to carry out the necessary human rights due diligence in its management of the Trust Fund for Africa, which was responsible for the transfer of surveillance equipment and training across the continent.

However, as is often the case, this intervention appears to be too little, too late.

A staff member of European Union's border agency Frontex operates an aerostat balloon system equipped with high tech surveillance cameras, in Alexandroupolis, Greece, August 10, 2021. REUTERS/Alexandros Avramidis


Initially, there was optimism that the EU’s AI Act would help rein in the unregulated use of these advanced tools. But the first proposal has significant limitations.

Almost 200 human rights and digital rights organisations have called for more to be done to protect everyone from the harmful impact of AI technology, including migrants and asylum seekers. Their demands lay bare the shortcomings of the proposal, such as failing to restrict the use of some of the most invasive technology, including emotion recognition and biometrics.

If these issues are not addressed, the final legislation will carry yet another significant limitation.

The use of digital technologies in the context of immigration and border management is having an unmistakable impact on people attempting to enter the EU. But it’s far from a localised issue. By funding international companies with poor human rights records and exporting technology abroad, the EU is also contributing to the violation of people’s fundamental rights elsewhere.

In time, this process risks propagating further harm and, ultimately, may lead to the creation of technologies and policies that we are unable to reverse.  

Any views expressed in this newsletter are those of the author and not of Context or the Thomson Reuters Foundation.

We're always happy to hear your suggestions about what to cover in this newsletter - drop us a line: newsletter@context.news

Recommended Reading

Judith Sunderland and Lorenzo Pezzani, Airborne Complicity: Frontex Aerial Surveillance Enables Abuse (Human Rights Watch, December 8, 2022)

This immersive investigation reveals how Frontex’s aerial surveillance facilitates the interception of migrants, refugees, and asylum seekers at sea. It highlights the EU’s lack of transparency and shows that its approach is “designed not to rescue people in distress but to prevent them reaching EU territory.”

Petra Molnar, Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up (EDRi, November 4, 2020)

This report analyses migration management technologies and provides extensive detail on the EU’s use of surveillance, automated decision-making tools, and biometrics. It shows how these tools exacerbate existing harms and increase discrimination.

The Border Violence Monitoring Network, The Black Book of Pushbacks (December 8, 2022)

This 3,000-page dossier includes published testimonies of hundreds of people who have attempted to enter the EU. Their accounts shed light on the devastating use of violence at the border and reveal many of the technologies that enable this expansive system of oppression.

Jane Kilpatrick and Chris Jones, A Clear and Present Danger: Missing Safeguards on Migration and Asylum in the EU’s AI Act (Statewatch, May 9, 2022)

This comprehensive briefing was created by a group of NGOs to complement proposed amendments to the EU AI Act. It identifies over 50 EU projects that show the diverse ways authorities are using AI technologies at the border, including proposed border-control tools, verification technologies, and automated analysis systems.

This week's top picks

In Jordan, refugees scan irises to collect aid. But is it ethical?

Biometrics and blockchain technology help Syrians access U.N. cash, but rights groups voice concern over consent and data use

Colombia's Rappi app draws ire of Venezuelan migrant couriers

Riders' complaints about low pay and precarious conditions have prompted a government probe into the Latin American delivery giant

LGBTQ+ Africans fight Twitter hate speech after Musk's takeover

LGBTQ+ Africans use online spaces to gather and speak freely, but say a surge in Twitter abuse may drive many off the platform

Myanmar election will seal military’s digital domination

Myanmar’s military is tightening its hold of the internet, reinforcing its ability to track people’s communications and locations

Workplace surveillance doesn’t just harm workers

Besides economic, health and safety effects on workers, surveillance technologies can also impact labor markets and the economy

 
Read all of our coverage here

Discover more

Thank you for reading!

If you like this newsletter, please forward it to a friend or share it on social media.

We value your feedback - let us know what you think.