Q&A: Algorithms tested on migrants should worry citizens too


A Customs and Border Protection (CBP) officer works at the CBP building, in El Paso, Texas, U.S. June 14, 2024. REUTERS/Jose Luis Gonzalez

What’s the context?

As governments deploy AI to manage borders, citizens should be wary of this tech too, says researcher.

LONDON - As the number of people fleeing war, poverty and climate disasters reaches record levels worldwide, governments are increasingly turning to digital fixes to manage migration.

In April, Britain said it would use artificial intelligence to speed asylum decisions, arming caseworkers with country-specific advice and automated summaries of key interviews.

The United States is ramping up the use of surveillance and AI tools - from facial recognition to robotic patrol dogs - in its crackdown on illegal immigration.

As part of the Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) research project, Derya Ozkul and a team of academics examined the algorithms used in migration and asylum management.

Ozkul, an assistant professor of sociology at the University of Warwick, spoke to Context about the growing use of automation to control borders and why this may affect citizens too.

What have you learned about the algorithms used by governments in asylum migration?

The Algorithmic Fairness for Asylum Seekers and Refugees project looked at the different uses of algorithms that immigration authorities have been using, or have started piloting, across Europe, including in the UK.

There is currently no automated decision-making system for asylum claims. Asylum seekers have to apply for asylum, then they go through an interview, and there will be someone interviewing them and then deciding on the case.

So that process is not automated in any way in any country. However, we have found certain tools that help caseworkers make the decision.


What sort of tools?

In Germany, for example, they use dialect recognition to identify where the person comes from - if they say they come from Syria, they check the dialect and see if it's Syrian Arabic.

In many countries across Europe, including the UK, they use mobile phone data extraction. So they take the mobile phone data from asylum seekers and automatically generate a report to see whether they are telling the truth about the route they took to come to the country, and whether they are really from the country they say they come from.

In Italy, they have also started using automated transcription software to transcribe what the person is saying.

In the UK, they will start using text-based, AI-generated summaries. So the process itself is still quite heavily done by caseworkers and humans, but they have integrated these automated tools to help caseworkers work more quickly.

(Britain) also made it compulsory for all migrants in the country to digitalise their status, and a lot of people faced technical problems with the system. So basically, anything that goes wrong with your digital status will impact your access to employment, housing, everything.

What are your main concerns about these tools?

It's good that governments are trying to find solutions to make this (asylum) process speedier and give people decisions faster.

But it's a temporary fix. They are not actually solving the problem. It can actually cause lots of problems, like (application) rejections for people whose claims were legitimate.

Technologies like automated summaries of statements are quite dangerous, and they may not be as effective as they say, because if the summary doesn't really reflect the nuances of what the person said in the interview, then it will lead to more delays in the appeal process.

These systems are also very costly.

Why should people care about tech being used on the migrant population?

Migrants are the population that they (governments) can play with because they don't have many rights. They don't have the same capacity to resist as citizens would. This technology is being experimented on migrants.

We need to make people understand that these same technologies can be applied to them as well in the very near future, in courts, GP appointments or anywhere else.

They can have an impact on their lives too.

So we have to think, as a society, about what these tools are doing in general. Even if people don't care about asylum seekers, I think everyone should be thinking about what this tech is doing.

This interview was edited for clarity and length.

(Reporting by Lin Taylor, Editing by Ayla Jean Yackley.)



