AI will help make 'life-or-death' calls in rammed UK asylum system
People believed to be asylum seekers stage a protest over the condition of their accommodation outside a hotel in London, Britain, June 2, 2023. REUTERS/Peter Nicholls
What’s the context?
Britain will use AI to help process asylum claims but the move could have lethal consequences, rights groups say.
- UK pledges to clear asylum backlog amid record migration
- AI could undermine human oversight in critical decisions
- Governments adopt digital tech to manage borders
LONDON - Britain is hoping to clear a record backlog of asylum claims with artificial intelligence (AI), a move rights groups say outsources life-and-death decisions to dehumanising technology.
As global displacement soars, Britain said it would deploy AI to speed asylum decisions, arming caseworkers with country-specific advice and summaries of key interviews.
It will also introduce new targets to streamline parts of the overstretched and badly backlogged decision-making process.
Migrant charities and digital rights groups say the use of automation could endanger vulnerable lives.
"Relying on AI to help decide who gets to stay here and who gets thrown back into danger is a deeply alarming move," said Laura Smith, a legal director at the Joint Council for the Welfare of Immigrants (JCWI).
"The government should focus on investing in well-trained, accountable decision-makers - not outsourcing life-or-death decisions to machines," she told Context.
The governing Labour Party has pledged to hire more asylum caseworkers and set up a new returns and enforcement unit to fast-track removals of applicants who have no right to stay.
At the end of 2024, the government had 90,686 asylum cases awaiting an initial decision, official data showed.
Most asylum seekers wait at least six months for an initial ruling, and housing them during the delay will cost taxpayers 15.3 billion pounds ($20.40 billion) over the next decade, according to the National Audit Office, the government spending watchdog.
AI biases
In a government-run pilot, fewer than half of the caseworkers who tested the proposed AI summary tool said it gave them the correct information, and some said it failed to provide references to the asylum seeker's interview transcript.
Nearly a quarter said they were not "fully confident" in the summaries provided, and about 9% of the summaries were inaccurate, the pilot reported in April.
But the government wants to go ahead with AI, as the issue of immigration gains ever more traction with disgruntled voters.
"Asylum decisions are some of the most serious that the government makes - the wrong decision can put lives at risk. There are therefore potentially lethal consequences resulting from these faulty summaries," said Martha Dark, founder of tech rights group Foxglove.
"While the government claims that a human will always be 'in the loop' when it comes to making the decision, there are still clearly risks if the human is making that decision on the basis of inaccurate information in an AI-generated summary."
Digital rights advocates point to the tendency of AI tools to generate "hallucinations" - answers or information that look real but are in fact fabricated - making them dangerous to use in critical situations such as asylum claims.
Automated tools can also reinforce biases against certain groups of people, rights groups say, since AI is trained on historical data that can carry forward old prejudices.
In 2020, Britain's interior ministry scrapped a tool that automatically assigned risk scores to visa applicants from certain countries after a legal challenge.
Possible prejudice aside, AI-generated synopses of applicant interviews are also highly dehumanising, said Caterina Rodelli, a policy analyst at tech rights group Access Now.
"People have to undergo so much re-traumatisation with these processes ... and then you reduce it to a summary. So that's a testament to the dehumanisation of the asylum system."
The Home Office did not immediately respond to requests to comment on its proposed use of AI to process asylum claims and what safeguards it will have in place to ensure human oversight.
Record migration
Britain has experienced record migration in recent years, with net arrivals hitting 728,000 in the year to June 2024; most migrants came legally to work or study.
More than 10,000 asylum seekers have also arrived in small boats this year, up about 40% on the same period last year.
The Refugee Council said previous efforts to speed up processing times have led to poor initial decisions, more asylum appeals and a bigger backlog in the courts.
"The use of AI therefore must be carefully considered before potentially life-or-death decisions become a testing ground for the technology," said Enver Solomon, chief executive of the Refugee Council.
Human rights barrister Susie Alegre said immigration lawyers seeking to challenge asylum decisions could also hit roadblocks if they are "unpicking decisions based on automated outputs".
"Lawyers looking at asylum decisions with a view to challenging them will need to know what role AI played in any decision making," Alegre said.
Tip of the iceberg
As the numbers fleeing war, poverty, climate disaster and other tumult reach record levels worldwide, states are increasingly turning to digital fixes to manage migration.
U.S. President Donald Trump is ramping up the use of surveillance and AI tools - from facial recognition to robotic patrol dogs - as part of his crackdown on illegal immigration.
Since 2017, Germany has used a dialect recognition tool to help assess an asylum seeker's country of origin.
Access Now's Rodelli said governments were testing digital tools on migrants and asylum seekers without due accountability, warning of AI's potential mission creep into other areas of public life such as welfare and debt recovery.
"These types of applications are just the tip of the iceberg," she said.
(Reporting by Lin Taylor, Editing by Lyndsay Griffiths)
($1 = 0.7501 pounds)
Context is powered by the Thomson Reuters Foundation Newsroom.