Can AI help Afghans at risk from the Taliban?
An Afghan refugee holds her passport in front of the German Embassy in a bid to acquire refugee visas from the European country, in Tehran, Iran September 1, 2021. Majid Asgaripour/WANA
AI and algorithms are increasingly being used in immigration processes. Protections need to be in place for vulnerable groups
Imogen Canavan is a legal consultant.
Two years on from the Taliban's takeover of Afghanistan, many of those left behind are persecuted and at risk. States and international organisations are using algorithms and even artificial intelligence in immigration processes, but can this help Afghans at risk?
Established in October 2022, the German Federal Admission Programme for Afghanistan is one of the few programmes specifically for Afghans at risk. To be eligible, Afghans must be in Afghanistan and fall within a particular risk category, among other criteria. Their data is then shared by a referring entity with the German government, after which an IT tool and human caseworkers are used to select 1,000 Afghans each month.
Details of the German IT tool are scarce, but at a minimum it uses an algorithm; in other words, it applies rules or other logic to the data it receives. The complexity of the algorithm and, importantly, the balance between its automated and human components are not publicly known. This presents a problem: without transparency, the technology simply cannot be assessed for its purpose, accuracy and efficacy.
The use of algorithms and AI in this context is increasingly common, with strikingly broad applications. Oxford University's Algorithmic Fairness for Asylum Seekers and Refugees project has identified numerous examples of such technology in use in the UK and Europe, including automated decision-making, document and biometric verification, transcription, speech and dialect recognition, and mobile phone data analysis. This raises important questions, especially in cases concerning highly vulnerable migrants, where there is an inherent power imbalance with the deciding state.
In 2015, the UK Home Office began using an algorithm to classify visit visa applicants. Investigations revealed that applicants were categorised by nationality, with some nationalities treated as 'suspect' by the algorithm. The algorithm was challenged in the courts as discriminatory and biased, with problematic results fed back into the system. The Home Office stopped using it in August 2020, before the courts could rule.
Obvious discrimination and bias built into an algorithm's design may be easy to identify, but discrimination and bias can also be far more subtle and indirect. This underlines the need for transparency and human oversight. The technology can be more complex still where AI uses machine learning to effectively teach itself; unlike a fixed algorithm, such a system is much harder, and in some instances may be impossible, to assess.
In practice, the balance between technology and humans varies significantly across different parts of immigration systems. In some, decision-making has become almost fully automated, raising questions of power and responsibility. Since 2020, Norway has been using so-called robots named "Ada" and "Kalle" to process and issue legal residencies and citizenships automatically. Human caseworkers intervene only when the robots produce negative results.
The Norwegian Directorate of Immigration explains that the system relies on quality data. That requirement is unrealistic for countries like Afghanistan, where quality data is almost impossible to obtain, especially for persecuted people. The Directorate also suggests avoiding discretionary assessments that the technology cannot make. But the purpose of technology is to augment human capability, not to limit it: this amounts to simplifying decision-making to accommodate technology, when technology should instead be adapted to enhance human decision-making.
Like all technology, algorithms and AI can be used for good or ill, intentionally or unintentionally. At the centre of every immigration and asylum case is a human being, and principles of humanity should apply in any decision-making process, regardless of the technology used.
In the context of Afghanistan, immigration decisions can mean life or death. Given these high stakes, greater transparency and human oversight of the technology used are essential. Technology is better deployed to predict and help prevent conflict and natural disasters, the root causes of forced migration, than tested, new and unproven, on some of the most vulnerable people in the world.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.