While live facial recognition (LFR) dominates headlines, other technologies with strikingly similar capabilities often fly under the radar.
Last year, the UK’s Metropolitan Police spent £3 million on retrospective facial recognition (RFR) technology. Rather than scanning people in real time, RFR runs facial recognition against images already collected by CCTV. When I reported on the technology last year, experts warned that it can be used in much the same way as LFR and carries many of the same risks. Despite this, there are almost no limits on its use in the UK or abroad.
For some police forces, simply detecting faces in a crowd doesn’t go far enough. There are now several products that not only identify people but also analyse people’s emotions. The technology, which is already being used in the heavily monitored region of Xinjiang, China, can then supposedly help police predict crime.
Like a lot of new surveillance technology, however, emotion recognition is unlikely to live up to the hype: it has repeatedly been shown to be inaccurate. Worse yet, it has been accused of being based on “racist pseudoscience” that could lead to higher rates of discriminatory policing.
The UK’s Information Commissioner’s Office (ICO) has warned against the use of such technology in the UK. But as attempts to regulate live facial recognition show, meaningful restrictions on potentially dangerous surveillance technology can take years to establish.
With new ICO guidance on biometric technologies expected next spring, it’s crucial that the regulator looks ahead and offers proactive, comprehensive and meaningful guidance on facial recognition and the other forms of biometric surveillance that may soon become staples of contemporary policing.
Recommended Reading
Khari Johnson, How Wrongful Arrests Based on AI Derailed 3 Men’s Lives, Wired, March 7, 2022.
Discussions of facial recognition often overlook the human impact. This article shows how wrongful arrests caused by facial recognition software damage people’s lives and highlights the importance of regulating its use.
Evani Radiya-Dixit, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Minderoo Centre for Technology and Democracy, University of Cambridge.
The full report from the University of Cambridge offers detailed information on the ethical and legal problems surrounding police use of the technology. It also includes an audit framework that could be adopted to analyse future trials of the technology in the UK.
Lauren Rhue, Emotion-Reading Tech Fails the Racial Bias Test, January 3, 2019.
This article from 2019 shows that racial bias in facial recognition software is far from new. The study found that emotion-reading technology rates black faces as angrier than white faces, something that could have awful consequences in a policing context.
Nicol Turner Lee and Caitlin Chin, Police Surveillance and Facial Recognition: Why Data Privacy Is Imperative for Communities of Color, Brookings Institution, April 12, 2022.
This report for the Brookings Institution makes the case for stronger privacy protections in the United States to help limit the risks of facial recognition technology, particularly as they relate to communities of colour.