The UK needs new legislation for biometric technologies
Biometric technologies impact our daily lives in powerful ways and are proliferating without an adequate legal framework in place.
Madeleine Chang is a senior policy advisor at the Ada Lovelace Institute.
My sister and I look very similar. But we don’t exactly look alike. We were therefore surprised to discover one day that I could unlock her phone with my face, and that she could unlock mine with hers.
For us, this facial recognition mismatch isn’t too serious. In other situations, however, accuracy is imperative. If police use facial recognition technology to identify someone in a crowd, or if stores use it to flag alleged shoplifters, it must match the right person. But research building on the foundational work of Joy Buolamwini and Timnit Gebru shows that facial recognition is more likely to misidentify people of colour - partly because the algorithms have learned to identify people from sets of photos that don’t include enough people of colour. In this way, the technology itself is discriminatory.
Even if facial recognition and other biometric technologies - which use data derived from the human body such as voice, iris scans, and gait - were to become more accurate over time, problems would persist.
Discrimination can arise not from the technology itself, but from the social context surrounding its use. Police may use facial recognition more frequently on marginalised communities; shopkeepers may add a disproportionate number of people of colour to their shoplifting watchlists. Even if the technology matched faces from all racial groups with equal accuracy, a person of colour would still be more likely to be misidentified, simply because they would be scanned and listed more often. Making tools more accurate will not, on its own, make them harmless or acceptable.
Equally worrying is a new set of biometric technologies that fuse old-fashioned phrenology with new technical capabilities, leading to decisions based on stereotypes. Some technologies analyse facial geometry and expressions, voice and gait to detect characteristics like race, gender and sexuality, or assess whether people are fit for a job, look like a criminal, or are paying attention in an online classroom.
In a new report, the Ada Lovelace Institute makes the case for new legislation in the UK to govern biometric technologies. How the technology should be used cannot be left to the companies that produce it or to those who wish to deploy it, because the issues at hand involve our most fundamental freedoms.
If people know that facial recognition can be used at protests, whether by police or private actors, they may stay away. Other biometric technologies create needless privacy harms. For example, facial recognition was recently introduced - and quickly rolled back - in UK schools as a way for students to pay for lunch, a task accomplished just as easily through less intrusive means like a card or PIN. For these technologies, the question is less “do they work?” and more “should we use them?”
To shed light on what people think about the use of biometric technologies, the Ada Lovelace Institute conducted two pieces of research: a national survey on facial recognition technology and an in-depth public deliberation. Council members - 50 UK adults who spent over 60 hours learning about and debating biometric technologies - regarded biometric data as inherently sensitive and the legitimacy of its use as highly contextual. Even in cases where there was a perceived public benefit, council members underscored the need for proportionality and safeguards.
A new, landmark legal review led by advocate Matthew Ryder finds that existing safeguards are not fit for purpose. Oversight structures, where they exist, are patchy and ineffectual. Legal frameworks do not adequately cover all emerging use cases, especially for biometric technologies that try to classify people’s characteristics or emotions.
So the case for new legislation to cover all biometric technologies that identify or categorise people is clear. A new regulatory function is needed to assess the human rights impacts of certain biometric technologies before they are used, and ensure that all such technologies meet basic standards relating to accuracy and bias. Until this comprehensive framework is in place, there should be a moratorium on the more problematic uses of biometric technologies, including the use of live facial recognition technology.
Biometric technologies impact our daily lives in powerful ways, and are proliferating without an adequate legal framework in place. We have an opportunity to act now to prevent future harms from taking place, and ensure that these technologies work for people and society.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.