Context | Journalism from the Thomson Reuters Foundation

Know better. Do better.

tech and society

Dataveillance

AI, privacy and surveillance in a watched world

Photo of Samuel Woodhams

Hi, it’s Sam. Authorities are increasingly using automated decision-making tools in pursuit of lowering costs and increasing efficiency. This week, we’ll look at their use in social welfare programmes, and how they threaten our privacy, equality and well-being.

Algorithms to determine welfare payments and detect fraud are becoming standard practice around the world. From Manchester to Melbourne, people’s lives are being shaped by secretive tools that determine who is eligible for what, and how much debt is owed.

Although the technology has been around for some time, the outbreak of COVID-19 renewed enthusiasm for the digital welfare state and, for thousands of cash-strapped public bodies, the promise of increased efficiency and lower costs has proven irresistible.

But the tools come with significant hidden costs. They violate our privacy, exacerbate inequality and often get things wrong, with sometimes terrifying consequences.

In short, these tools don’t improve our welfare, they threaten it. And unless we alter their course, we will tumble “zombie-like into a digital welfare dystopia,” as the former United Nations special rapporteur Philip Alston memorably said.

Staff members from Qian Ji Data Co take photos of the villagers for a facial data collection project, which would serve for developing artificial intelligence (AI) and machine learning technology, in Jia county, Henan province, China March 20, 2019. REUTERS/Cate Cadell

Privacy-infringing algorithms

The products used in the digital welfare state all operate slightly differently, and the data analysed also varies. They can be used to assess someone’s eligibility for support, determine how much someone receives, and predict whether someone is likely to claim too much. Typically, this means the tools will access information about someone’s employment status, number of children, gender, age, and where they live.
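None of these systems publish their code, so the snippet below is a purely hypothetical Python sketch rather than any real agency’s tool. It only illustrates the kind of logic at stake: a handful of personal attributes are converted into a “risk” score, and the weights and thresholds (all invented here) are exactly the part the public never gets to see.

```python
from dataclasses import dataclass

@dataclass
class Claimant:
    employed: bool
    children: int
    age: int
    postcode: str  # a stand-in for "where someone lives"

# Invented weights and flagged areas -- real systems keep theirs secret.
HIGH_RISK_POSTCODES = {"3561"}

def fraud_risk_score(c: Claimant) -> float:
    """Turn personal attributes into a 0-1 'fraud risk' score."""
    score = 0.0
    if not c.employed:
        score += 0.3                   # unemployment treated as suspicious
    score += min(c.children, 4) * 0.1  # larger families score higher
    if c.age < 25:
        score += 0.1
    if c.postcode in HIGH_RISK_POSTCODES:
        score += 0.3                   # the neighbourhood becomes a proxy for risk
    return min(score, 1.0)

# Flagged for investigation on the basis of who someone is, not what they did.
print(fraud_risk_score(Claimant(employed=False, children=3, age=23, postcode="3561")))
```

Nothing in that function looks at behaviour; it scores circumstances, which is why profiling of this kind so easily shades into discrimination.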

To amalgamate these disparate data points, manufacturers are tasked with combining several existing databases into one huge dataset containing millions of rows of sensitive data. By doing so, they encourage mass surveillance and discriminatory profiling, while benefiting companies that threaten everybody’s privacy.
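Again purely for illustration (the table names, columns and identifiers below are invented), here is roughly what that amalgamation looks like in practice: a couple of joins on a shared citizen identifier are enough to turn separate, narrow records into one revealing profile.

```python
import pandas as pd

# Hypothetical extracts from three separate public-sector databases,
# each keyed on a national citizen identifier (all values invented).
benefits = pd.DataFrame({"citizen_id": [101, 102], "monthly_payment": [950, 1200]})
tax      = pd.DataFrame({"citizen_id": [101, 102], "declared_income": [0, 8400]})
housing  = pd.DataFrame({"citizen_id": [101, 102], "postcode": ["3561", "3581"]})

# Two joins turn three narrow records into a single, far more revealing
# profile of each person -- the "huge dataset" described above, in miniature.
profile = benefits.merge(tax, on="citizen_id").merge(housing, on="citizen_id")
print(profile)
```

Scale that up to millions of rows, with every extra column joined in, and the privacy stakes become obvious.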

The companies involved range from credit-rating giants to specialised data-mining firms. In other words, they are data-hungry corporations responsible for the datafication of public life associated with surveillance capitalism.

Automated discrimination

Given the sensitive nature of the data analysed, the use of automated decision-making tools in this context obviously risks discrimination. Thankfully, there is greater awareness that algorithms have the potential to reflect and entrench the biases contained within existing data sets.

As a 2019 United Nations report on the UK’s digital welfare state concluded: “Algorithms and other forms of AI are highly likely to reproduce and exacerbate biases reflected in existing data and policies. In-built forms of discrimination can fatally undermine the right to social protection for key groups and individuals.”

Protesters demonstrate against IT company Atos's involvement in tests for incapacity benefits outside the Department for Work and Pensions in London August 31, 2012. REUTERS/Neil Hall

But it’s not just the use of skewed data that may exacerbate inequity. The way the technology is deployed is often discriminatory, as well.

In the Netherlands, an algorithm eerily similar to the one that falsely accused thousands of people of benefit fraud is still being used in Utrecht. But it’s not being used across the entire city; it’s only targeting people who live in the low-income neighbourhood of Overvecht.

This establishes a clear double standard and means that those on society’s periphery bear the brunt of surveillance and the potential ramifications of the technology’s faults.

As authorities pursue these tools, the idea that a citizen can be knowable, quantifiable and predictable risks spreading further into local and national governance. In the process, the structural issues that influence criminality are likely to be ignored in favour of an individualistic approach cloaked in technology’s veneer of objectivity.

But they do work, right?

There’s now overwhelming evidence that these tools don’t even work very well.

In Australia, nearly half a million people receiving welfare support were wrongly accused of lying about their income and given fines. In the Netherlands, tens of thousands of people were wrongly accused of owing money to the state by an algorithm that breached EU human rights legislation. And in Britain, the Department for Work and Pensions (DWP) has been found to use a secretive algorithm that “targets disabled people in a disproportionate, unfair and discriminatory way.”

So, what can be done when the algorithm gets it wrong? Unfortunately, people who want to contest a decision often face years of bureaucracy, with authorities rarely admitting their mistakes.

Screenshot of Melanie Klieve speaking via video link at the Royal Commission hearing into Robodebt, an automated debt recovery scheme that wrongly calculated that welfare recipients owed money, in Brisbane, Australia. December 5, 2022. Thomson Reuters Foundation/Seb Starcevic

In part, that’s because authorities themselves appear unaware of exactly how their tools work, and are unwilling, or unable, to explain them to citizens.

In 2021, I filed a Freedom of Information request with the DWP to find out more about their self-proclaimed “cutting-edge artificial intelligence” tool designed to catch people responsible for large-scale benefits fraud. But, as is often the case, they refused to divulge any new information.

The complete lack of transparency and accountability was summarised perfectly by Lina Dencik, co-director of the Data Justice Lab at Cardiff University: “Rather than the state being accountable to its citizens, the datafied welfare state is premised on the reverse, making citizens’ lives increasingly transparent to those who are able to collect and analyse data, at the same time knowing increasingly little about how or for what purpose the data is collected.”

Remedies

The use of algorithms in the digital welfare state is forcing society’s most vulnerable to become test subjects for opaque tools that few appear to fully understand. Efforts to increase the efficacy, transparency and accountability of these tools are important, but I think we need to look beyond them.

Instead, authorities should ask themselves whether processes that can have such a profound impact on citizens’ well-being should ever be automated. And whether the promise of lower costs and increased efficiency will ever really be worth the risk.

Any views expressed in this newsletter are those of the author and not of Context or the Thomson Reuters Foundation.

We're always happy to hear your suggestions about what to cover in this newsletter - drop us a line: newsletter@context.news

Recommended Reading

Lina Dencik, The Datafied Welfare State: A Perspective from the UK, New Perspectives in Critical Data Studies, May 21, 2021.

This book chapter is an essential read on the digital welfare state, the future of algorithmic governance, and the political and economic interests underpinning the use of automated decision-making tools.

AlgorithmWatch, Automating Society: taking stock of automated decision-making in the EU, Jan. 2019.

While slightly dated, this report offers a broader perspective of the range of automated decision-making systems in the EU and their impacts.

Electronic Privacy Information Center, Screened & Scored in D.C., Nov. 2022.

Authorities don’t just rely on algorithms for managing welfare. This investigation shows the extent to which automated decision-making tools are being used across Washington D.C. The team spent 14 months filing FOI requests and found these tools in use in everything from housing to the criminal justice system.

Melissa Heikkilä, Dutch scandal serves as a warning for Europe over risks of using algorithms, Politico, March 29, 2022.

This article covers the recent Dutch benefit fraud scandal, detailing the faulty algorithm’s devastating consequences and discussing the potential for the EU’s AI Act to curb similar catastrophes in the future.

This week's top picks

Hidden loopholes and privacy risks loom over online age check laws

Louisiana joins Europe and the UK in passing age verification laws to boost minors' online safety, but experts warn of the dangers

Asia turns to tech to help watch over a growing elderly population

Ageing nations are using cameras, robots and AI tech to help care for seniors - but privacy and security fears are growing

Nigeria's social media fact-checkers fight fake news as vote nears

Big Tech companies are enlisting independent fact-checkers to tackle online disinformation ahead of the Feb. 25 election

EU lawmakers must do right by gig workers

A debate and vote this week will determine the fate of the Platform Work Directive, a consequential law for Europe's gig workers

Social media companies are to blame for Andrew Tate

Deplatforming is effective in limiting the reach and toxicity of spreaders of hate and misinformation, and can reduce harm

 
Read all of our coverage here

Discover more

Thank you for reading!

If you like this newsletter, please forward it to a friend or share it on social media.

We value your feedback - let us know what you think.