COVID-19: Social protection responses were opaque, lacked privacy

Opinion

A city employee wearing a face mask carries boxes of food aid given by the mayor's office, amid the outbreak of the coronavirus disease (COVID-19) in Bogota, Colombia April 21, 2020. REUTERS/Luisa Gonzalez

Many social protection programmes lacked basic human rights considerations, from inception to implementation, leaving the most vulnerable excluded

Nuno Guerreiro de Sousa is a technologist at Privacy International @privacyint

The impact of the COVID-19 pandemic around the world has been seismic, not only in obvious health terms but also in the devastation it has caused to economies and the livelihoods of hundreds of millions of people.

Many governments around the world fast-tracked emergency relief programmes, undoubtedly cushioning a great many people from the worst hardships. But research by Privacy International shows that many of these programmes lacked basic human rights considerations, from their inception to their implementation, leaving the most vulnerable excluded. Because “in tech we trust” is fast becoming something of an absolutist philosophy in governance, the people affected have had little or no chance to challenge empirically bad decisions.

While digitisation and process automation in the social protection context pre-dated the pandemic, their adoption was accelerated in the face of increasing poverty. The difficulty of identifying beneficiaries, combined with containment measures such as lockdowns and the need for individual isolation, provided governments with the perfect opportunity to experiment with ever more data-intensive methods to reach people in need. This increased digitisation of welfare benefits is flagged in the most recent report from the UN Special Rapporteur on Extreme Poverty and Human Rights as a source of exclusion, a concern our research confirms.

Many of these social protection programmes were shrouded in opacity, with their eligibility criteria being withheld from the very population they sought to serve.


As documented by our global partners, Fundación Karisma, Colombia’s ‘Ingreso Solidario’ is a stark example of this. In this case, the government set up an unconditional cash transfer system targeting 3 million citizens in just under two weeks. This may sound impressive, but the system drew on undisclosed administrative records and data held by both private and public actors to make automated decisions about who was or wasn’t eligible, raising serious questions about transparency and accountability.

So, while the scoring criteria were made public, the Colombian people were nonetheless left in the dark as to what data was used to assign scores and what the scores actually meant. It was then revealed that nearly 17,000 records contained inconsistencies and were automatically flagged as non-eligible. Because the system is a “black box”, it was - and is - impossible to know how many people were unfairly excluded in the crosschecks between different databases.

Transparency is necessary, but not sufficient. In Mozambique, we learned that the government decided to identify priority geographical areas using Multidimensional Poverty Index mapping, which combined social and economic indicators with census data and high-resolution satellite imagery used to map urban poverty. In this case, the eligibility criteria were made public, but not in a way that people could really understand. According to independent observers, only 61% of beneficiaries knew why they had been enrolled.

Other instances we researched show a blatant disregard of beneficiaries’ privacy. You might assume that information about whether you qualified for benefits would be private. But under Paraguay’s emergency social protection programme, for example, the list of beneficiaries was published as an Excel spreadsheet on the government website for anyone to access, exposing the names, ID numbers and districts of all beneficiaries.

These aren’t just teething problems in otherwise laudable state interventions. They are inevitable consequences of data-intensive welfare systems, which leave huge swathes of people - often those already most marginalised and vulnerable - outside the system altogether.

No welfare benefits system will be perfect in terms of reach. However, much can be done to mitigate the serious failures we’ve seen: making eligibility criteria, and the data used to make welfare decisions, transparent and clear, and putting redress mechanisms in place.

No matter how good the tech gets, it will always fail some people. If we believe that progressive welfare systems are a cornerstone of our humanity, then our welfare systems need to be more human.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.


Tags

  • Tech regulation
  • Data rights


