Collective power and better auditing can help fix biased AI

A security camera is seen damaged by bullet marks at the Banco do Brasil bank in the Uberaba city, in Minas Gerais state, Brazil August 4, 2022. REUTERS/Leonardo Benassato

Must we always be the crash test dummies of artificial intelligence?

Tarcizio Silva is a senior fellow in tech policy at Mozilla, based in São Paulo. Solana Larsen is the editor of Mozilla’s Internet Health Report and the IRL Podcast, based in Berlin.

Every day, artificial intelligence (AI) systems — from generative AI to algorithms in our governments — grow by leaps and bounds. Tech companies and consulting firms are constantly researching new capabilities, gathering larger training datasets, and proposing novel applications.

But while the technology and its adoption accelerate, progress to mitigate its harms — especially bias — isn’t keeping pace. As a result, the systems embedded in our lives become ever more capable of discriminating against and excluding certain populations.

Bias in AI isn’t a new phenomenon. Almost five years ago, AI researchers Joy Buolamwini and Timnit Gebru released their now-famous paper “Gender Shades,” which revealed gender and racial bias in mainstream facial recognition systems. And yet such problems persist at scale across many AI systems. Flawed or racist data is recycled again and again, and so AI systems discriminate in healthcare, in government services, and even in generative artwork.

Reducing bias in AI requires change on many fronts, from better public policy to more inclusive design processes and incentives. As researchers focused on trustworthy AI around the world, we’ve encountered a handful of approaches that can have an outsized impact in doing so.


A scandal over biased AI is unfolding right now in Brazil. For years, governments across the country have been deploying biometric surveillance technologies like facial recognition. As elsewhere, these systems are notorious for misidentifying faces with darker skin — and Brazilian cities are notorious for providing no transparency or public input into how they’re used. There has been some successful pushback against this trend, like the ongoing campaign Tire Meu Rosto da Sua Mira (“Get my face out of your sight”) and a protocolaço (“bill-a-thon”) to ban facial recognition.

Most recently, the issue has come to São Paulo. City authorities launched "Smart Sampa," ostensibly to transform São Paulo into a "smart city." Despite the veneer of innovation, the project is racist at its core: its facial recognition vendors were asked to include racial and “vagrancy” identification features, a mechanism historically used in Brazil to criminalize poor and Black people.

In response, activists in São Paulo have added a new tool to their toolbox. Elaine Mineiro, a councilwoman and part of the Quilombo Periférico collective, launched an unprecedented “Parliamentary Front Against Racism in Technologies.” The initiative convenes experts from various sectors — academia, developers, activists from favela communities, sex workers, families of incarcerated people — to debate not only biometric surveillance, but also the use of AI in areas such as health, housing, and education.

At the first public hearing, Mineiro commented that she considers the “commitment of city resources to a policy that has proven to be flawed and that places vulnerable populations under greater suspicion [to be] a problem.” São Paulo’s mayor, Ricardo Nunes, was invited to participate in the hearing — but did not even send a representative.

The Parliamentary Front is still in its early stages, but its ability to build bridges across communities is encouraging. Uniting multiple movements against biased AI taps into collective power.

Meanwhile, in Europe, governments are using automated systems to administer public services — and sometimes even to predict who may commit fraud. In our latest season of Mozilla’s IRL podcast, we spoke with Lighthouse Reports, a European nonprofit investigating algorithmic bias in the welfare systems of multiple countries across the continent.

Their investigation in the Netherlands revealed that the data used to train a welfare fraud detection system in Rotterdam was littered with subjective — and outright sexist — parameters. For example, women who did not wear makeup to appointments with public officials could receive a higher fraud-risk score. Other parameters were biased too, such as which languages people speak. The system was decommissioned, but for a time vulnerable people struggling to pay rent were unfairly targeted.

Lighthouse uncovered these biases because they were able to gain access to the system’s underlying data. This is an approach with proven results: time and again, independent researchers who audit the datasets behind AI systems find embedded biases. Imagine what we could do if these audits were better resourced and codified. Researchers like Dr. Abeba Birhane and Deb Raji are focusing on just this, and companies like Credo.ai are building tools to help organizations develop practical approaches to identifying risks of bias and harm.
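For readers curious what such an audit can involve, here is a minimal, purely illustrative sketch in Python. The column names, groups, scores, and threshold are invented for illustration, not drawn from the Rotterdam system or Lighthouse's work; the point is simply the kind of disparity check auditors run once they have access to a system's scored data.

```python
# Illustrative sketch only: a simple disparity check on a hypothetical
# risk-scoring dataset. All names and values here are invented; real
# audits work from the actual system's data and documentation.
import pandas as pd

# Hypothetical audit data: one row per person, with the model's risk
# score and a demographic attribute the audit checks for disparity.
records = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "risk_score": [0.82, 0.35, 0.91, 0.30, 0.22, 0.41],
})

HIGH_RISK_THRESHOLD = 0.7  # assumed cutoff for being flagged

# Share of each group flagged as "high risk" by the model.
flag_rates = (
    records.assign(flagged=records["risk_score"] >= HIGH_RISK_THRESHOLD)
           .groupby("group")["flagged"]
           .mean()
)
print(flag_rates)

# A large gap in flagging rates between groups is the kind of signal
# auditors investigate further as possible evidence of embedded bias.
disparity = flag_rates.max() - flag_rates.min()
print(f"Flag-rate disparity between groups: {disparity:.2f}")
```

Real audits go far beyond a single rate comparison, but even a check this simple shows why access to a system's underlying data matters: without it, there is nothing to measure.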

Addressing the bias that plagues AI systems is a monumental task, but we must confront the problem. It is people, not machines, who decide the values and purpose at the core of any system. That gives ample hope that, working together across people’s movements and research disciplines, we can do much better. There is plenty of know-how for building AI that is more trustworthy and less harmful. Now we need people and governments to act on it.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.

