AI Bill of Rights: what critics get right and wrong

An artificial intelligence project utilizing a humanoid robot from French company Aldebaran, reprogrammed for the college's campus, makes its debut as an assistant for students attending Palomar College in San Marcos, California, U.S. October 10, 2017. REUTERS/Mike Blake

The blueprint represents a shift from a model of self-policing towards a rights-based approach to protect against algorithmic harms

Janet Haven is the executive director of Data & Society and a member of the National Artificial Intelligence Advisory Committee.

Imagine a future in which a vast network of digital surveillance fed by our electronic devices tracks our behavior, and shares that information with law enforcement, insurance companies, social service agencies, and employers, who then use algorithms to decide computationally – and inaccurately – who gets ahead and who stays behind.

There’s no need to imagine it, because it’s already happening. Today, automated, data-centric technologies – often referred to by the imprecise, catch-all term “artificial intelligence” – touch nearly every aspect of our lives, yet they are largely ungoverned in the United States. In the absence of clear guidelines and regulation, we’ve seen how these technologies can cause serious harm, especially to populations that are already among the most vulnerable.

So the release of a Blueprint for an AI Bill of Rights, published earlier this month by the White House's Office of Science and Technology Policy, was a significant milestone. The blueprint makes the case that policymakers can no longer afford to ignore the realities and harms of algorithmic systems, and it articulates ways to govern these technologies that are grounded in equity, opportunity, and human dignity.

The document has been criticized by some as lacking teeth, largely because it does not have the force of law. But critics have failed to appreciate the enormous shift in official thinking and action the blueprint represents as a statement of values: away from a model centered on industry self-policing or cookie-cutter government regulation, and towards a rights-based approach.


This departs from other approaches to AI governance, which emphasize trust, safety, ethics, responsibility, or other more interpretive frameworks. A rights-based approach is rooted in deeply held American values – equity, opportunity, and self-determination – and in legal traditions.

The blueprint’s choice of language and framing clearly positions it as a framework for understanding AI governance broadly as a civil rights issue, one that deserves new and expanded protections under American law. This draws on years of community-based and academic research from a range of organizations and scholars detailing the instances and impact of algorithmic discrimination involving employment, criminal justice, access to education and healthcare, and more.

The blueprint also makes an argument for protecting communities, as well as individuals, against algorithmic harms. As the authors note, the impacts of data-driven automated systems may be most visible at the community level: in, for example, social networks, neighborhoods, or indigenous groups. Such communities, defined in broad and inclusive terms, have the right to protection and redress against harms to the same extent that individuals do. In identifying communities as those who bear the brunt of algorithmic harms, the blueprint recognizes how algorithms function in practice, and where their impact lands.

While the blueprint is not enforceable in its current form, neither were the many articles of what became the U.S. Bill of Rights when they were first drafted. Instead, what it seeks to provide is a powerful starting point: a “national values statement” that can serve as a guide for both policy and practice. Still, it is not without its flaws.

The blueprint’s lengthy legal disclaimer suggests a line-by-line negotiation with parties who would prefer fewer limitations on how we, as a society, choose to develop automated technologies – and there are notable voids when it comes to law enforcement and defense. The document also lacks specific direction to protect workers – particularly those in low-wage, precarious and algorithmically managed jobs – who are increasingly subject to invasive data collection, surveillance and algorithmic controls.

Do the advantages to a company of an algorithm that sets rents, for example, outweigh its impact on people who need affordable housing? What about a recruiting algorithm that discriminates against people of color? And who gets to decide? Like the U.S. Bill of Rights, the blueprint for an AI Bill of Rights can help spur us to strike a healthy balance in these considerations, while reminding us that unproven industry claims about innovation and competitiveness cannot be allowed to trump fundamental values of equity, justice, and human dignity.

Above all, the blueprint makes plain the need to protect Americans from being involuntarily subjected to decisions made by automated systems that are built on discriminatory practices of mass surveillance and behavioral prediction. Now it’s up to policymakers and legislators to develop specific guidelines and guardrails. With potential harms of algorithmic systems multiplying as these systems become more ubiquitous, we can’t afford to wait.

Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.
