Biased bots? US lawmakers take on 'Wild West' of AI recruitment

A man waits for an interview in Los Angeles, California April 14, 2012. REUTERS/Patrick T. Fallon
What’s the context?

As companies embrace automation in hiring, are AI algorithms perpetuating or amplifying historical biases in the U.S. job market?

  • AI increasingly ubiquitous in hiring, but fears of bias persist
  • Lawmakers seek to reduce risk of discrimination
  • Advocates of AI say tools can counteract human bias

LOS ANGELES - After applying in vain for nearly 100 jobs through the human resources platform Workday, Derek Mobley noticed a suspicious pattern.

"I would get all these rejection emails at 2 or 3 in the morning," he told Context. "I knew it had to be automated."

Mobley, a 49-year-old Black man with a degree in finance from Morehouse College in Georgia, had previously worked as a commercial loan officer, among other jobs in finance.

He applied for mid-level jobs across a range of sectors, including energy and insurance, but when he used the Workday platform, he said he did not get a single interview or call-back and was often forced to settle for gig work or warehouse shifts to make ends meet.

Mobley believes he was being discriminated against by Workday's artificial intelligence (AI) algorithms.

In February, he filed what his lawyers describe as a first-of-its-kind class action lawsuit against Workday Inc, alleging that the pattern of rejection he and others experienced pointed to the use of an algorithm that discriminates against people who are Black, disabled or over the age of 40.

In a statement to Context, Workday said Mobley's lawsuit was "completely devoid of factual allegations and assertions", and said the company is committed to "responsible AI".

The question of what "responsible AI" might look like goes to the heart of an increasingly robust pushback against the unrestricted use of automation in the U.S. recruitment market.

Mobley's lawsuit, which is working through California's court system, is just one skirmish in a bigger battle involving automation in the workplace. 

Across the United States, state and federal authorities are grappling with how to regulate the use of AI in labor hiring and guard against algorithmic bias.

Around 85% of large U.S. employers, including up to 99% of Fortune 500 companies, now use some form of automated tool or AI to screen or rank candidates for hire, according to recent surveys. 

These include resume screeners that automatically scan applicants' submissions, assessment tools that grade an applicant's suitability for a job based on an online test, and facial recognition or emotion recognition tools that can analyze a video interview.

In May, the Equal Employment Opportunity Commission (EEOC), the federal agency that enforces civil rights law in workplaces, released new guidelines to help employers prevent discrimination when using automated hiring processes.

In August, the EEOC settled its first ever automation-based case, fining iTutorGroup $365,000 for using software to automatically reject applicants over the age of 40. The company, which provides English-language tutoring to students in China, denied wrongdoing in the settlement.

City and state authorities are also weighing in.

A novel law to regulate AI in hiring went into force in New York City in July, and lawmakers from California to Vermont to New Jersey are pushing through new legislation.

"Right now, it's the Wild Wild West out there," said Matt Scherer, a lawyer with the Center for Democracy and Technology (CDT), a non-profit advocating for civil rights in a digital age. "But that will change."

'Algorithmic blackballing'

Technology-enabled bias is a risk because AI uses algorithms, data and computational models to mimic human intelligence. It relies on "training data", and if that data, which is often historical, contains bias, the bias can be replicated in an AI program.

In 2018, for instance, Amazon abandoned an AI resume screening product that had started to automatically downgrade applicants with the word "women's" on their CVs, as in "women's chess club captain".

This was because Amazon's computer models were trained to vet applicants by observing patterns over a decade. Most applications came from men, a reflection of male dominance across the industry.

This is the kind of discrimination that worries Brad Hoylman-Sigal, a state senator in New York. In August, he introduced a bill that would require audits of hiring tools and also ban certain kinds of data collection, including emotion recognition software.

"Many of these tools have been proven to unduly invade workers' privacy and discriminate against women, people with disabilities, and people of color," he said.

Ifeoma Ajunwa, director of the AI and the Law program at Emory University, says job applicants often don't have a choice about whether to submit to automated hiring processes.

She has warned about the possibility of "algorithmic blackballing", where hiring systems repeatedly reject an applicant based on hidden criteria.

She also called on the Federal Trade Commission (FTC) to step in and ban certain kinds of automated hiring tools.

In April, the FTC and three other federal agencies, including the EEOC, said in a statement that they were looking at potential discrimination arising from data sets that train AI systems and opaque "black box" models that make anti-bias diligence difficult.

Some advocates of AI acknowledge the risk of bias but say this can be controlled.

Amandeep Singh Gill, the UN secretary-general's envoy on technology, called for more investment in AI and data literacy to mitigate risks such as discrimination in automated hiring.

"We need to lower the barriers to entry to these conversations and build up the literacy around data, AI and how we teach it in schools and government," he said at the Thomson Reuters Foundation's annual Trust Conference in London.

Frida Polli, co-founder and former CEO of pymetrics, which creates AI-powered assessment tools, said programmers could tweak the variables that are considered by an automated system, something that cannot be done to the human brain.

CDT's Scherer is skeptical.

"The industry says that you can use these tools to increase diversity but I think there's a real tension there," he said. "In reality, you are just automating the process of human bias in hiring."

Taming the tech

That's what worries lawmakers like Californian state assembly member Rebecca Bauer-Kahan, who introduced legislation this year that would allow job applicants to opt out of automated hiring platforms and require those platforms to submit to fairness audits.

Her bill, AB331, would also have made it easier for private citizens to sue hiring platforms if they suspect bias.

That last point proved to be a major obstacle. Businesses and tech groups signed a letter penned by California's Chamber of Commerce raising concerns about the private right of action, among other issues.

The bill failed to pass out of the assembly, but Bauer-Kahan plans to reintroduce a version in the next session in December.

"The federal government is not doing much these days," she said. "The states are going to have to move first."

New York City is leading the way. In July, it became the first jurisdiction in the country to introduce a law to specifically regulate algorithms in hiring.

Under the legislation, applicants can petition to be notified when they are subjected to automated tools, and hiring software that relies on AI to choose preferred candidates or eliminate others must be audited for racist or sexist bias.

But many digital privacy advocates say the law does not go far enough: it only applies to AI hiring tools that "substantially assist or replace" humans, and it also does not address biases that might affect disabled applicants.

Cody Venzke, senior policy counsel with the American Civil Liberties Union (ACLU), said he was particularly concerned about "watered-down" regulatory efforts.

"Some proposals would require harmed applicants and employees to prove that an algorithm directly caused discrimination, when many algorithmic hiring tools' real harm is their strong influence on human decision-making," he said.

"Other proposals would give employers a second bite at the apple to come up with non-discriminatory hiring practices, rather than giving applicants harmed by discriminatory technology their day in court."

As the regulatory environment seeks to adapt to the ubiquity of AI in the recruitment market as well as its complexity, Mobley hopes his lawsuit will at least lift the lid on the extent of algorithmic bias.

"I know I'm not the only one," he said. "There are a lot of people out there, applying for jobs they were probably qualified for ... but (who) are being unfairly discriminated against."

This article was updated on October 20th 2023 at 12:54 GMT to include comments from the UN made at the Thomson Reuters Foundation's annual Trust Conference on Friday.

(Reporting by Avi Asher-Schapiro; Editing by Clar Ni Chonghaile.)

Context is powered by the Thomson Reuters Foundation Newsroom.