Signal president: Empower users, workers to tackle AI threats

Meredith Whittaker, Signal President, poses for a photo. Signal Foundation President Meredith Whittaker/Handout via Thomson Reuters Foundation

What’s the context?

The former Google AI researcher sounded the alarm about AI harms years before the current boom

  • President of Signal Foundation calls for 'meaningful' AI regulation
  • Vulnerable groups already experiencing harms of AI
  • AI to take center stage at major digital rights conference

SAN JOSE - Privacy laws and labor organizing offer the best chance to curb the growing power of big tech and tackle artificial intelligence's main threats, said a leading AI researcher and executive.

Current efforts to regulate AI risk being overly influenced by the tech industry itself, said Meredith Whittaker, president of the Signal Foundation, ahead of RightsCon, a major digital rights conference in Costa Rica this week.

"If we have a chance at regulation that is meaningful, it's going to come from building power and making demands by the people who are most at risk of harm, " she told Context. "To me, these are the front lines."

More than 350 top AI executives, including OpenAI CEO Sam Altman, last week joined experts and professors in warning of the "risk of extinction from AI", which they urged policymakers to treat on a par with the risks posed by pandemics and nuclear war.

But for Whittaker, these doomsday predictions overshadow the existing harms that certain AI systems are already perpetrating.

"Many, many researchers, have been carefully documenting these risks, and have been piling up the receipts," she said, pointing to work by AI researchers such as Timnit Gebru and Joy Buolamwini, who first documented racial bias in AI-powered facial recogntion systems over 5 years ago.

A recent report on AI harms from the Electronic Privacy Information Center (EPIC) lists the labor abuse of AI annotators in Kenya who help build predictive models, the environmental cost of the computing power needed to build AI systems, and the proliferation of AI-generated propaganda, among other concerns.

Curbing power

When Whittaker left her job as an AI researcher at Google in 2019, she wrote an internal note warning against the trajectory of AI technology.

"The use of AI for social control and oppression is already emerging," said Whittaker, who had clashed with Google over the company's AI contract with the U.S. military, as well as over the company's handling of sexual harassment claims.

"We have a short window in which to act, to build in real guardrails for these systems, before AI is built into our infrastructure and it's too late."

Google did not respond to a request for comment.

Whittaker sees the current AI boom as part of the "surveillance derivative" business, which has monetized the vast collection of user-generated information on the internet to create powerful predictive models for a small set of companies.

Popular generative AI tools like ChatGPT are trained on vast troves of internet data - including text ranging from Wikipedia entries and patent databases to World of Warcraft player forums, according to a Washington Post investigation.

Social media companies and other tech firms also build AI and predictive systems by analyzing their own users' behavior.

Whittaker hopes that the encrypted messaging app Signal and other projects that do not collect or harvest their users' data can help curb the concentration of power among a small number of AI developers.

For Whittaker, the rise of powerful AI tools points to the growing concentration of power in a small group of technology companies that are able to make the sizable investments in data collection and computing power that such systems require.

"We have a handful of companies that have ... arguably more power than many nation states," said Whittaker, who will be speaking about privacy-centric apps and encryption at RightsCon, which is hosted by digital rights group Access Now.

"We are sort of ceding more and more decision making power, more and more power over our futures — who will benefit and who will lose — to a small group of companies."

Pushing back

Whittaker is hopeful about the prospect of greater regulatory oversight of AI - but wary of regulators being overly influenced by the industry itself.

In the U.S., a group of federal agencies announced in April that they would police the emerging AI space for instances of bias in automated systems, as well as for deceptive claims about the capabilities of AI systems.

The EU in May agreed tougher draft legislation, known as the AI Act, that would categorize certain kinds of AI as "high-risk" and require companies to share data and risk assessments with regulators.

"I think everyone is scrambling," said Whittaker, who served as a senior advisor on AI to the U.S. Federal Trade Commission before joining Signal in 2022.

She sees promise in privacy-centric regulation that seeks to limit the amount of data companies can collect, thereby depriving AI models of the raw material needed to build ever more powerful systems.

Whittaker also pointed to the work of labor organizers, such as the recent calls from the Writers Guild of America (WGA) and Screen Actors Guild (SAG) to limit the use of generative AI technologies like ChatGPT in their workplaces.

(Reporting by Avi Asher-Schapiro; Editing by Zoe Tabary)


Context is powered by the Thomson Reuters Foundation Newsroom.
