Context | Powered by the Thomson Reuters Foundation

Know better. Do better.

tech and society

Dataveillance

AI, privacy and surveillance in a watched world

Avi Asher-Schapiro

The U.S. Congress is taking up artificial intelligence this week as OpenAI CEO Sam Altman makes his first appearance testifying in front of lawmakers, amid a growing debate about regulating the technology.

The U.S. has lagged behind the EU and China, where more concrete proposals to regulate AI are already in the works.

Last week, President Joe Biden met with the CEOs of top tech companies pursuing AI, and urged them to be transparent about how their products work and to evaluate them for harm.

The administration has also directed federal agencies to look into bias in AI systems, and released a non-binding blueprint for an "AI Bill of Rights" last year.

But there's no consensus in Congress about what new laws might be necessary.

As Reuters reported on Monday, some lawmakers want to pursue an approach similar to the EU's, in which certain "high-risk" applications of AI - in healthcare or finance, for example - are tightly regulated.

The kind of AI that has captured the public's attention in recent months - including ChatGPT and image creation tools - would face a much lighter touch.

In a high-profile interview that aired on NBC's "Meet the Press" over the weekend, former Google CEO Eric Schmidt urged lawmakers to back off regulations and allow the industry to set its own rules.

Facebook whistleblower Daniel Motaung in a meeting with his lawyers shortly before his case with Meta was lodged in Nairobi, Kenya, March 2022. Daniel Motaung/Handout via Thomson Reuters Foundation


Rest of the world: what’s new?

Africa

Kim Harrisberg, South Africa correspondent

Kenyan courts have ordered Facebook's parent company, Meta, to pay 180 retrenched content moderators who had been working for an outsourced Kenyan company called Sama, Tech Cabal reported.

The content moderators are suing both Meta and Sama for unfair labour conditions, including a lack of mental health support despite the violent content they had to view.

"Right now they they don't have anything," Facebook whistleblower Daniel Motaung told us in an interview last week. Motaung is also suing both Sama and Meta for failing to protect moderators' rights.

Asia

Vidhi Doshi, India correspondent

A man in China was arrested for allegedly using ChatGPT to generate a fake story about a train crash that gained more than 15,000 clicks on social media, in the country's first AI-related arrest.

Beijing is on a mission to rein in big tech and regulate emerging technologies through sweeping rules in areas ranging from antitrust to data protection.

China's new deepfake rules bar service providers and users from using such technology to produce, publish or fabricate false information. ChatGPT is banned in China, but can be accessed using VPNs or foreign phone numbers.

A response by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. REUTERS/Florence Lo/Illustration


Latin America

Diana Baptista, Mexico correspondent

Sixteen data protection authorities across Latin America have announced coordinated action against tech firm OpenAI to safeguard the personal data of ChatGPT users.

United under the Ibero-American Network of Data Protection, the government bodies have warned that ChatGPT lacks security measures to "guarantee the protection and confidentiality of personal data" and that its use could result in the non-consensual transfer of such data to third parties.

The Colombian government was the first member of the network to announce an investigation, which will determine whether ChatGPT violates the country's data protection law.

Europe

Adam Smith, UK correspondent

The UK government is looking to expand online surveillance by logging the web histories of millions of people. Police will be able to gather internet connection records (ICRs), which are lists of the websites - but not the specific pages - that users visit.

It is unclear whether ICRs will be rolled out nationally; the Home Office told Wired, in response to a freedom of information request, that releasing any additional information could jeopardise law enforcement activities.

In 2020, the Investigatory Powers Commissioner's Office (IPCO) recorded that a telecoms company had, due to a technical error, provided police with more information than an ICR demand required. No further detail was given about the data involved or the cause of the error.

This week's top picks

TikTok bans: What could they mean for you?

As U.S. lawmakers move to force TikTok's Chinese parent company to sell the app or face a ban, here is a look at other global curbs

Indian girl gamers fight keyboard warriors and online abuse

Women advance in India's fast-growing gaming world but risk rape threats, abuse and low prize money for all their progress

What does the AI Act mean for digital rights in the EU?

As tech experts raise concerns over the risks of artificial intelligence, the EU is inching closer to making the world's first extensive AI rules

Nickel mining for EVs fuels risk of abuses in Southeast Asia

As electric vehicle sales soar, report warns of rights and environmental risks in nickel producers Indonesia and the Philippines

 
Read all of our coverage here

Discover more

Thank you for reading!

If you like this newsletter, please forward it to a friend or share it on social media.

We value your feedback - let us know what you think.