
Know better. Do better.

tech and society

Dataveillance

AI, privacy and surveillance in a watched world


Hi, it’s Zoe, Context’s tech editor. As new AI chatbot ChatGPT makes waves and Google announces its answer, let’s look at what this technology means for privacy and surveillance.

ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history.

It can generate articles, essays, jokes and even poetry in response to prompts. And in the world of work? Jobs that AI tools like ChatGPT could disrupt include digital marketing, online content creation, answering customer service queries or as some users have found, even helping to debug code.

What’s the latest?

Google owner Alphabet Inc announced on Monday it would launch its own chatbot service, called Bard, as well as more AI features for its search engine and for developers.

And Microsoft last week rolled out a premium Teams messaging offering powered by ChatGPT to simplify meetings. It will generate automatic meeting notes, recommend tasks and help create meeting templates for Teams users.

Microsoft Teams app is seen on the smartphone placed on the keyboard in this illustration taken, July 26, 2021. REUTERS/Dado Ruvic


What could this mean for our privacy and data?

With millions of people worldwide working from home, the coronavirus pandemic proved a boon for tracking tools designed to boost worker productivity.

With legions of newly home-based workers worried about keeping their jobs amid the pandemic’s fallout and the rising cost of living, digital rights campaigners say employers have greater leverage to impose monitoring – such as recording meetings or producing automatic notes – on reluctant employees.

Digital experts have also warned that ChatGPT could pose cybersecurity risks by creating extremely sophisticated phishing emails and other social engineering attacks. BlackBerry research last week found that half of IT professionals predict that we are less than a year away from a successful cyberattack being credited to ChatGPT, and 71% believe that foreign states are likely to already be using the technology for malicious purposes against other nations. Watch this space.

Any views expressed in this newsletter are those of the author and not of Context or the Thomson Reuters Foundation.

We're always happy to hear your suggestions about what to cover in this newsletter - drop us a line: newsletter@context.news

Recommended reading

MIT Technology Review, How to spot AI-generated text, December 19, 2022.

This piece by Melissa Heikkilä delves into the accuracy risks associated with consuming AI-generated information online, and lists tools to detect text written by AI.

Euractiv, What ChatGPT and the likes tell us about AI, January 23, 2023.

This podcast episode discusses to what extent AI language models are actually new with Joanna Bryson, Professor of Ethics and Technology at The Hertie School of Governance, and the societal risks and possible regulatory approaches with Daniel Leufer, Senior Policy Analyst at Access Now.

This week's top picks

What is ChatGPT? And will it steal our jobs?

ChatGPT, an artificial intelligence text generator, is being hailed as the future of work, but not everyone is convinced

India push for digital sovereignty risks more online surveillance

India is pushing locally made technologies such as Koo and BharOS to replace big tech, but digital experts warn the move could result in greater surveillance

Ethiopia digital ID prompts fears of ethnic profiling

The rollout of digital ID Fayda in Ethiopia could entrench discrimination against Tigrayans and other ethnic minorities, rights groups fear

Can the UK Online Safety Bill take on misogyny?

The Online Safety Bill aims to protect users from harmful content, but women's groups say it does not go far enough

Wikipedia Middle East editors ban shows risks for creators

Wikipedia's ban of 16 users in the Middle East highlights attempts by Saudi Arabia to control online spaces, rights groups say

 
Read all of our coverage here

Editor's pick:

Is my job as a video producer safe from AI?

AI is being used to create award-winning art, write movie scripts, diagnose patients and even pass an MBA exam, but how good is it at making videos?

Discover more

Thank you for reading!

If you like this newsletter, please forward it to a friend or share it on social media.

We value your feedback - let us know what you think.