What could this mean for our privacy and data?
With millions of people worldwide working from home, the coronavirus pandemic proved a boon for tracking tools designed to boost worker productivity.
With legions of newly home-based workers worried about keeping their jobs amid the pandemic and the rising cost of living, digital rights campaigners say employers have greater leverage to impose monitoring tools – such as those that record meetings or produce automatic notes – on reluctant employees.
Digital experts have also warned that ChatGPT could pose cybersecurity risks by generating highly sophisticated phishing emails and other social engineering attacks. BlackBerry research published last week found that half of IT professionals predict we are less than a year away from a successful cyberattack being attributed to ChatGPT, and 71% believe foreign states are likely already using the technology for malicious purposes against other nations. Watch this space.
Any views expressed in this newsletter are those of the author and not of Context or the Thomson Reuters Foundation.
We're always happy to hear your suggestions about what to cover in this newsletter - drop us a line: newsletter@context.news
Recommended reading
MIT Technology Review, How to spot AI-generated text, December 19, 2022.
This piece by Melissa Heikkilä delves into the accuracy risks associated with consuming AI-generated information online, and lists tools to detect text written by AI.
Euractiv, What ChatGPT and the likes tell us about AI, January 23, 2023.
In this podcast episode, Joanna Bryson, Professor of Ethics and Technology at The Hertie School of Governance, discusses to what extent AI language models are actually new, and Daniel Leufer, Senior Policy Analyst at Access Now, examines the societal risks and possible regulatory approaches.