Will Italy’s decision on ChatGPT stand the test of time?
ChatGPT has passed its first legal test in Europe – but the tensions between AI and data protection regulation look like they are here to stay
Edward Machin is an associate at Ropes & Gray.
The speed at which artificial intelligence has pervaded the public consciousness is playing havoc with our perception of time. Less than six months ago, generative AI applications were known mainly to researchers and technologists. Now, these tools are being used by businesses, creatives and hobbyists in increasingly sophisticated and interesting ways – from producing realistic images and songs, to designing equity trading and business generation strategies.
This bending of time also affects the laws that govern the development and use of AI technologies in Europe, the United States and Asia. Although legislators and policymakers across the world are taking different approaches to regulation, they share a common issue: grappling with how to apply existing laws to technologies that are fast outpacing their legal frameworks. The recent regulatory investigation into US AI company OpenAI’s wildly popular ChatGPT tool – the first of its kind in the European Union – gives clues as to what the tensions between innovation and regulation may look like in the months and years ahead.
On 30 March, the Italian data protection regulator (known as the Garante) ordered OpenAI to suspend access to ChatGPT in Italy, pending the outcome of an investigation into ChatGPT’s compliance with the EU General Data Protection Regulation. The Garante cited four areas of potential non-compliance: (1) the privacy notice provided to individuals did not meet the requirements of Articles 13 and 14 of the GDPR; (2) ChatGPT’s sign-up process did not include an age verification mechanism; (3) there was no lawful basis under the GDPR for using individuals’ personal data to train ChatGPT’s algorithms; and (4) some of the personal data processed by the service was inaccurate – including straightforward errors as well as more problematic AI “hallucinations” (that is, outputs which sound plausible but are incorrect or inaccurate).
This is where we come back to the speed of time. Typically, data protection authorities in the EU and UK have given organisations three, six or even 12 months to remedy their GDPR compliance practices following a regulatory investigation. For its part, on Tuesday 11 April the Garante told OpenAI that it would allow ChatGPT to operate in Italy again if it could address the concerns described above. There was just one snag: OpenAI had 19 days to revise its operations to comply with the GDPR.
Impressively, OpenAI appears to have done that to the Garante’s satisfaction – and two days ahead of deadline, no less. On Friday 28 April, the service was made available in Italy again. Without wanting to spoil the party, however, it bears thinking about whether two of the steps taken by OpenAI to meet the Garante’s requirements – the GDPR lawful basis for using personal data to train ChatGPT’s algorithms, and OpenAI’s ability to rectify or delete individuals’ personal data upon request – are as settled as either party would like.
Given the choice between obtaining users’ consent or relying on the company’s legitimate interests to process personal data for algorithmic training purposes, it’s understandable that OpenAI chose the latter option. And it seems that the Garante agrees with OpenAI that its commercial interests outweigh the rights and interests of its users. However, with the growth in use, sophistication and, in some cases, potential harms of public-facing AI applications, companies and consumers shouldn’t be surprised to see regulators look more closely at consent as the preferred – or, perhaps, required – GDPR lawful basis for processing personal data in this context.
An equally interesting question is how OpenAI allows individuals to correct or delete their data. The company says that its services now support these requests – and sometimes this will be easily done, such as where users ask for their login details or chat history to be erased. But what happens if data has already been used to train the algorithms? Honouring these requests is likely to be much harder, and in some cases it may be impossible. If that is right, and users’ personal data are baked into the service indefinitely, regulators may take the view that the legitimate interests balancing test falls in individuals’ favour and consent is required instead.
For now, OpenAI’s experience in Italy shows that generative AI tools can be used in a way that complies with the GDPR – at least in the Garante’s view. That is a small sample size, and other European data protection authorities are looking at ChatGPT, but it would be unusual if they reached a wildly different conclusion. As a major early development in the regulation of AI globally, the Garante’s decision will surely also be of interest to regulators, businesses and individuals beyond European shores. And so, while time may be an illusion, the development of AI – and the legal and regulatory challenges that the Garante and OpenAI have been the first to test – certainly is not.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.