How Argentina's AI ruling can help stem child sexual exploitation
A woman uses her laptop at her home in Lima, September 29, 2011. REUTERS/Mariana Bazo
What’s the context?
Creating child abuse imagery with AI is now a crime in Argentina, setting a regional precedent for fighting online child abuse.
- Argentina's ruling criminalises AI-generated child abuse imagery
- AI-generated content harms children's sexual integrity, judges rule
- Child sexual abuse videos generated by AI surging globally
BUENOS AIRES - Creating child abuse imagery with artificial intelligence is now a crime in Argentina, following a landmark high court ruling in the South American nation amid a surge of child sexual abuse content available online.
A high court in Argentina's Buenos Aires province ruled this month that the use of AI-generated child abuse photos and videos is a criminal offence even if no real, identifiable victims are involved and regardless of whether the images were completely or partially fabricated by AI.
"It's an unprecedented ruling in Latin America," said lawyer Hernan Navarro, founder of Grooming Argentina, a non-profit organization tackling child grooming.
"While these are crimes where there is effectively no apparent victim, what is interpreted here is that society as a whole is the victim," said Navarro, an expert in paedophile crimes, told Context.
The ruling stemmed from the case of a man accused of publishing and distributing AI-generated images and videos of children aged between 3 and 13 performing sexual acts.
While judges acknowledged the imagery could not be traced to actual children, they said failing to criminalise AI-generated content would "lead to the normalization" of paedophilia and harm the "sexual integrity" of children.
AI technology 'can be a weapon'
The court ruling directly addresses the legal vacuum surrounding AI-generated illicit content, and campaigners said they hope the legal precedent will help fight online child exploitation and the misuse of AI tools.
"In the hands of paedophiles, AI technology can be a weapon as it becomes a readily available tool to fabricate blackmail material," said Navarro.
The technology can be used to doctor images of children that originally had no sexual connotation, digitally undressing them or altering them to create sexualised content.
This can be used to threaten or blackmail the victim with its publication, Navarro said.
Legal experts warned, however, that the ruling could be appealed, especially if a conviction follows, and that an appeal could produce a different interpretation.
Specific legislation is therefore needed to give a new criminal code clearer and more robust tools for addressing these issues, said Lucas Moyano, a cybercrime prosecutor in the province of Buenos Aires.
"The court's ruling is correct, but the question is when this will be legislated," Moyano said.
"With a law that clearly defines it, the decision would be definitively settled, and there would be no room for further interpretations."
Images generated by AI are difficult to distinguish from real ones, making it harder to identify and prosecute criminals and creating a climate that endangers children, he added.
"Even if the image does not depict a real child, AI-generated child sexual abuse material still contributes to the objectification and sexualisation of children as a whole, and promotes sexual desire toward them," he said.
'Crossed the threshold'
The cross-border nature of cybercrime in many cases adds yet another layer of complexity to tracking and punishing perpetrators, the prosecutor noted.
Globally, there has been a sharp rise in AI-generated child sexual abuse videos online, fuelled by rapidly advancing technology, readily available AI models and the growing sophistication of video-creation tools.
In July, the Internet Watch Foundation (IWF), a UK-based internet safety watchdog, said AI videos of abuse had "crossed the threshold" of being nearly indistinguishable from real imagery.
In the first six months of 2025, the IWF verified 1,286 AI-made videos with child sexual abuse material, compared with just two in the same period last year.
Most of the videos featured 'Category A' abuse, the classification for the most severe type of material.
Earlier this year, Europol and local authorities arrested dozens of people in nearly 20 countries in a large-scale global sting known as "Operation Cumberland," one of the first major cases involving AI-generated child sexual abuse material.
The operation followed the arrest of a Danish man for allegedly producing AI-generated child abuse material and charging users for an online subscription service.
Crediting new laws
In Britain, legal experts have credited new laws with strengthening the fight against online child abuse and exploitation.
In February, Britain made it illegal to use AI tools that create child sexual abuse images, becoming the first country in the world to introduce such AI sexual abuse offences.
As a result, possessing, taking, making, showing or distributing explicit images of children is a crime in England and Wales.
In the past year, the United States also stepped up prosecution efforts against AI-generated child abuse content.
The U.S. Justice Department in 2024 brought at least two criminal cases involving the use of generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.
Despite such efforts, many legal loopholes persist worldwide that allow for a flood of illicit material.
The surge of AI has significantly changed the nature, scope and reach of crimes by paedophiles, Navarro said.
"AI is an extremely powerful tool they now have as an ally," he said.
(Reporting by David Feliba; Editing by Anastasia Moloney and Ellen Wulfhorst.)
Context is powered by the Thomson Reuters Foundation Newsroom.