Deepfake porn tools bypass safeguards to hide in Apple app store
A person holds a phone showing a screenshot of the AI art app Girl. July 16, 2024. Thomson Reuters Foundation/Adam Smith
What’s the context?
Several apps that can be used to target women and girls with deepfake porn openly advertise on major social platforms
- Context finds deepfake porn tools in app store
- Apps evade regulations and moderation
- Patchwork of laws leaves little recourse to justice
LONDON - They are sold as harmless photo editing tools, but several applications in Apple's app store have been hiding a secret - they can be used to make deepfake porn.
While the apps appear innocent in Apple's store, their makers openly promote them on Facebook and Instagram, both owned by Meta Platforms, as being designed to remove clothing from real photos of women.
Though the naked images are generated by AI, they are strikingly realistic, raising alarm among women's rights campaigners.
Context found four such apps in the app store that promoted themselves on Facebook and Instagram by suggesting users upload photos of women and "delete" their clothing.
A cropped screenshot of an advert for AI Girl Artwork Generator that appeared on Meta platforms in May 2024. Thomson Reuters Foundation
Meta's advertising standards policy says "ads must not contain adult nudity and sexual activity. This includes nudity, depictions of people in explicit or sexually suggestive positions, or activities that are sexually suggestive."
Apple's guidelines for its app store say "content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy" is forbidden.
Despite that, the apps identified by Context last month appear to have slipped through the cracks, raising concerns about the effectiveness of regulation at a time when around 90% of all deepfakes are non-consensual porn featuring women.
"The app developers seem to have developed a strategy to work around the terms of service of the app stores," said Daniel Leufer, emerging technologies policy lead at digital rights group Access Now.
"This means that Apple would have to do extra work to verify what functionalities are available within the app, and not just rely on a superficial reading of the description in the app store."
In-app payments
The apps identified by Context - PicX, Tapart, MatureAI, and Artifusion - all promoted themselves on Meta platforms using the same advertising format: two side-by-side images of a woman, one clothed and one blurred, but nude.
While the text accompanying the ads does not reference deepfake nudity, the claim appears in text overlaid on the ad's image, making it more difficult for social platforms' systems to detect.
Contacted by Context, Meta took down the ads, but did not answer questions about how they had bypassed its moderation systems.
"We've removed all the ads brought to our attention for violating our policies and disabled the accounts associated with them," a spokesperson said.
Apple also removed the apps from its app store when contacted by Context, but declined to comment and did not answer questions on how the apps got around moderation.
"Very little is known about how publishers with access to a monetization program are being verified: what standards platforms use to assess compliance with platforms' policies, whether automated means are being deployed for assessment, or whether previous violations are being considered," said Eliška Pírková, Senior Policy Analyst at Access Now.
Three of the four tools - all except PicX - seek in-app payments. When subscriptions are purchased in-app via Apple's payment systems, the company receives a cut of between 15% and 30%.
Artifusion tells users they must join its "pro" tier, priced at $39.99 for a lifetime subscription, to create explicit images.
Tapart requires users to either purchase credits to create nudity, or subscribe to a $150 per year package to access a "reshape" tool that removes clothes from any uploaded image.
None of the companies behind the apps responded to requests for comment.
Prosecution difficult
Deepfake sexual abuse is facing increasing scrutiny as campaigners push for legislation to control it.
Only a handful of countries have laws that can be used to combat deepfake porn - although none use that term. These include Australia, South Africa and Britain.
In South Korea, producing deepfake porn for profit carries a seven-year prison term. Colombia and Canada are among other countries considering legislation.
The United States has no federal law, but about 10 states including Virginia, California, Illinois and Hawaii have passed legislation that can be used to target deepfake producers.
"As with most crimes that happen on the internet, it can be extremely difficult to find and prove the identity of the perpetrator, and even when you can, the crimes often happen across jurisdictions, which makes prosecution extremely complicated," said Sloan Thompson, director of training and education at EndTAB, a U.S.-based nonprofit tackling tech-enabled abuse.
"At the end of the day, any time an image of a person is sexualised without their consent, it is an inexcusable violation," Thompson said.
According to identity theft expert group Home Security Heroes, one in every three deepfake tools allows users to create pornography. The images have increasingly been weaponised against female politicians, used as so-called revenge porn and cited in school bullying cases.
"As a society, we're complicit. They're seeing these apps advertised on Instagram. It's not a surprise that for some of them, this seems normal, slash acceptable and unproblematic," said Clare McGlynn, a professor of law at Durham University, specialising in pornography, sexual violence and online abuse.
McGlynn said major tech firms were effectively profiting from the abuse of women and girls.
"Anyone can sit down right now and take your image to put it into porn, and there's absolutely nothing you can do about it," she said.
(Reporting by Adam Smith; Additional reporting by Beatrice Tridimas; Editing by Barry Malone and Jon Hemming)
Context is powered by the Thomson Reuters Foundation Newsroom.
Our Standards: Thomson Reuters Trust Principles
Tags
- Content moderation
- Facebook
- Instagram
- Tech regulation
- Meta
- Social media