Political cheap fakes are a blind spot for platforms in the Global South

People walk past a banner with a picture of the former Prime Minister Imran Khan outside the party office of Pakistan Tehreek-e-Insaf (PTI), a day after the general election, in Lahore, Pakistan, February 9, 2024. REUTERS/Navesh Chitrakar

Cheap fakes pose a pervasive risk in the Global South that platforms are ignoring amid the AI hype

Sabhanaz Rashid Diya is the founding board director of Tech Global Institute

Last month’s audio deepfake of President Joe Biden during the New Hampshire primary raised concerns in Congress about the use of artificial intelligence to sway voters. In Pakistan, former Prime Minister Imran Khan reportedly used voice cloning technology from prison to address a campaign rally. With frequent headlines about political deepfakes, it seems generative AI will heavily impact this year’s elections worldwide.

However, in global majority countries, this may not necessarily be the case. While there are notable examples of politically motivated deepfakes from India, Bangladesh, Pakistan, Indonesia and Zambia, to name a few, global majority voters are for the most part dealing with a longstanding flood of ‘cheapfakes’: manipulated media created with basic tools, ranging from edited photos to stitched videos.

Leading American social media platforms are responding to growing debates over generative AI by launching new content moderation guidelines. For example, Meta's manipulated media policy applies to a piece of content only when it is a deepfake. YouTube’s misinformation policy provides for the removal of doctored content if it poses a risk of egregious harm. TikTok prohibits manipulated media generated using artificial intelligence, with an exception for creators who label AI-generated realistic scenes of fake people.

These measures inadequately tackle the proliferation of cheapfakes, which threaten civic participation in the global majority, home to more than 3 billion Internet users. Cheapfakes propagate misleading narratives, discredit candidates and worsen mis- and disinformation in fragile democracies with low digital literacy. They are easy to produce and require minimal compute resources, exposing global majority users to a significantly higher volume of cheapfakes on social media platforms. This is especially dangerous in regions lacking press freedoms, where American platforms effectively make up the entirety of the Internet experience. For example, during Bangladesh’s recent election, nearly half of mis- and disinformation involved cheapfakes, compared to less than 2 percent for deepfakes. By narrowly focusing on deepfakes, platforms overlook the longstanding problem of cheapfakes, exposing communities to harm.

Cheapfakes can span from satire to harmful political misrepresentation, and platforms argue they can be addressed under other policies. However, the lack of a comprehensive and inclusive policy framework results in inconsistent, arbitrary and subjective enforcement. Existing approaches rely heavily on technical standards and verbal components, neglecting facial cues and graphic overlays, techniques commonly used in cheapfakes. The Oversight Board recently issued its decision on a case concerning an edited video clip of President Biden, underscoring the need for a unified manipulated media policy that focuses on harm rather than the technical aspects of digitally altered media. A debunked video portraying a Malaysian minister in a sexual tryst, for instance, highlights how cheapfakes disproportionately target female and gender-diverse candidates and sway voter perceptions, particularly in global majority regions with limited resources.

The policies also lack clarity on what counts as AI technology, often overlooking more basic manipulation tools. Cheapfakes, made with simple editing software, include facial reenactment, lip-syncing, audio insertion, video stitching and recontextualization. The last technique omits certain elements, such as a statement, gesture or action, or alters context to create false narratives, as when former Prime Minister Imran Khan released a debunked video of clashes between his supporters and law enforcement after his arrest.

Meta recently announced plans to add visible markers and invisible watermarks to AI-generated content on Facebook, Instagram and Threads. Other platforms rely on users to voluntarily disclose AI-generated content. Neither is implementing similar disclaimers for other manipulation technologies. Moreover, the impact of visible disclaimers on user awareness and misinformation remains uncertain. Even prominently labeled digitally altered media can stoke violence or riots in the global majority, a result of confirmation bias coupled with disclaimers that are predominantly in English.

Additionally, existing detection capabilities are insufficient to automatically flag content created using generative AI, and are unlikely to work equitably across the global majority, given historic lags in contextual and language coverage. Such detection is even less effective for content created with basic editing technologies.

A robust manipulated media policy framework should adopt a technology-agnostic approach, focusing enforcement on the harm posed by the content. While the jury is still out on whether labeling manipulated content is effective in countering misinformation, platforms should be cognizant that visible disclaimers and invisible watermarks, if introduced bluntly for a wide range of manipulated media, could pose privacy risks, especially if the content was created by a political dissident. Over 70 percent of the world’s population lives under authoritarian regimes, primarily in low- and middle-income countries; any disclosure technique therefore needs to protect identifiable information about the creator. Finally, platforms should be intentional about factoring in the risks uniquely faced by global majority communities in their policy deliberations, to ensure those policies are comprehensive and inclusive and that blind spots do not end up exacerbating harms.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.



