Courts push back on Meta’s legal strategy
Kenyan lawyers Valerie Omari, Mercy Mutemi and Damaris Mutemi address a news conference after filing a lawsuit on behalf of their clients accusing Meta of enabling hateful posts on the Ethiopia conflict, at the Milimani Law Courts in Nairobi, Kenya, December 14, 2022. REUTERS/Monicah Mwangi
What we have concluded about Meta’s legal strategy following our cases against the tech giant.
David Shipton is Senior Legal Counsel at investigative campaigning organisation Global Witness
On 3 November 2021, Meareg Amare Abreha, a professor at Bahir Dar University in Ethiopia, was shot and left to die outside his home by men dressed in special forces uniforms.
A month earlier, Meareg’s son Abrham had reported two posts on Facebook which published his father’s location and falsely claimed that he was assisting the Tigrayan People’s Liberation Front, a rebel group at war with the Ethiopian government. Neither post was removed by Facebook’s moderators before Meareg’s murder.
Abrham is now the lead claimant in a $2 billion legal case brought against Facebook’s parent company - Meta - in Kenya, where its main content moderation operation for Africa was previously based.
The claim alleges that Meta fuelled ethnic violence in Ethiopia by failing to take down hate speech and incitements to violence, and it demands that Meta take steps to stop their spread on its platforms.
Global Witness is formally listed as an interested party in the case, which draws upon our research with legal non-profit Foxglove and independent researcher Dagim Afework Mekonnen into Facebook’s failures in detecting hate speech in Amharic, one of Ethiopia’s main languages.
In response to Abrham’s claim, Meta argued that Kenyan courts have no jurisdiction over these matters, saying its terms of service require such disputes to be resolved in California, where it is headquartered.
Meta made similar arguments last year in response to two separate claims brought by former moderators in Kenya alleging unfair dismissal and poor working conditions.
These recent cases reflect what we at Global Witness believe is a key feature of Meta’s global litigation strategy: moving disputes out of the courts of the regions where the company is alleged to have committed breaches or caused damage, and into those where it is headquartered.
Indeed, we were unable to find evidence of any content moderation-related claim brought in the last three years against Meta outside the United States or Ireland (where Meta’s European headquarters are based) in which the company had not sought to challenge the local court’s jurisdiction.
We saw this strategy play out in relation to a complaint we filed against Meta alongside women’s organisation Bureau Clara Wichmann at the Netherlands Institute for Human Rights.
This complaint asserted that Meta’s algorithm was discriminating based on gender in job advertisements. Again, Meta challenged the jurisdiction of the local authority.
Arguments about jurisdiction are common in disputes involving transnational corporations, and Meta is not alone in seeking to move disputes away from foreign or hostile courts.
Nevertheless, we believe that forcing claimants from around the world to bring their cases in the United States or Ireland raises profound questions around fairness and access to justice. Imagine the challenges, for example, faced by the Rohingya refugees who sued
Facebook in California for its alleged role in promoting the 2017 genocide. Their case, which is pending appeal, was dismissed at first instance as time-barred under the strict two-year statute of limitations on personal injury claims.
Section 230
Protecting the primacy of courts in the United States and Ireland in claims against social media companies also arguably undermines the ability of local authorities to regulate and oversee the impact of technologies which increasingly shape the lives of the people they govern.
Central to this challenge is Section 230 of the Communications Decency Act of 1996, which was passed by Congress principally to regulate pornographic material on the internet.
Sometimes described as “the 26 words that created the internet”, this provision ensures that US-based internet service providers, such as Meta, aren’t treated as publishers of user-generated content.
But while this law was instrumental in the early growth of the internet economy, it was not written with the realities of modern platform design in mind - particularly the algorithmic systems that determine what users see and share.
In this new environment, platforms like Meta no longer merely host content, but actively shape and amplify it, blurring the line between neutral intermediary and editorial actor.
In the US case of M.P. v Meta (2025), a minor, whose father was amongst the nine African Americans murdered in a Charleston church in 2015, claimed that Facebook’s algorithm had promoted content which radicalised the white supremacist murderer.
Earlier this year, the case was dismissed principally on Section 230 grounds, with the US Court of Appeals for the Fourth Circuit holding that algorithmically ranking or recommending harmful content does not strip platforms of their immunity.
We believe this shows the benefit to Meta of its consistent strategy of moving claims relating to harmful content to California, where they have often been quickly dismissed in preliminary hearings.
But there are signs of change. Earlier this year, the Netherlands Institute for Human Rights accepted jurisdiction over our complaint and found that Meta Platforms Ireland Ltd was engaging in prohibited discrimination based on gender.
And in April 2025, the Kenyan High Court found that Abrham’s claim raised key constitutional matters which were within its competence. Meta is appealing this decision.
Earlier this year, Mark Zuckerberg announced that Meta would move towards a mix of more automated and community-based approaches to content moderation, following cuts to the company’s trust and safety staff in 2022.
The impact that this will have on the platform’s human rights record remains unclear.
However, as long as what we see as a strategy of moving legal challenges to specific jurisdictions is propped up by a platform-friendly legal environment, there is little commercial incentive for Zuckerberg to prioritise the protection of human rights.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.