Local actors reclaiming voice in content moderation
A man browses with his phone in his shelter at a camp for the internally displaced due to the fighting between the Ethiopian National Defense Force (ENDF) and the Tigray People's Liberation Front (TPLF) forces in Dessie town, Amhara Region, Ethiopia, October 8, 2021. REUTERS/Tiksa Negeri
For social media platforms to make correct content moderation decisions, they must understand the unique local contexts in which they operate
Roberta Taveri is project coordinator at ARTICLE 19.
18 June marked the International Day for Countering Hate Speech - a UN initiative established in response to the international community's growing concern about the scale and harm of hate speech in societies around the world.
What happens on social media forms a crucial part of this conversation. In the past few years, more and more evidence has emerged of the serious consequences that harassment, disinformation and hate speech online can have in the ‘offline’ world.
We've seen this happen in Myanmar in 2018, where Facebook allowed incitement to genocide against the Rohingya to spread on its platform. Meta is also facing a lawsuit in Kenya, filed by two Ethiopian petitioners and the Kenyan rights group Katiba Institute, who claim that the company's failure to act on hate speech and incitement to violence on its platform during the civil war in Ethiopia exacerbated tensions. Reporting by The Bureau of Investigative Journalism found that Meta was aware it was helping to directly fuel the growing tensions in Ethiopia.
Despite the evidence and public outcry, content moderation practices, and the algorithms that recommend content to users, continue to fail communities not deemed a priority by executives at Big Tech companies.
Content moderation requires a correct understanding of the meaning behind a message and the ability to assess its potential impact. This demands knowledge not only of the local language, but also of the historical, societal and political context.
And yet, we know from our research in Bosnia and Herzegovina, Indonesia and Kenya that platforms routinely fail to consider those contexts or to dedicate sufficient resources to understanding the languages and societies they operate in. Too often, they rely on automated systems that cannot recognise nuances of language or tone, such as irony or satire, or the ways certain phrases are used in a specific society.
This can lead to situations where lawful expression is erroneously removed, while content capable of causing real-world harm goes unmoderated, increasing tensions or even the risk of violence.
Too often, local civil society organisations find themselves powerless in the face of the actions of global social media companies. They share a number of common concerns: the complete lack of transparency around specific content moderation decisions, and how little leeway they have to work with platforms on flagging posts, appealing wrongful decisions and engaging in sustained dialogue about the specific local context. Content rules are often unavailable in local languages, appeal processes are opaque and frequently automated, and requests for a response often go unanswered.
Local actors constantly feel they are fighting a losing battle - they understand the potential risks of harmful content, but have few, if any, effective avenues to engage with companies to mitigate those risks.
This is why, over the past year, ARTICLE 19 has been working with UNESCO and local organisations to support the establishment of national coalitions on freedom of expression and content moderation.
The coalitions, which bring together diverse civil society voices, could serve as a platform for ongoing engagement, bridging the gap between global tech giants and local communities. They could contribute to the development of content moderation practices that uphold international standards on freedom of expression while taking the local context into account.
In Bosnia and Herzegovina, Indonesia and Kenya, members of civil society have been coming together to discuss their shared concerns and ideas for speaking to social media companies with a more united voice. By joining forces, participants hope to build a strong foundation for constructive dialogue with platforms.
For social media companies, engagement with the coalitions could bring real benefits. Not a week goes by without a headline about failures in moderating content. Many of those failures could have been avoided if stronger lines of dialogue existed between platforms and the local actors who have the necessary knowledge. Platforms could then hear the concerns of local civil society and act in a timely manner, addressing flaws in their systems, improving the protection of fundamental rights and preventing wrongful decisions from contributing to harm on the ground.
The road is long - but the past year has marked an important starting point. In the three initial pilot countries, civil society told us they now feel more empowered to collectively engage with platforms to ensure that the rights and concerns of their communities are properly respected. It’s time for platforms to show they are ready for dialogue too.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.