Israel-Hamas war: ‘Dire’ disinformation spreads globally

Hamas's armed wing Izz el-Deen al-Qassam Brigades train with paragliders as they prepare for an armed air assault, in this screengrab obtained from a social media video released on October 7, 2023. Izz el-Deen al-Qassam Brigades via Telegram/via REUTERS


What’s the context?

Social media platforms struggle to contain graphic images and disinformation on the Israel-Hamas conflict spreading from India to the U.S.

  • Surge in disinformation, hate speech seen in many countries
  • Platforms don't invest enough in non-English language content moderation
  • Users will find ways to get around content moderation

BANGKOK/LONDON/BEIRUT - Hours after the Israel-Hamas conflict erupted on Oct. 7, Bharat Nayak, a fact-checker in the east Indian state of Jharkhand, noticed a surge of disinformation and hate speech directed at Muslims on his dashboard of WhatsApp messages.

The viral messages from hundreds of public WhatsApp groups in India contained graphic images and videos, including many from Syria and Afghanistan falsely labelled as being from Israel, with captions in Hindi that called Muslims evil.

"They are using the crisis to spread misinformation against Muslims, saying they will attack Hindus in a similar way, and to falsely accuse opposition parties and others of supporting Hamas, and calling for their elimination," Nayak said.

"The content is very graphic, the messaging is extreme, and it gets forwarded many times, as there is no content moderation on WhatsApp" he told Context.

The conflict, which has killed more than 1,400 people in Israel and more than 8,000 in the Gaza Strip, has triggered a surge in disinformation and hate speech against Muslims and Jews across social media platforms from India to China to the United States.


Meta and X, formerly known as Twitter, said they have removed tens of thousands of posts, but the volume of disinformation and hate speech underlines the failure of social media platforms to boost content moderation, particularly in languages other than English, say digital rights experts.

"We've tirelessly drawn their attention to these issues over the years, but social media platforms continue to fall short when it comes to combating hate speech, incitement and disinformation," said Mona Shtaya, a nonresident fellow at The Tahrir Institute for Middle East Policy, a non-profit.

"The recent layoffs in trust and safety teams across platforms underscore this deficiency," she said.

"Additionally, their resource allocation - based on market size, rather than assessed risks - exacerbates the challenges faced by marginalised communities including Palestinians and others."

In a blog post, Meta - which owns Facebook, Instagram and WhatsApp - said it had "quickly established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers," and that it is working with third-party fact-checkers in the region "to debunk false claims".

X did not respond to a request for comment.

Israeli soldiers stand next to rockets lying on the ground at an unknown location in this social media image released on October 11, 2023. @IDFSpokesperson via X/via REUTERS


Real-world harms

Failures of content moderation are not limited to the decades-long Israel-Palestine conflict.

U.N. human rights investigators said in 2018 that the use of Facebook had played a key role in spreading hate speech that fuelled violence against the ethnic Rohingya community in Myanmar in 2017.

Rohingya refugees sued Meta in 2021 for $150 billion, alleging that the company's failure to police content and its platform's design contributed to real-world violence.

Meta has acknowledged being "too slow" to act in Myanmar.

Last year, a lawsuit filed against Meta in Kenya accused the company of allowing violent and hateful posts from Ethiopia on Facebook and of amplifying, through its recommendation systems, violent posts that inflamed the Ethiopian civil war.

The company has faced similar accusations related to violence in Sri Lanka, India, Indonesia and Cambodia.

The surge in disinformation during the current Israel-Hamas conflict underscores that "platforms do not have the right systems in place," said Sabhanaz Rashid Diya, former head of policy at Meta for Bangladesh.

"The historical under-investment in specific parts of the world and specific languages is now being tested in this crisis," said Diya, founding board director of Tech Global Institute, a thinktank.

"Some of the challenges we're seeing around the information ecosystem are consequences of not building capacity; these are consequences of automated systems, staffing issues; not having sufficient fact-checkers in these markets; not having policies that are contextualised for local regions," she added.

Whack-a-mole

The Arab Centre for Social Media Advancement, or 7amleh, has documented more than half a million instances of hate speech and incitement to violence in Hebrew against Palestinians and their supporters.

The absolute volume of anti-Semitic comments on YouTube videos has also risen more than 50-fold, the Institute for Strategic Dialogue in London said in a report this week.

State-affiliated accounts from Iran, Russia and China are also spreading disinformation and hate speech on Facebook and X, it said, adding that this could contribute to "polarisation and deepening mistrust towards democratic institutions and the media."

Reports of anti-Semitic and Islamophobic incidents have surged worldwide, including assaults, vandalism and the fatal stabbing of a 6-year-old Palestinian boy in the United States.

Such incidents are a result of hate speech online, said Marc Owen Jones, who researches disinformation in the Middle East.

"Much of the disinformation is violent, graphic and highly emotive - designed to provoke polarisation and turn people against each other," said Jones, an associate professor at Hamad bin Khalifa University in Qatar.

It is "driving a sense of righteousness and tribalism that contributes to violence, as we've seen as far away as Dagestan and Illinois. The upshot is dire," said Jones.

Yet despite heated conversations around the need for better content moderation, trust and safety is "resource-intensive, meaning that tackling the issue is a challenge for any platform," said Yu-Lan Scholliers, head of product at Checkstep, a UK-based content moderation services firm.

With easy access to AI, "it's now much easier to generate real-looking but fake content - requiring more advanced detection mechanisms," said Scholliers, who previously worked in Meta's product data science team.

But even if platforms invested heavily in their trust and safety teams, the main challenge "is and will be adversarial behaviour - users always find more and more creative ways to avoid detection," she said.

"It is a whack-a-mole that can never be fully solved."

(With additional reporting by Avi Asher-Schapiro in Los Angeles. Editing by Zoe Tabary.)








