Big Tech accused of failing to tackle U.S. midterm 'lies'
A voter fills out a ballot for New York's primary election at a polling station in Brooklyn, New York City, New York, U.S., August 23, 2022. REUTERS/Brendan McDermid
What’s the context?
As Americans head to the polls, critics say Facebook and other social firms must do more to protect election integrity
- Online platforms step up moderation ahead of Nov. 8 polls
- Campaign groups say more action needed against abuse
- Algorithms favor engagement over expertise, say critics
LOS ANGELES - Social media firms have been accused of failing to crack down on political disinformation and hate as campaigning heats up for next month's midterm elections in the United States.
Facebook and other top social networks said they have stepped up moderation and fact-checking in the run-up to the Nov. 8 vote, but a coalition of more than a dozen non-profits and rights groups said it is too little, too late.
"Platforms have done very little about people posting lies about the election," said Nora Benavidez, a lawyer with Free Press, which is a member of the Change the Terms coalition against online hate and disinformation.
Major platforms only began ramping up election security measures at the last minute, said Benavidez, and have not invested enough in content moderation staff, or in rooting out accounts promoting conspiracies about poll rigging.
Meta - whose Facebook platform was cited as a news source by more than 30% of Americans in a recent poll by the Pew Research Center - said it has hundreds of staffers focused on the U.S. midterms, and invested $5 million in fact-checking for the vote.
In September, Meta said it had disrupted the first known China-based online political interference campaign, which targeted Americans on controversial issues such as abortion and gun rights ahead of the midterms.
It pledged to monitor Facebook for misleading posts about polling sites and to ban new political advertising in the two weeks before elections.
Facebook exempts politicians from its third-party fact-checking program, allowing them to run adverts with false claims - though they are covered by a ban on adverts that discourage voting or question a current election's legitimacy.
Critics have said its policy enables misinformation, while Meta chief executive Mark Zuckerberg has said the firm does not want to stifle political speech.
Benavidez said Meta and other major platforms were being lax about tackling posts and groups that promote what she called the "big lie" - the notion pushed by former U.S. President Donald Trump and his allies that the 2020 U.S. presidential poll was rigged and that U.S. elections cannot be trusted.
Meta declined to comment in response. The firm has met with Change the Terms, and Meta's president of global affairs, Nick Clegg, has said it "invests a huge amount to protect elections".
Social media firms have rolled out a range of policies to tackle lies and misleading information around elections.
TikTok has said its rules ban "election misinformation, harassment ... hateful behavior, and violent extremism", and it has partnered with outside fact checkers and election experts ahead of the vote.
Google and YouTube have also announced rules against "content interfering with democratic processes", and said they will prioritize authoritative national and local news sources when users search for information about the elections.
Personal data questions
Concerns over online election manipulation grew after it emerged that UK firm Cambridge Analytica, which worked on Trump's 2016 presidential election campaign, had surreptitiously gained access to millions of voters' Facebook data which was used to target users with political messaging.
Since then, Facebook has paid out billions of dollars in fines to regulators in connection with the incident, and said it has banned apps on the platform from requesting user data that is not necessary or relevant for their product.
Like Facebook, other social media firms have put some restrictions in place for political adverts following the scandal. TikTok bans them entirely, while YouTube limits what data candidates can use to target voters.
But many of the core problems have not been addressed, said David Carroll, who spent years fighting Cambridge Analytica in court to see all the personal data it had accessed from his Facebook account.
"Still, we don't have visibility or control into the supply chain of data on our social media platforms," said Carroll, an associate professor of media design at The New School, a university in New York.
The United States has not passed national privacy laws or enacted strong restrictions on how data can be harvested and re-sold by advertisers, said Carroll, who will speak at the Thomson Reuters Foundation's Trust Conference in London on Oct. 26.
"There's been a lot of talk, not a lot of action," he added, saying that had left firms to make their own choices about how to guard against abuse during election time.
Algorithms 'prioritize engagement'
A core problem is that platforms run on algorithms that too often prioritize user engagement above all other concerns, said Zamaan Qureshi, a policy advisor with the Real Facebook Oversight Board, a group of experts critical of Facebook.
Although Facebook has the potential to tweak its code to prioritize authoritative news sources ahead of sensational or potentially misleading information, it does not deploy those tools often enough, Qureshi said.
"Platforms keep saying: trust us, trust us, we know what we're doing - but that line doesn't hold any weight anymore," he said.
Facebook said that it does step in and prevent content that its third-party fact checkers deem false from going viral.
Qureshi said Facebook's record in elections around the world was a cause for concern, pointing to recent polls in Brazil and Kenya, where he said the platform did not sufficiently rein in misinformation and hate speech.
In Brazil, nonprofit Global Witness submitted five Portuguese-language Facebook adverts containing false election information in an investigation to test the site's election integrity policies. All five were approved, it said.
Kenya's ethnic cohesion watchdog threatened to shut down the platform in July unless it took speedy action to tackle hate speech and incitement relating to an August election.
Facebook said it had invested significant resources to tackle hate speech and misinformation in both countries.
When it comes to the upcoming U.S. midterms, "it may be too late for any of the major firms to implement a meaningful mechanism to limit hate or lies," said Benavidez.
"But this is not a dress rehearsal."
(Reporting by Avi Asher-Schapiro @AASchapiro, Editing by Sonia Elks.)
Context is powered by the Thomson Reuters Foundation Newsroom.