Will the US Supreme Court make it harder to remove online hate speech?

opinion

View of the U.S. Supreme Court building in Washington, U.S., January 8, 2024. REUTERS/Julia Nikhinson

The cases will examine whether Florida and Texas can pass laws preventing social media companies from moderating content posted by users

Barbora Bukovská is senior director for law and policy, ARTICLE 19

On 26 February, the US Supreme Court will hear oral arguments in two crucial cases, Moody v. NetChoice and NetChoice v. Paxton, deciding whether Florida and Texas can enforce laws preventing social media companies from moderating content posted by users.

In the lead-up to the 2020 US elections, conspiracy theories and disinformation were widely shared on social media, including by then-President Donald Trump and many conservative sources. YouTube, Twitter (now X) and Facebook (now Meta) responded by restricting, flagging, or removing content altogether in accordance with their terms of service. As a result, the Republican Party accused the platforms of censoring conservative voices.

In 2021, Texas and Florida passed laws seeking to limit content moderation on a select group of platforms perceived to have a ‘leftist’ agenda and anti-conservative bias. 

Both laws are now the subject of lawsuits brought before the Supreme Court by two trade associations, NetChoice and the Computer & Communications Industry Association. The Court's decision in these cases could set a dangerous precedent for free speech, with long-lasting consequences beyond the United States.

Content moderation on social media is notoriously problematic. To perform it at scale, companies rely on automation, using algorithms that are incapable of understanding context and nuance. Too often, this leads to mistakes and the over-removal of perfectly legitimate content. Many decisions lack proper justification, and users struggle with platforms’ ineffective appeal mechanisms.

The Florida and Texas laws, however, do little to address this fundamental problem. Both claim to be ‘anti-censorship’ and, on the surface, might appear content-neutral. In reality, it is not hard to see that they are inherently politically motivated. The laws impose ‘must-carry’ obligations, effectively dictating which content platforms may and may not take down or restrict. In doing so, they seek to eliminate any discretion platforms have in their content moderation practices and to allow governments to control public debate.

Both laws are incredibly vague and broad, with ample room for interpretation as to what exactly ‘censorship’ means - leaving platforms to guess what they might be liable for.

Those unclear definitions are coupled with an extraordinary amount of power handed to state attorneys general, making the laws prone to politicised enforcement. The ambiguity gives state enforcers wide discretion in choosing the targets of speech regulation. Attorneys general are free to turn a blind eye to some content moderation decisions and vigorously investigate others, based purely on political interests. Even more worryingly, both laws enable attorneys general to act on the mere possibility of a violation - making it easy for them to go after content moderation decisions that do not support their preferred political viewpoint.

All this means that social media platforms will have no way to predict how the laws will be enforced. This will lead either to over-moderation or to platforms abandoning moderation altogether, deeming it too risky given the potential exposure to litigation and liability. Eliminating all content moderation would transform how most people use social media, effectively rendering many platforms unusable - and threatening users’ right to receive information.

The laws, if allowed to stand, could serve as a blueprint for other governments, beyond the US, on how to interfere with social media platforms’ policies in a way that serves political interests. 

Those concerns are not purely hypothetical. When Meta, Twitter and YouTube decided to suspend Donald Trump’s accounts in the aftermath of the January 6, 2021 attack on the US Capitol, populist leaders around the world raised the alarm and proposed similar regulations designed to curb platforms’ ability to enforce community standards. In Mexico, President Andrés Manuel López Obrador vowed to lead ‘an international effort’ against what he considered ‘platform censorship’. In Poland, the then-ruling conservative coalition proposed a law that would make it illegal to delete content that did not break Polish law, with oversight powers given to a new ‘Free Speech Council’ whose members were not required to be politically independent. And in Brazil, then-President Jair Bolsonaro signed a decree temporarily banning social media platforms from removing certain types of content, including misinformation about Covid-19 and the 2022 presidential elections.

As more political discourse moves online, governments around the world will continue to look for new ways to restrict critical expression and promote the viewpoints they favour. From Turkey to Sri Lanka, ‘fake news’ and ‘online safety’ laws are proliferating, giving states the power to prosecute users for expressing undesirable views on social media, and to dictate what content is deemed ‘harmful’ or ‘dangerous’ and should be taken down.

‘Must-carry’ laws like the ones passed by Florida and Texas lawmakers are the other side of the same coin and serve the same purpose - giving governments ever more control over public debate online. If the Supreme Court finds them constitutional, it would set a dangerous precedent for other governments looking for new tools to shape the public narrative to suit them. The US must not hand them a playbook.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.

