The U.S. Supreme Court might end the global internet as we know it

opinion

Beatrice Gonzalez and Jose Hernandez, the mother and stepfather of Nohemi Gonzalez, who was fatally shot in a 2015 rampage by Islamist militants in Paris, pose for a picture with a member of their legal team outside the U.S. Supreme Court in Washington, U.S., February 16, 2023, days before justices are scheduled to hear arguments in Gonzalez v. Google, a case challenging the federal protections that free internet and social media companies from responsibility for content posted by users. The family argues that Google and its subsidiary YouTube bear some responsibility for their daughter’s death. REUTERS/Jonathan Ernst

If the Court limits the application of Section 230, the impact on free expression for everyone around the world will be severe

Barbora Bukovská is the senior director for law and policy at ARTICLE 19

On 21 and 22 February, the U.S. Supreme Court will hear arguments in two cases that have the potential to fundamentally change the internet as we know it. The Court will decide whether one of the foundations of free speech on the internet – the protection of platforms from liability for what others post online, provided by Section 230 of the Communications Decency Act – can continue to exist. If that protection is removed, platforms will have to screen and censor content generated by billions of users, not just in the USA but around the world.

The two cases in question, Gonzalez v. Google and Twitter v. Taamneh, were initiated by families whose loved ones were killed in ISIS attacks in Paris and Istanbul. They argue that Google and Twitter did not act aggressively enough to remove ISIS content. Though different in scope, both cases deal with similar questions: when platforms host terrorist content and show it to users, should they be held accountable under the Anti-Terrorism Act, and can their immunity from liability under Section 230 be restricted?


Protection from liability might seem like something that protects Google, Twitter and other companies, giving them free rein to tolerate problematic content on their platforms. It is certainly true that Big Tech frequently fails to tackle content such as online abuse and hate speech, and that platforms are neither consistent in nor transparent about their conduct.

However, shielding them from liability for content generated by users is not about protecting companies. It is about protecting freedom of expression for all of us. Section 230 is the foundation of that protection. With the majority of the big platforms located in the United States, it has shaped much of the global internet we interact with today. It has allowed activists to organize, and dissidents and investigators to share and access information, raise awareness, monitor violations and help protect human rights around the world.

If the Supreme Court sides with the families, and limits the application of Section 230, the impact on free speech across the globe will be severe.

Platforms will suddenly face the prospect of thousands of lawsuits if they fail to remove “terrorist” content anywhere in the world. Definitions of “terrorism” – let alone of what exactly constitutes praise or support for a terrorist organization – are notoriously nonexistent or vague. In many countries they are conflated with criticism of government. In Russia, that conflation means censoring topics such as criticism of the war against Ukraine. In Turkey, journalists and academics have been prosecuted on terrorism charges for their expression or journalistic work.

To avoid liability, companies will have to censor content en masse and monitor everything we post on their platforms worldwide. The task will be enormous and only possible with heavy reliance on automated content moderation tools.

Automated systems cannot make the complex assessment of whether certain speech is illegal. Such assessment relies on knowledge of context – political, social and cultural. These systems cannot detect nuance or irony, or judge whether content is in the public interest – and they certainly cannot do so in all of the world’s languages. Relying on them will result in lawful speech, including speech from marginalized communities and from those who criticize violent extremism, being arbitrarily denied a platform.

These concerns are not hypothetical. ARTICLE 19 has been campaigning on behalf of the “Missing Voices”: people whose content has been taken down or whose accounts have been blocked because of the errors or biases of automated systems. From a Palestinian journalist having his YouTube videos about the Israeli-Palestinian conflict taken down to a Mexican artist having her art censored for depicting ‘sexual activities’, automated systems have been erasing people’s work and activism. Their decisions are often opaque and difficult to appeal, let alone reverse.

Documenting human rights violations has also been a casualty of automated moderation. The Syrian Archive project, which relies on content posted to platforms to build criminal cases and conduct human rights research, has tracked the removal of hundreds of thousands of posts documenting potential war crimes and human rights violations. Decisions taken by algorithms designed in Silicon Valley have effectively wiped out essential evidence that could have been used to seek justice for the victims.

Pressuring companies to screen all content and to remove additional categories of speech will also create an environment that further prioritizes speed over accuracy, with content taken down quickly and with little transparency.

Finally, the ultimate question is whether it should be up to the Supreme Court to decide on such fundamental questions as platform regulation. There is no doubt that social media platforms and their recommendation systems can have a hugely negative impact: they can facilitate and spread hate speech, extremism, bullying and other harmful content. The amount of power these companies wield over individuals, and their dominance in multiple markets, clearly warrants serious scrutiny. 

Such issues require careful consideration of competing interests: a legislative task, not a judicial one. The legislature, not the judiciary, is best placed to design comprehensive reform that addresses those legitimate concerns. It must do so while continuing to respect freedom of expression, privacy and other human rights.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.




