UK's Online Safety Bill is nearly ready and not everyone is happy

Former Facebook employee Frances Haugen gives evidence to the Joint Committee on the Draft Online Safety Bill of UK Parliament that is examining plans to regulate social media companies, in London, Britain October 25, 2021. UK Parliament 2021/Annabel Moeller/Handout via REUTERS

What’s the context?

The UK government will only require social media companies to scan content where "technically feasible"

LONDON - Britain's government appeared to back down in the latest tech encryption row, saying social media companies would only be forced to scan encrypted messages to stop child abuse on their platforms "if technically feasible".

The Online Safety Bill had threatened to break end-to-end encryption, one of the key technologies behind private communications, in favour of scanning devices for child sex abuse content.

But speaking in the House of Lords, Lord Parkinson of Whitley Bay said this week that Ofcom would not be able to implement that part of the law "if the technology does not exist" or is not "technically feasible".

"I would call this a victory, not a defeat. And am grateful to the UK government for making their stand clear. This is a really important moment, even if it’s not the final win", said Meredith Whittaker, the president of Signal.


Technology minister Michelle Donelan denied on Thursday that the bill had been watered down in the final stages before it becomes law.

WhatsApp, Signal, Element and other encrypted messaging services had previously warned in an open letter about the bill's effect on private communications, arguing that no technology exists to scan encrypted messages without undermining encryption itself.

The bill will be enforceable around mid-2024, communications regulator Ofcom said in July 2022. But if the bill is not passed before then, it would be dropped entirely and the process would have to begin anew.

Here's what you need to know:

What is the Online Safety Bill?

Social media companies have long been criticised for not doing enough to tackle illegal and harmful content on their platforms.

Easy access to damaging material, particularly among young people, came into the spotlight after the death of 14-year-old schoolgirl Molly Russell in 2017, which her parents said came after she had viewed online material on depression and suicide.

That same year, the government published an Internet Safety Strategy to examine "the use of technical solutions to prevent online harms". This eventually became the Online Harms Bill, later called the Online Safety Bill.

What are the bill's main aims?

The draft law addresses a range of issues on social media sites, including minimising fraudulent advertisements, ensuring pornographic content is not accessible by children, and giving adult users more control over the content they are exposed to.

In the most serious cases, companies could also be banned from operating in Britain if they do not do everything reasonably practical to eradicate harmful content.

Companies will have to use age-verification services to ensure children are not exposed to what the legislation calls "legal but harmful" material - content that is not against the law, but could be seen to encourage abuse or trauma.

Tech companies will also be required to publish a summary of their risk assessments concerning the dangers posed to children, and Ofcom will have the power to publish details of enforcement action it takes against them.

"Young people will be safeguarded, criminality stamped out and adults given control over what they see and engage with online," Digital Secretary Michelle Donelan said in a statement last month.

Are there similar laws elsewhere?

The Online Safety Bill is similar to legislation being developed in Europe.

The European Union's Digital Services Act (DSA) includes a ban on targeted advertising aimed at children, and prohibits algorithmic promotion of content that could be harmful for minors such as videos related to eating disorders or self-harm.

This year, Singapore passed regulations to address online content that incites violence, sexual abuse, self-harm, and harms to public health and security.

However, critics have said Singapore's vague definitions of "egregious content" risk overly broad enforcement that could infringe people's freedom of expression.

What are the latest changes to the UK bill?

The Online Safety Bill initially aimed to restrict "legal but harmful" content accessed by adults by requiring social media companies to offer users more tools to control their feeds.

The bill had also said glorification of eating disorders, racism, anti-Semitism or misogyny not meeting the criminal threshold could be blocked by human moderation, community moderation, or sensitivity and warning screens.

But the government scrapped the "legal but harmful" definition for adult internet users, saying it was not an effective framework for moderating content seen by over-18s.

It warned that the definition could also encourage social media companies to take down content at the behest of authorities in a way that interfered with users' freedom of speech.

But some campaigners say the omission has weakened the law.

"Social media sites will not be forced to remove legal-but-harmful suicide content - a hugely backward step," said Julie Bentley, head of Samaritans, an emotional support charity.

"Increasing the controls that people have is no replacement for holding sites to account through the law, and this feels very much like the government snatching defeat from the jaws of victory," she added.

Why are some people unhappy with the Online Safety Bill?

There are three main issues: how content is monitored, how social media companies will verify the age of users, and potential threats to the security of encrypted messaging platforms and the privacy of users.

While the "legal but harmful" restrictions are being removed for adults, they remain in place for children, meaning social media companies will have to gather more data on their users to verify their age.

"They are likely to use biometrics to guess the age of people - measuring people's hands, heads, and also checking people's voices," said Monica Horten, a policy manager for freedom of expression at the advocacy Open Rights Group.

"We don't know how this technology works."

The bill would also require end-to-end encrypted platforms like WhatsApp, Signal and Apple's Messages to scan all photos against a database to check for child sexual abuse material.

Legal experts and technology executives have said this would amount to de facto government surveillance.

"The provisions in the Online Safety Bill that would enable state-backed surveillance of private communications contain some of the broadest and (most) powerful surveillance powers ever proposed in any Western democracy," lawyers Matthew Ryder and Aidan Wills of Matrix Chambers wrote in a legal opinion.

"No communications in the UK – whether between members of parliament, between whistleblowers and journalists, or between a victim and a victims support charity – would be secure or private."

This article was updated on September 7, 2023, at 11:38 GMT to include latest updates on enforcing end-to-end encryption.

(Reporting by Adam Smith; Editing by Helen Popper and Zoe Tabary)


Context is powered by the Thomson Reuters Foundation Newsroom.

Our Standards: Thomson Reuters Trust Principles


