UK's Online Safety Bill is nearly ready and not everyone is happy
Former Facebook employee Frances Haugen gives evidence to the UK Parliament's Joint Committee on the Draft Online Safety Bill, which is examining plans to regulate social media companies, in London, Britain, October 25, 2021. UK Parliament 2021/Annabel Moeller/Handout via REUTERS
What’s the context?
Social media platforms that fail to remove illegal or damaging material could face big fines under the regulations
LONDON - Britain's long-delayed push to make Big Tech firms remove harmful online content could soon become law, paving the way for hefty penalties against social media platforms and others found flouting the rules.
But as the Online Safety Bill enters the final stages before approval, it has faced criticism from all quarters - from child welfare campaigners who say it does not go far enough to tech companies warning of increased state surveillance.
The bill, which has already been through several rewrites following criticism from pressure groups and changes in government, returned to parliament this week to undergo three more readings and any final amendments before becoming law.
That could happen as soon as spring 2023, with the "relevant duties" becoming enforceable around mid-2024, the Ofcom communications regulator said in July. If the bill is not passed by then, it would be dropped entirely and the process would have to begin again.
Here's what you need to know:
What is the Online Safety Bill?
Social media companies have long been criticised for not doing enough to tackle illegal and harmful content on their platforms.
Easy access to damaging material, particularly among young people, came into the spotlight after the death of 14-year-old schoolgirl Molly Russell in 2017, which her parents said came after she had viewed online material on depression and suicide.
That same year, the government published an Internet Safety Strategy to examine "the use of technical solutions to prevent online harms". This eventually became the Online Harms Bill, later called the Online Safety Bill.
What are the bill's main aims?
The draft law addresses a range of issues on social media sites, including minimising fraudulent advertisements, ensuring pornographic content is not accessible by children, and giving adult users more control over the content they are exposed to.
In the most serious cases, companies could also be banned from operating in Britain if they do not do everything reasonably practical to eradicate harmful content.
Companies will have to use age-verification services to ensure children are not exposed to what the legislation calls "legal but harmful" material - content that is not against the law, but could be seen to encourage abuse or trauma.
Tech companies will also be required to publish a summary of their risk assessments concerning the dangers posed to children, as well as giving Ofcom the power to publish details of enforcement action it takes against them.
"Young people will be safeguarded, criminality stamped out and adults given control over what they see and engage with online," Digital Secretary Michelle Donelan said in a statement last month.
Are there similar laws elsewhere?
The Online Safety Bill is similar to legislation being developed in Europe.
The European Union's Digital Services Act (DSA) includes a ban on targeted advertising aimed at children, and prohibits algorithmic promotion of content that could be harmful for minors such as videos related to eating disorders or self-harm.
This year, Singapore passed regulations to address online content that incites violence, sexual abuse, self-harm, and harms to public health and security.
However, critics have said Singapore's vague definitions of "egregious content" risk overly broad enforcement that could infringe people's freedom of expression.
What are the latest changes to the UK bill?
The Online Safety Bill initially aimed to restrict "legal but harmful" content accessed by adults by requiring social media companies to offer users more tools to control their feeds.
The bill had also said glorification of eating disorders, racism, anti-Semitism or misogyny not meeting the criminal threshold could be blocked by human moderation, community moderation, or sensitivity and warning screens.
But the government said this week it was scrapping the "legal but harmful" definition for adult internet users, saying it was not an effective framework for moderating content seen by over-18s.
It warned that the definition could also encourage social media companies to take down content at the behest of authorities in a way that interfered with users' freedom of speech.
But some campaigners say the omission has weakened the law.
"Social media sites will not be forced to remove legal-but-harmful suicide content - a hugely backward step," said Julie Bentley, head of Samaritans, an emotional support charity.
"Increasing the controls that people have is no replacement for holding sites to account through the law, and this feels very much like the government snatching defeat from the jaws of victory," she added.
Why are some people unhappy with the Online Safety Bill?
There are three main issues: how content is monitored, how social media companies will verify the age of users, and potential threats to the security of encrypted messaging platforms and the privacy of users.
While the "legal but harmful" restrictions are being removed for adults, they remain in place for children, meaning social media companies will have to gather more data on their users to verify their age.
"They are likely to use biometrics to guess the age of people - measuring people's hands, heads, and also checking people's voices," said Monica Horten, a policy manager for freedom of expression at the advocacy group Open Rights Group.
"We don't know how this technology works."
The bill would also require end-to-end encrypted platforms such as WhatsApp, Signal and Apple Messages to scan all photos against a database to check for child sexual abuse material.
Legal experts and technology executives have said this would amount to de facto government surveillance.
"The provisions in the Online Safety Bill that would enable state-backed surveillance of private communications contain some of the broadest and (most) powerful surveillance powers ever proposed in any Western democracy," lawyers Matthew Ryder and Aidan Wills of Matrix Chambers wrote in a legal opinion.
"No communications in the UK – whether between members of parliament, between whistleblowers and journalists, or between a victim and a victims support charity – would be secure or private."
(Reporting by Adam Smith; Editing by Helen Popper)
Context is powered by the Thomson Reuters Foundation Newsroom.