DCMS minister Nadine Dorries

Why the Online Safety Bill threatens free speech

We should protect vulnerable users of the internet, but granting ministers and tech giants sweeping powers of censorship is not the way to do it
June 16, 2022

Readers confronting the 213 pages of the Online Safety Bill will feel an urge to put a cold towel on their heads. Complicated is an understatement. I am reminded of Henry David Thoreau’s dictum “simplify, simplify.” But there are more fundamental objections.

This legislation, currently in the committee stage in the Commons, will regulate UK internet content by imposing legal obligations on internet service providers (ISPs)—principally the big ones like Facebook, Twitter and Google, but also many smaller ones. These obligations are framed as “duties of care” to users who access their content. The ISPs will be required to restrict the freedom to upload “illegal” or “harmful” material, and the freedom of others to access it.

The problems begin here. The types of content in question are indistinct. For example, content can be restricted as illegal if the ISP merely “reasonably believes” it contains words or images that constitute an offence listed in the bill, such as terrorism, child sexual exploitation or attempting to defraud or otherwise harm individuals. Reasonable belief is enough; the content does not actually have to be illegal.

Other categories are yet more elusive, notably material that is legal but “harmful to adults”—that is, content deemed to pose a “material risk of significant harm to an appreciable number of adults in the UK.” Beyond this it is not defined in the legislation. After the bill passes into law, the culture secretary will be empowered to define such content however they see fit. Parliament should not give such sweeping powers to Whitehall and Silicon Valley with no idea how they will be used.

The response to objectionable material can include taking it down, limiting access to it or suspending users who upload it. The ISP will track down the content—using “technology” because of the sheer volume involved—and identify the restriction to be applied. The ISPs will be regulated by guidance issued by Ofcom. Because Ofcom can penalise non-compliant ISPs and impose huge fines, and the government can amend the guidance as well as oversee Ofcom’s performance, the ISPs will effectively be doing the state’s bidding.

This regime will interfere with citizens’ rights to send and receive information on the net. The state must therefore ensure that the resulting restrictions are justifiable under the freedom of expression principles in Article 10 of the European Convention on Human Rights, incorporated into our law by the Human Rights Act.

These principles recognise that not all expression is entitled to protection, for example if it is simply too damaging in a pluralist democracy with respect for human rights (such as hate speech and incitement to violence). They also recognise that other speech can be limited, but only on the basis of an informed assessment of the expression at issue and its value, and only where restriction is strictly necessary in a democracy. Critically, human rights law also recognises that much offensive speech must be protected: a state that respects free speech must uphold the right to publish content some will find offensive.

The central flaw in the government’s online safety regime is that it does not give the required Article 10 protection. True, the bill has been certified as compliant with the Human Rights Act by the government, but this is a box-ticking exercise. Experience tells us to treat state certification with caution, especially from a government that wants to abolish the Human Rights Act itself.

The idea that this legislation will never lead to violations of free speech rights is untenable. The bill imposes too many obligations and covers far too wide a spread of content, using open-ended and vague terminology. ISPs will be pressured to over-moderate by the guidance, the risk of sanctions and a desire to keep the government sweet. Restricted content will inevitably include much that it is not necessary to censor in a democracy. This will only get worse once the culture department begins setting out in regulations the categories of content it deems legal but harmful.

Internet service providers will effectively be doing the state’s bidding

We cannot rely on ISPs, or their technology, to make the nuanced judgments required to protect free speech. They cannot assess whether someone is guilty of the complex speech offences in the Terrorism Act. Technology does not “do” satire. It can end up targeting LGBTQ+ content and information about sexual health. Content in Arabic or Urdu is over-moderated. Tyranny of the algorithm looms.

The bill does require ISPs to consider the importance of journalistic freedom of expression. But these are weak obligations, applying only to UK-linked content. Article 10 requires strong protection for all expressive content on matters of public concern, wherever it originates.

There is a case for state restrictions on some online material, especially to protect children and vulnerable adults. The bill will catch much of this without violating Article 10 rights, though more by luck than judgment. The government should have limited itself to uncontentious, clearly defined harmful content and ensured restrictions are compatible with freedom of speech. But performative lawmaking, ensuring politicians can claim to be “robustly” tackling social ills, is in fashion. Unfortunately, this bill is a classic example of such legislative overreach.