Democracy in Decay

The Online Safety Act, a system of mass censorship unparalleled in the Western world

THE UK Online Safety Act was approved by the British parliament with broad cross-party support and came into law in October 2023. It was intended to make the use of internet services ‘safer for individuals in the United Kingdom’. The Act purported to achieve this by ‘[requiring] providers of services regulated by this Act to identify, mitigate and manage the risks of harm’ from ‘(i) illegal content and activity, and (ii) content and activity that is harmful to children’. The main enforcer of these duties is Ofcom (‘Office of Communications’) which regulates communications services in the UK.

A strong case can be made for articulating more clearly the obligations of internet service providers to make their algorithms more transparent, provide certain content controls to their customers, and take reasonable measures to protect users from degrading and illegal content, particularly content that may be viewed by children. Who would oppose such a noble goal as the protection of children from harmful and exploitative content and communications?

On the other hand, laws should not be judged on the lofty intentions of their authors alone, but on the way they are liable to be interpreted and applied by human beings acting under realistic constraints and incentives. The 300-page Online Safety Act is certainly not as innocent and ‘safe’ as it sounds. UK civil liberties group Big Brother Watch believes the Act as it stands ‘will set free expression and privacy back decades in this country’. Reform UK has promised to do all in its power to scrap the Act completely.

A careful reading of the Act reveals that, in spite of significant improvements secured before its passage, it is anything but neutral in its impact on freedom of speech. Most notably, it radically alters the constraints and incentives under which social media companies operate, in particular their content moderation policies, by shifting the onus for ensuring that content produced and disseminated on a service is legally compliant away from the users who post it and on to the service provider. These changes may well bring some modest benefits for child protection. But those benefits are limited and are secured at an unacceptably high cost to freedom of speech.

Age verification, one of the main mechanisms for shielding children from inappropriate content, adds another layer of intrusive bureaucracy to the internet but will have limited efficacy, given that many under-18s are fully capable of bypassing age verification requirements: they can use a VPN service to make it appear that they are connecting from outside the United Kingdom, and therefore beyond the reach of the Act's age checks. Indeed, when the age verification requirements took effect in July 2025, VPN sign-ups in the UK reportedly surged by as much as 1,400 per cent.

While the intention of protecting children from harm is obviously laudable, the British government chose to pursue this aspiration at the cost of creating a regulatory environment distinctly unfriendly to freedom of expression. Unlike the traditional machinery of censorship, in which the government acted directly upon citizens, the UK’s Online Safety Act saves the government the trouble of censoring citizens directly, by imposing somewhat vague obligations upon service providers to ‘mitigate risk’ by flagging and removing illegal content and content deemed harmful to minors.

Now, on its face, it might seem perfectly reasonable to require a social media company to mitigate risks of illegal content and content that is potentially harmful for under-age users. But there are several features of this Act and the monitoring mechanisms contained within it that will make people in the UK who post lawful content in good faith more vulnerable than ever to arbitrary censorship.

To begin with, the idea of ‘sufficient’ compliance with the Act, most notably the obligation to mitigate risks of exposure to illegal content through appropriate moderation policies, is hard to define with any precision, and its operational meaning will ultimately be at the discretion of Ofcom. This opens the prospect of a worrying level of discretionary power on the part of government agency officials over the limits of speech across the entire UK digital public sphere – exactly the same problem we see in the EU’s Digital Services Act, which imposes similarly vague ‘due diligence’ duties on ‘very large online platforms’.

This broad discretionary power is even more worrying given that the penalties for non-compliance with the Act are extremely steep – up to £18 million or 10 per cent of a company’s global turnover, whichever is greater. In the absence of a clear idea of what ‘sufficient’ compliance might entail in practice, or how Ofcom might interpret it, the logical thing for a social media company that wants to protect its ‘bottom line’ is to err on the side of taking down content whenever there is the slightest doubt about its legality.

The net effect of the incentive to err on the side of suppressing potentially illegal content is that a great deal of perfectly legitimate and lawful content will be swallowed up by the censorship machine. Certain categories of illegal content, such as ‘terrorism’ content, child sexual exploitation, certain types of violent content, and criminal ‘incitement to hatred’ offences, are deemed to constitute ‘priority’ illegal content, and service providers are therefore required to take steps to reduce exposure to such content pre-emptively (say, by suppressing its visibility for its intended audience) rather than reactively (say, in response to a complaint or allegation from a user).

Because this must be done at scale, and a finding of non-compliance would be extremely costly to the company, it is inevitable that AI-driven censorship algorithms will be used to shadow-ban or suppress content deemed suspect or ‘risky’. The problem is that AI-based models analysing massive amounts of data, particularly if trained to err on the side of intervention, will suppress content that is lawful and reasonable simply because it contains certain ‘red flag’ language patterns or keywords.
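To make the false-positive problem concrete, here is a minimal sketch of the kind of crude keyword-based pre-screening that at-scale moderation can degenerate into. The keyword list, example posts and function names below are invented purely for illustration; nothing here is drawn from any real platform’s moderation system.

```python
# Hypothetical illustration of keyword-based pre-screening (not any real platform's system).
# A filter tuned to 'err on the side of intervention' flags lawful posts that merely
# mention sensitive topics, alongside anything genuinely problematic.

RED_FLAG_KEYWORDS = {"terrorism", "grooming", "incitement", "exploitation"}

def flag_for_review(post: str) -> bool:
    """Return True if the post contains any 'red flag' keyword."""
    words = {word.strip(".,!?:;'\"").lower() for word in post.split()}
    return bool(words & RED_FLAG_KEYWORDS)

# Invented examples: all three are lawful commentary, yet two are flagged.
posts = [
    "The town council debated new youth safeguarding measures last night.",
    "This investigative report examines how grooming gangs evaded police scrutiny.",
    "A historian's lecture on the origins of terrorism in the twentieth century.",
]

for post in posts:
    status = "FLAGGED" if flag_for_review(post) else "allowed"
    print(f"{status}: {post}")
```

Real-world classifiers are statistical models rather than keyword lists, but the underlying trade-off is the same: the lower the tolerance for missed illegal content, the more lawful discussion of the very same subjects gets swept up with it.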

Lest all of this sound like the speculative musings of a philosopher, it is worth quoting from an article by Chris Best, the CEO and co-founder of Substack, one of the few blogging platforms that truly championed free speech during the pandemic, on how the UK Online Safety Act makes it harder than ever for a business like his to live up to its commitment to create an environment supportive of free speech:

‘In a climate of genuine anxiety about children’s exposure to harmful content, the Online Safety Act can sound like a careful, commonsense response. But what I’ve learned is that in practice, it pushes toward something much darker: a system of mass political censorship unlike anywhere else in the Western world. What does it actually mean to ‘comply’ with the Online Safety Act? It does not mean hiring a few extra moderators or adding a warning label. It means platforms must build systems that continuously classify and censor speech at scale, deciding – often in advance of any complaint – what a regulator might deem unsuitable for children. Armies of human moderators or AI must be employed to scan essays, journalism, satires, photography, and every type of comment and discussion thread for potential triggers. Notably, these systems are not only seeking out illegal materials; they are trying to predict regulatory risk of lawful, mainstream comment in the face of stiff penalties.’

To make this a little more concrete, let’s consider at-scale censorship from the covid era. A large volume of legitimate debate and commentary, including my own, was shut down by ‘public health’ moderation algorithms. For example, when I attempted to upload a blog post to Medium about my experience of being censored for critically discussing controversial issues such as vaccination and masking, my post was immediately taken down based on the allegation that it constituted ‘covid misinformation’. So apparently even discussing a past episode of censorship on a different platform was identified by the content moderation algorithms as a ‘misinformation’ offence.

Similarly, when I documented spectacular cases of Big Pharma fraud settlements on LinkedIn, my account was suspended for ‘public health misinformation’, even though what I stated was indisputably true and a matter of public record.

Now, imagine if someone writes a hard-hitting social media post on child grooming gangs, or a bit of harmless satire with some sexual innuendo, or a critical discussion of the mindset of this or that terrorist movement, or a candid report of the sentiments of a small town overwhelmed by immigration: what sort of automated content moderation policy, designed to minimise exposure to illegal child sexual exploitation content, terrorism or criminal incitement to hatred toward racial minorities, could reliably distinguish legitimate commentary from illegal content?

Surely there is a high likelihood that a significant number of lawful posts on these topics would be taken down or shadow-banned by an AI-driven content censorship machine, much as the slightest criticism of vaccination or mask policy triggered censorship during the covid era? Why should we believe that the oversensitivity of the covid censorship machine, which even one of its own architects, Meta/Facebook CEO Mark Zuckerberg, has acknowledged as problematic, will not be replicated by companies attempting to comply with the UK Online Safety Act?

This article appeared in The Freedom Blog on January 18, 2026, and is republished by kind permission.
