By Akash Thiara, a Placement Student with the Electoral Reform Society from the University of Nottingham.
After a long delay, the government has finally released plans to tackle ‘online harms’.
The online harms bill, which was first proposed over a year ago, sets out a framework for online companies to remove illegal content of a sensitive and extreme nature, such as material promoting terrorism or abuse.
While the new regulations do seem to be a step in the right direction in protecting users against harmful content, some key questions need answering.
We have seen how the system of allowing ‘legal but harmful’ content to be regulated by companies just isn’t working when it comes to misinformation – particularly worrying during a pandemic but also a real threat to informed political debate.
What the government has proposed
Ofcom, the regulatory body for broadcasting and telecommunications, has been appointed to oversee and enforce the new set of guidelines. This includes the power to levy fines of up to £18m or 10 percent of global turnover, and to block services from the UK entirely, for any company which breaches the new regulations.
The regulations will establish a statutory duty of care for online companies – such as social media firms – to their users. The companies will be legally obliged to identify, remove and limit the spread of illegal content.
To put this into perspective, if for example Facebook failed to comply with the new rules and did not minimise harmful content on their site, they could be made to pay a fine of up to $7bn (£5bn) and risk being unable to operate in the UK.
One issue that has caused concern is that the government proposes to make social media companies draft their own terms and conditions to justify how they will approach tackling ‘legal but harmful’ content, including misinformation.
While it’s good to see the government take notice of the dangers of false news online, tech giants shouldn’t be able to set their own rules.
Further improvements can be made
In its response to the online harms white paper, the government said that the aim of the bill is to ‘keep people safe online and promote a thriving democracy’. However, some of the proposals regarding disinformation essentially miss the point.
For instance, delegating much of the responsibility for governing ‘legal but harmful’ content to big tech companies is arguably little better than the system that is already in place. It is evident that this does not work. We could see mis- and disinformation defined too narrowly, and allowed to spread across social media platforms.
By restricting the regulations to ‘disinformation and misinformation that could cause significant harm to an individual’, the government is effectively exempting disinformation that is damaging to our democracy. For example, spreading false rumours about election fraud could dramatically undermine confidence in voting, as we’ve seen in the US. The rise of conspiracy theories such as QAnon should also give cause for concern. While not necessarily harmful to an individual, these beliefs can spread racist ideas or undermine vital public health work such as vaccination programmes.
The government must be more transparent about what exactly is meant by ‘harmful content’ and what will fall into this category, while stepping up efforts against mis- and disinformation, before the legislation is put to parliament this year.
If the online harms bill is poorly implemented, we risk missing a vital opportunity to ensure online debate is free, fair and factual.