How do we tackle the ‘infodemic’ of misinformation around coronavirus?

Author:
Guest Author. The views expressed are those of the author and do not necessarily represent those of the Electoral Reform Society.

Posted on the 27th April 2020

The proliferation of false, misleading and harmful information about the coronavirus has been described as an ‘infodemic’ by the World Health Organisation. Governments, social media companies, and others have taken concerted action against it.

ERS Research and Policy Officer Michela Palese and the Constitution Unit’s Alan Renwick examine these responses and consider potential lessons for tackling online misinformation more broadly. 

COVID-19 is rightly dominating the international agenda. Besides the crucial health, economic, and social dimensions, considerable attention is being paid to the information on COVID-19 that is circulating online.

Ever since the virus emerged, false, misleading and/or harmful information has spread, especially online. Newsguard, which ranks websites by trustworthiness, found that, in the 90 days to 3 March, 75 US websites publishing coronavirus misinformation received ‘more than 142 times the engagement of the two major public health institutions providing information about the outbreak’. Ofcom found that ‘[a]lmost half of UK online adults came across false or misleading information about the coronavirus’ in the last week of March. The World Health Organisation (WHO) described the misinformation as an ‘infodemic – an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.’

The capacity of social media and 24/7 news to spread misinformation was already manifest. But this is the first time the potentially nefarious effects of an unregulated online space have combined with a global pandemic. As Conservative MP Damian Collins put it, this is the ‘first major public health crisis of the social media age’.

Governments and tech companies across the globe are responding. In this post, we highlight key steps and consider lessons for dealing with misinformation in general.

Actions by the UK government and independent public bodies

The UK government has followed a four-pronged strategy for tackling misinformation on COVID-19. First, in early March it established what it has called the Counter Disinformation Unit or Counter Disinformation Cell – a cross-governmental body based in the Department for Digital, Culture, Media and Sport (DCMS) with a remit to work with others in developing ‘a comprehensive overview of the extent, scope and impact of disinformation related to coronavirus’. This unit is coordinating with social media companies and others to agree action.

Second, the government has developed its own capacity for rebutting misinformation: the Rapid Response Unit in the Cabinet Office, established in 2018, is identifying the worst cases of misinformation and responding to them.

Third, the government, with the NHS, Public Health England, and others, is pursuing a massive public information campaign. The centrepieces have been daily press conferences from Downing Street and extensive advertising through traditional channels. More innovative methods have included a text message sent to all mobile phones and a Coronavirus Information Service on WhatsApp.

Finally, the government has sought to promote digital literacy skills, publicising its ‘SHARE’ checklist of points to think about before sharing stories online.

Independent public bodies have stepped in too. The BBC has been most prominent, fact-checking, myth-busting, and promoting high-quality information. Ofcom research in March found that 82% of people in the UK were turning to BBC services for news about the pandemic, and that ‘[a]verage daily news viewing across all channels was up by 92% in March 2020 compared to March 2019’. Ofcom itself has also intervened, ruling in early April against a community radio station that aired an interview in which false conspiracy theories about the virus were propounded without adequate challenge.

Actions by tech companies

Social media companies have taken a two-fold approach to tackling misinformation surrounding COVID-19: taking down false, misleading and harmful posts; and making high-quality, official and authoritative information more prominent. In a joint statement, Facebook, Twitter, Google, YouTube, Microsoft, LinkedIn, and Reddit said they would work closely together on their COVID-19 response.

Twitter has broadened its definition of ‘harmful’ content that can be removed to include material going against the advice of global and local health authorities. It has prioritised removing content posing a direct risk to people’s health or well-being, including tweets by prominent politicians and other public figures.

Facebook launched a ‘COVID-19 Information Centre’, giving real-time updates from health authorities and global organisations. It is providing free ads to the WHO and other health authorities, and working with fact-checkers to reduce dissemination of misinformation and attach warning labels to it. Like Twitter, it is removing posts that could cause ‘imminent physical harm’ to the community, such as misinformation on cures for COVID-19.

WhatsApp – owned by Facebook – is end-to-end encrypted, so the content of messages, including misinformation, is invisible to the platform itself. But users can now sign up to a ‘WHO Health Alert’ and submit content directly to fact-checkers for verification. WhatsApp has also limited how far messages can be forwarded.

In addition to displaying content from authoritative sources more prominently, Google has launched an ‘SOS Alert’ for coronavirus resources, which directs people who search for ‘coronavirus’ to news and information from the WHO.

Other responses

Fact-checkers such as Full Fact are heavily involved in tackling misinformation about COVID-19. Snopes, one of the world’s leading fact-checking organisations, experienced a 50% increase in traffic in March. A coalition of fact-checkers from over 70 countries has created a searchable, online database gathering the falsehoods they have identified relating to the coronavirus. In an unusual move, the Conservative MP and former Chair of the House of Commons DCMS Committee Damian Collins led the creation of a new fact-checking body focused specifically on coronavirus. Social media companies are providing funds to some fact-checking organisations.

How should we judge these responses?

These responses to misinformation go well beyond anything seen in the UK in recent times. Nevertheless, the main criticism has been that they are too limited, rather than too strong. In March, the then Shadow DCMS Secretary, Tracy Brabin, criticised how long ministers had taken to act, saying it was ‘time for the government to stop dragging its feet on online harms and introduce robust, comprehensive measures for curbing the spread of fake news’. On 11 March, in a letter to the Secretary of State, the Chair of the Commons DCMS Committee, Conservative MP Julian Knight, also expressed frustration at the government’s slow response, arguing that ‘False narratives could potentially undermine the ongoing efforts by Government and public health organisations’ to deal with the pandemic. The committee’s Sub-committee on Online Harms and Disinformation asked for reassurances that the DCMS Counter Disinformation Unit had sufficient resources and expertise, and that it would be working closely with social media companies to ensure ‘people receive vitally important and accurate information and can trust what they see online’. Knight added that ‘Tech giants who allow this [harmful content] to proliferate on their platforms are morally responsible for tackling disinformation and should face penalties if they don’t’.

Damian Collins, former chair of the DCMS Committee and long-standing advocate of online campaign regulation, went further, arguing on 16 March that the then forthcoming Coronavirus Bill ‘should make it an offence to spread misinformation about coronavirus with the intention of undermining public health’.

Ofcom’s action against a community radio station, cited above, is instructive. In the case of broadcasting, clear regulations exist, allowing Ofcom to intervene against misinformation that could be directly harmful. But no such regulatory framework exists online. Given the convergence of different media channels and the fact that many people, particularly in younger age groups, now consume news mainly online, this regulatory distinction is increasingly hard to justify. The government proposed a new framework for online regulation in its Online Harms white paper published last year. But this would still be relatively light-touch except in relation to illegal content such as child pornography. The government’s response to its consultation on the white paper, published in February, mentioned disinformation only twice, and indicated no inclination to take serious action. Time will tell whether the COVID-19 crisis prompts a rethink.

Are there wider lessons to learn?

The willingness that government and tech companies have shown to tackle misinformation about COVID-19 contrasts sharply with their reluctance to move against false and misleading claims in election or referendum campaigns. One of us explored the challenges on this blog in relation to last year’s election campaign, and we examined the issue in detail in our report, Doing Democracy Better, published last spring. Might any of the practices adopted for the current crisis deserve to be applied to political debate more broadly?

Clearly, coronavirus misinformation is very different from misinformation on political issues. Despite uncertainty, matters of public health are grounded in scientific, verifiable facts. Many political issues, by contrast, are inherently contestable and grounded in partisan or ideological divides, with claims and counterclaims being hard to distinguish. Free speech requires that a wide range of political views can be expressed, even if many of us would find them wrongheaded or unpalatable. Well-grounded concerns have been expressed that governments around the world might use the current crisis to suppress legitimate dissent.

Equally, however, misinformation in politics can cause real harm. It may not lead directly to disease and, tragically, death, as is the danger with the current infodemic. But it can underpin electoral or policy choices that have profound effects on people’s lives. When policy debates are rooted in deceptions, we should all worry.

Direct action to ban misinformation is not always possible or desirable. But the current case shows that bans are not the only policy tool available. Indeed, as we argued in Doing Democracy Better, banning misinformation is unlikely – alone – to work. Ensuring the public have access to high-quality information and can distinguish this from misinformation is also required.

The current case also shows that tackling misinformation requires adequate resources. Fact-checking must be rigorous, and well-funded journalism is needed at national and local levels to ensure that information is easily accessible to citizens. Those citizens themselves have a vital role too in identifying and exposing misinformation. This is seen in a call from the Commons DCMS Committee for members of the public to submit examples of misinformation on the virus.

Perhaps the most important lesson offered by the responses to this infodemic relates to the importance of cooperation between governments and tech platforms, combined with the political will to take decisive action. The speed and scale of these responses indicate that, where there is the will, large-scale, concerted action can be taken to prevent the spread of online misinformation.

This piece originally appeared on the Constitution Unit blog and is part of a continuing Unit series on coronavirus and democracy.
