In the middle of a global health pandemic, misinformation can prove fatal.
On Thursday, MPs brought experts and social media bosses together to discuss the rapid spread of ‘junk news’ during the crisis.
Many of us will have seen messages from – for example – a ‘brother’s cousin’s friend who works at a hospital’, stating (wrongly) that you should shine a torch in your ears to stop coronavirus. Or something about the merits of consuming bleach (please don’t do this).
These spread like wildfire on the internet and are difficult to police. So what can be done?
In the dark
Professor Philip N. Howard, Director of the Oxford Internet Institute, told Parliament’s Digital, Culture, Media and Sport disinformation task-force that one problem was a lack of transparency. Major social media firms don’t currently share data on what misinformation they’re taking down – so it can be hard to spot trends or gaps in tech giants’ responses.
Prof Howard suggests it’s time for a public body to collate this data, so researchers and journalists can assess the misinformation threats in real time. At present, any data like this ‘tends to be released weeks, months or years after the fact’. Not helpful when the situation is changing day to day.
Tackling misinformation is slightly trickier when it comes to politicians: there is sometimes a public interest in seeing what they are saying, even if it is wrong. In that case, demonstrably false statements could come ‘with a warning label’.
Facebook say they do not apply oversight to political speech. But the company took down a recent post by Brazil’s president Jair Bolsonaro, because it went against WHO guidelines on Covid-19. Will they maintain this political scrutiny role after the pandemic – including in elections?
Governments are becoming more active, too. Ministers here have set up a ‘counter-disinformation cell’ to limit the spread of false claims about coronavirus online. It’s worth considering whether this could continue after the current crisis.
There is always an election on, somewhere in the world. Prof Howard says he fully expects to see fake election stories spread by bots, taking advantage of the Covid crisis. Disinformation suggesting that coronavirus means ‘the election has been cancelled’ could undermine turnout in the US’ November elections, for instance.
Right now, though, social media giants are playing whack-a-mole with ‘junk news’ and dodgy claims. However well or poorly they’re performing, they need clear responsibilities and guidance on how to respond.
Stacie Hoffmann, Digital Policy and Cyber Security Consultant at Oxford Information Labs, told the committee that social media giants’ reaction to the surge in online mis/disinformation in 2016 and 2017 was ‘very, very slow’. When they did act, the likes of Twitter and Facebook were mainly responding to what was coming out in the news, from Cambridge Analytica to Holocaust denial. Social media giants have improved their responses since then – but big gaps remain.
As committee chair Julian Knight MP noted: “If you infringe copyright, social media firms act very quickly, if you spread fake news, not so much…”
So what can be done?
Shine a light
Firstly, all social media companies could bolster their tools for reporting misinformation. Prof Howard says ‘Google are the only ones who haven’t implemented a report function to flag mis/disinformation’.
The social media firms know when someone has liked or shared a piece of misinformation, Dr Claire Wardle, Co-Founder and Director of First Draft News noted. They could ensure those users receive proper NHS or other official advice in their feeds – or a quick video on spotting misinformation.
While it can be trickier to moderate online spaces than media outlets or the ‘offline world’, it can be done. Fact-checking organisations are growing and need secure funding.
Disinformation feeds on strong emotional reactions – and media giants’ algorithms often feed that.
The ERS has long called for stronger citizenship education, which could help all of us identify fake news and figure out how to stop its spread.
There are positive signs when it comes to fighting false news. WhatsApp’s limit on the number of times a message can be forwarded has led to a significant reduction in the spread of falsehoods about coronavirus.
Another positive: polling shows people do trust public health officials and ‘traditional’ sources of information amid this crisis, including the BBC and the NHS.
But with millions more hours being spent on social media during the lockdown, Silicon Valley giants need to step up to this challenge, working with public bodies, to tackle falsehoods that could cost lives. To ensure there’s public trust, we’ll have to be able to see what these companies are doing. On this one, sunlight is the best disinfectant…
You can watch the committee hearing here.