By Anne Whitehouse, Director of Communications and Public Engagement at the Nuffield Department of Population Health, University of Oxford
Whatever you think of Donald Trump, he has helped to put fake news on the agenda. Whilst opinions might differ about what constitutes fake news, why it should be ‘called out’ and what to do about it, at least everyone’s talking about it.
However, sorting fact from fiction can be a complicated and confusing business. A variety of solutions have emerged over the last year or so, though some of them may have added to the confusion. Last year, developer Justin Hook, creator of the online game Push Trump Off a Cliff, launched a fake news generator website. He argued that enabling people to easily create their own fake news would draw attention to the real fake news that is harmful. That’s all very well, provided people know the difference and read beyond the headlines, but often they don’t.
Can AI help against disinformation?
This year, the OpenAI research group took the opposite tack, announcing that it could generate convincing text on any topic, but deciding not to release the details of its text generator because of the potential for it to be misused.
It can be hard to work out who are the good guys and who are the bad guys. Facebook is using artificial intelligence to identify and remove inappropriate content (supplemented by teams of ‘real people’) and has committed to being more proactive in preventing election interference and stopping the spread of hate speech and misinformation. However, the Facebook–Cambridge Analytica data scandal has severely dented its reputation, and the UK Parliament’s Select Committee Inquiry on Disinformation and Fake News found its approach severely lacking.
Academic institutions have tried to tackle the issue by using machine learning systems to identify bias or examining the efficacy of different fact-checking methods such as using experts, crowd-sourcing information, or developing computational fact-checking methods. These techniques and analyses are useful, but unlikely to be deployed by the average user. A review of fact-checking technologies by the University of Oxford’s Reuters Institute for the Study of Journalism found that automated fact-checking technologies will ‘require human supervision for the foreseeable future’.
The role of ethics
So, what is the answer? The Select Committee Inquiry issued its final report recently, calling for a Compulsory Code of Ethics for tech companies, an independent regulator able to issue large fines, review of the regulations on communications during elections and referenda, and an obligation for social media companies to take down known sources of harmful content, including proven sources of disinformation. They stated that democracy is at risk from ‘malicious and relentless’ disinformation and ads from unidentifiable sources.
The recommendations were welcomed by the CIPR and will be addressed in a White Paper. CIPR President Emma Leech stated that ‘Public relations professionals should be front and centre of efforts to promote truth and denounce dishonest information.’ True PR professionals have been ahead of the game for some time, abiding by a Code of Conduct that has integrity and honesty at its core, but the need to ensure the veracity of the information we disseminate has never been more urgent.
Misinformation can have a significant and devastating impact on audiences – and, yes, people do sometimes die because of it. The chief executive of NHS England has warned that anti-vaccination fake news is fuelling a rise in measles. The World Health Organization has labelled ‘vaccine hesitancy’ one of the ten threats to global health in 2019, alongside antimicrobial resistance, air pollution and climate change. There are numerous examples of the impact of misinformation on politics, healthcare, the environment and many other spheres of life. We all have a responsibility to help sort fact from fiction and to do what we can to prevent fake news from circulating.