10 questions to ask before covering misinformation

Can silence be the best response to mis- and dis-information?

We at First Draft have been asking ourselves this question since the French election, when we had to make difficult decisions about which information to publicly debunk for CrossCheck. We worried that, in cases where rumours, misleading articles or fabricated visuals were confined to niche communities, addressing the content might actually help spread it further.

As Alice Marwick and Rebecca Lewis noted in their 2017 report, Media Manipulation and Disinformation Online, “[F]or manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.” BuzzFeed’s Ryan Broderick seemed to confirm our concerns when, on the weekend of the #MacronLeaks trend, he tweeted that 4channers were celebrating news stories about the leaks as a “form of engagement.”

We have since faced the same challenges in the UK and German elections. Our work convinced us that journalists, fact-checkers and civil society urgently need to discuss when, how and why we report on examples of mis- and dis-information and the automated campaigns often used to promote them. Of particular importance is defining a “tipping point”: the moment at which addressing a piece of mis- or dis-information does more good than harm. We offer 10 questions below to spark such a discussion.

Before that, though, it’s worth briefly mentioning the other ways that coverage can go wrong. Many research studies examine how corrections can be counterproductive by ingraining falsehoods in memory or making them more familiar. Ultimately, the impact of a correction depends on complex interactions between factors like subject, format and audience ideology.

Reporting on disinformation campaigns that are amplified by bots and cyborgs can also be problematic. Experiments suggest that conspiracy-like stories can inspire feelings of powerlessness and lead people to report being less likely to engage politically. Moreover, descriptions of how bots and cyborgs were found give their operators the opportunity to change strategies and better evade detection. In a month awash with revelations about Russia’s involvement in the US election, it’s more important than ever to discuss the implications of reporting on these kinds of activities.

Following the French election, First Draft switched from the public-facing model of CrossCheck to one in which we primarily distribute our findings via email to newsroom subscribers. Our election teams now focus on stories that are predicted (by NewsWhip’s “Predicted Interactions” algorithm) to be shared widely. We have also commissioned research on the effectiveness of the CrossCheck debunks and are awaiting its results to evaluate our methods.

Without further ado, here are some questions our work has inspired:

  1. Who is my audience?
    Are they likely to have seen a particular piece of mis- or dis-information already? If not, what are the consequences of bringing it to the attention of a wider audience?
  2. When should we publish stories about mis- and dis-information?
    How much traffic should a piece of mis- or dis-information attract before we address it? In other words, what is the “tipping point,” and how do we measure it? On Twitter, for example, do we check whether a hashtag made it to a country’s top 10 trending topics?
  3. How do we think about the impact of mis- and dis-information, particularly on Twitter?
    Do we care about how many people see the content? Or do we care about who sees the content? In particular, is Twitter important by virtue of the number of people who use it, or is it important because certain groups, like news organizations and politicians, use it? How do our answers to these questions change how we evaluate the impact of information?
  4. How do we isolate human interactions in a computationally affordable manner?
    When we talk about the “reach” of a piece of content, we should be referring to how many humans saw it. Yet identifying the number of humans who saw a piece of information can be difficult and computationally expensive. What algorithms might be devised to calculate human reach (at least on Twitter) in a timely and inexpensive way? One rough possibility is sketched after this list.
  5. For those of us whose primary goal is to stop mis- and dis-information, what strategies of distribution beyond publishing might we consider?
    Should we send direct messages to accounts that have engaged with problematic content, to decrease our chances of perpetuating the falsehood? Should we be using Facebook ads that target certain groups? Is this even the role of news organizations or non-profits like First Draft?
  6. How do we write our corrections?
    How can we use research from the fields of psychology and communication to maximize the positive impact of our corrections and minimize chances of blowback?
  7. Why do we report on attempts at manufactured amplification?
    Are we putting the popularity of artificially boosted content into perspective? Are we trying to make people aware of bots so that they’ll be more vigilant? Are we trying to encourage platforms or governments to take action against mis- and dis-information?
  8. Who should be talking about manufactured amplification?
    News organizations aren’t in a position to do work that won’t be published. So, given that it may sometimes be counterproductive to publish about bot networks, should news organizations be investigating them?
  9. Where do the responsibilities of journalists end and the responsibilities of the intelligence community start?
    The monitoring and active debunking of mis- and dis-information are falling uncomfortably across different sectors. We’re seeing more disinformation monitoring initiatives emerge outside journalism, such as the Hamilton 68 dashboard, which was co-created by current and former counterterrorism analysts. What role should journalists have in actively combating attempts to influence public opinion in another country?
  10. How should we write about attempts at manufactured amplification?
    Should we focus on debunking the messages of automated campaigns (fact-checking), or do we focus on the actors behind them (source-checking)? Do we do both? How might we show our investigations are credible without informing bot operators or perpetuating the content they were boosting?
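
To make question 4 more concrete: below is a minimal, hypothetical sketch in Python of the kind of cheap heuristic such an algorithm might start from. It discards accounts whose behaviour looks automated and sums the follower counts of the rest. The field names, thresholds and sample figures are illustrative assumptions only, not a method First Draft or CrossCheck actually uses.

```python
# Hypothetical sketch: estimate the "human reach" of a piece of content on
# Twitter by discarding accounts whose behaviour looks automated. All
# thresholds and fields are illustrative assumptions, not a tested method.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    followers: int
    account_age_days: int
    tweets_per_day: float   # average posting rate
    default_profile: bool   # still using the default avatar/profile

def looks_automated(a: Account) -> bool:
    """Crude heuristic: very new, hyperactive or anonymous-looking accounts
    are treated as likely bots or cyborgs and excluded from reach."""
    return (
        a.account_age_days < 30
        or a.tweets_per_day > 100
        or (a.default_profile and a.followers < 10)
    )

def estimated_human_reach(sharers: list[Account]) -> int:
    """Sum the follower counts of accounts that pass the heuristic.
    Cheap to compute, but overstates reach because followings overlap."""
    return sum(a.followers for a in sharers if not looks_automated(a))

if __name__ == "__main__":
    sample = [
        Account("news_reader", 1200, 2400, 8.0, False),
        Account("fresh_acct_42", 15, 5, 250.0, True),   # likely automated
        Account("local_reporter", 5300, 900, 12.5, False),
    ]
    print(estimated_human_reach(sample))  # -> 6500
```

Even a toy version like this exposes the trade-offs behind question 4: follower counts overlap, so the total overstates unique human viewers, and any fixed thresholds can be learned and gamed by bot operators, which is exactly the tension questions 7 to 10 return to.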

The questions above, though fundamental, are not easy to answer. Nor is it simple to decide what to do once we have answers. Perhaps we need new ethical policies for reporting on these topics. Maybe newsrooms should coordinate on how to handle large-scale attempts to manipulate the media, such as strategically timed leaks.

Organizations covering mis- and dis-information need to discuss these issues, and it’s clear that those conversations should include academic researchers, some of whom have been studying corrections and disinformation for decades. This is too important to get wrong.
