Seven studies about mis- and disinformation from 2017

Research that advanced our understanding of information disorder this year

The year 2017 will go down as the moment when the study of mis- and disinformation on social networks took off. In 12 months, we went from learning of viral “fake news” in the US election to constructing a rich body of research about it. Thanks to this work, we have already made strides toward understanding how false and misleading content operates in our modern information ecosystems.

So, I thought it would be appropriate to end the year by highlighting seven research articles from 2017 that pushed our understanding of information disorder. While there are many more worth reading, each of these studies addresses an important dimension of the problem or an intervention. I’ll only repeat top-line findings here, as this article is just meant as a directory.

Social Media and Fake News in the 2016 Election – Allcott & Gentzkow

When encountering a new problem, one of researchers’ first steps should be to gauge its severity, and two economists from New York University and Stanford University were among the first to try to quantify the “fake news” problem. Using a list of debunked articles from the final three months of the US presidential election campaign, they collected data from web browsers and surveys, and came to these findings:

  • 14 percent of US adults viewed social media as their “most important” source of election news.
  • Misinformation favoring Donald Trump was almost three times as prevalent as that favoring Hillary Clinton. Pro-Trump stories were shared on Facebook about 30 million times, whereas pro-Clinton stories were shared around 7.6 million times (see the quick arithmetic after this list).
  • The average US adult saw and remembered 1.14 “fake news” stories in the final three months before the election.
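
For concreteness, here is the quick arithmetic behind those ratios, using the story counts Allcott & Gentzkow report for their database (115 pro-Trump and 41 pro-Clinton fake stories) alongside the share totals above:

```python
# Arithmetic behind the prevalence and sharing figures above.
# Story counts (115 pro-Trump, 41 pro-Clinton) are from Allcott & Gentzkow's
# database; the share totals are the 30M / 7.6M Facebook figures.
pro_trump_stories, pro_clinton_stories = 115, 41
pro_trump_shares, pro_clinton_shares = 30_000_000, 7_600_000

print(f"story ratio: {pro_trump_stories / pro_clinton_stories:.1f}x")  # ~2.8x, i.e. "almost three times as prevalent"
print(f"share ratio: {pro_trump_shares / pro_clinton_shares:.1f}x")    # ~3.9x as many Facebook shares
```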

However, the authors offer the following caveat in the conclusion of their paper:

“[A]s we emphasize above, there are many ways in which our estimates could understate true exposure. We only measure the number of stories read and remembered, and the excluded stories seen on news feeds but not read, or read but not remembered, could have had a large impact. Our fake news database is incomplete, and the effect of the stories it omits could also be significant.”

Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation – Chan, Jones, Jamieson & Albarracín

One top-of-mind intervention for misinformation is fact-checking. However, research on when fact-checking is effective at correcting false beliefs has been contradictory at times. Researchers from the University of Pennsylvania and the University of Illinois at Urbana-Champaign conducted a meta-analysis of 20 studies pooled from eight reports on the efficacy of fact-checks. Their findings can be summarized in three points:

  • Fact-checks that encourage readers to think of arguments in favor of the misinformation are less effective.
  • Fact-checks that encourage readers to argue against the misinformation are more effective.
  • Fact-checks that provide new, accurate information are more effective than those that simply label misinformation as false. However, “[a] caveat is that the ultimate persistence of the misinformation depends on how it is initially perceived, and detailed debunking may not always function as expected.”
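
As background, here is a minimal sketch of the inverse-variance pooling that underlies most meta-analyses, including work like Chan et al.’s. The effect sizes and standard errors below are invented for illustration; they are not the paper’s data:

```python
import math

# Hypothetical (effect size d, standard error) pairs for four debunking studies.
studies = [(0.52, 0.11), (0.34, 0.09), (0.71, 0.15), (0.18, 0.08)]

# Inverse-variance weights: more precise studies (smaller SE) count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect d = {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
```

Pooling this way lets a handful of small, noisy experiments yield a single, more precise estimate of whether debunking works.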

Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity – Pennycook & Rand

If we’re interested in fact-checking as an approach to countering mis- and disinformation, it would be helpful to know who is convinced by bad information and why. Yale University researchers gathered participants on Amazon Mechanical Turk (MTurk) and measured each person’s political ideology, tendency to think analytically, and tendency to consider randomly generated sentences to be profound—what’s known as “pseudo-profound bullshit receptivity” (seriously).

The researchers then asked participants to answer questions about a number of headlines. The political valence of headlines varied from Democrat-consistent, to neutral, to Republican-consistent. Some were from articles found to be false by the fact-checking website Snopes; some were from legacy news sources like the New York Times, CNBC and Fox News. This is what they found:

  • People who tend to think analytically are better able to distinguish a fake headline from a real one, regardless of whether that headline is favorable to their political ideology.
  • People who are more receptive to “pseudo-profound bullshit” are also more gullible than others when it comes to fake headlines. This correlation is partially accounted for by differences in analytical thinking. In other words, people who tend to think analytically are less receptive to “pseudo-profound bullshit” and less likely to believe a fake headline (see the sketch after this list).
  • People who tend to think analytically are also less likely to share both “fake” and real news on social media.
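
That “partially accounted for” finding in the second bullet is the kind of claim a partial-correlation (or mediation) analysis supports. Here is a minimal sketch on simulated data; the variable names and effect sizes are mine, not Pennycook & Rand’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Simulated scores: analytic thinking drives both of the other measures.
analytic = rng.normal(size=n)                          # e.g. a CRT score
bs_receptivity = -0.5 * analytic + rng.normal(size=n)  # "bullshit receptivity"
fake_belief = 0.4 * bs_receptivity - 0.3 * analytic + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlate x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print("raw r(receptivity, belief):", round(np.corrcoef(bs_receptivity, fake_belief)[0, 1], 2))
print("partial r, controlling for analytic thinking:",
      round(partial_corr(bs_receptivity, fake_belief, analytic), 2))
# The correlation shrinks but does not vanish once analytic thinking is
# controlled for: the signature of partial (not full) mediation.
```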

Taking Corrections Literally but Not Seriously? The Effects of Information on Factual Beliefs and Candidate Favorability – Nyhan, Porter, Reifler & Wood

Discussions about the impact of mis- and disinformation almost always bring up democratic elections, as many are especially concerned about information pollution’s potential to sway a vote. Thus, it makes sense to ask: Beyond changing beliefs, can mis- or disinformation actually change how someone feels about a candidate?

Researchers conducted experiments with Trump supporters from Morning Consult and MTurk, having them read corrections of claims now-President Trump had made during the Republican National Convention and his first debate with Hillary Clinton. Here were the results:

  • Trump supporters were receptive to corrections of his statements, adjusting their beliefs appropriately (although there was some evidence that Clinton supporters were more receptive to these corrections).
  • However, in no case did the corrections affect Trump supporters’ attitudes toward their preferred candidate.

Lateral Reading: Reading Less and Learning More When Evaluating Digital Information – Wineburg & McGrew

Another popular response to the newly widespread concern about mis- and disinformation has been a reinvestment in media literacy education. However, producing an effective curriculum requires knowing what techniques are useful for quickly detecting misinformation in the online age. To identify some of these techniques, researchers from the Stanford History Education Group observed 10 historians, 10 fact-checkers and 25 Stanford undergraduates while they tried to judge the credibility of online information. They found the following:

  • Fact-checkers were better able to distinguish reputable sources from disreputable sources.
  • Fact-checkers tended to make their judgments in a fraction of the time.
  • Unlike historians and students, who tended to look only to the source itself to judge its credibility (e.g. website appearance, domain names), fact-checkers opened new tabs and used search engines to judge the credibility of a source. This is what made them better able to identify misleading information.

Media Manipulation and Disinformation Online – Marwick & Lewis

The media often fails to consider its role in our information ecosystems, reporting on false beliefs without regard for how that coverage might spread those beliefs further. As a result, it is vulnerable to being taken advantage of by savvy disinformation producers and disseminators. To understand how journalists are manipulated, researchers from Data & Society spent hours reading 4chan boards, alt-right subreddits and the comments sections of blogs. Here’s what they saw:

  • Loose, issue-based coalitions of conspiracy theorists, white nationalists, anti-feminists and others strategically use social media, memes, bots and the targeting of influencers (e.g. journalists and bloggers) to promote their ideas.
  • These groups congregate in the comments sections of ideological blogs and websites, anonymous or pseudonymous forums and message boards, and mainstream social media sites.
  • Members of these groups are usually motivated by a combination of ideology, money, and/or status and attention.
  • The media’s dependence on social media for sourcing, and web traffic for revenue, makes them vulnerable to being manipulated by these groups.

The Spread of Misinformation by Social Bots – Shao, Ciampaglia, Varol, Flammini & Menczer

Many reports from journalists, civil society and academics have provided anecdotal evidence of misinformation being boosted by social bots and “cyborgs” (i.e. accounts operated by both software and humans). However, fewer studies have tried to systematically demonstrate how, and to what extent, “fake news” traffic benefits from automated accounts.

Researchers from Indiana University Bloomington scraped about 400,000 articles published by fact-checking websites, as well as by websites included on at least one of several “fake news” blacklists. They also collected around 14 million tweets linking to those articles, and analyzed them for evidence that the articles had been promoted by automated accounts. These were their results:

  • Accounts that frequently tweet links to “fake news” sites are more likely to be bots.
  • Bots are highly active in promoting misinformation when it first appears on Twitter, pushing claims out to as many eyes as possible, and tending to target users with high numbers of followers using mentions.
  • Humans frequently retweet automated accounts sharing misinformation.
  • Successful sources of false and biased claims are heavily supported by social bots.
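
As a rough illustration of the kind of tally behind these findings (not the authors’ actual pipeline), here is a sketch that assumes tweets sharing links to low-credibility sites have already been scored for automation, for example with a tool like Botometer. The file name, column names and 0.5 threshold are all assumptions for the sketch:

```python
import csv
from collections import Counter

LIKELY_BOT = 0.5  # assumed cutoff on a 0-1 bot score; not the paper's exact threshold

links_by_kind = Counter()
with open("tweets_with_bot_scores.csv") as f:   # hypothetical input file
    for row in csv.DictReader(f):               # assumed columns: tweet_id, bot_score
        kind = "bot" if float(row["bot_score"]) > LIKELY_BOT else "human"
        links_by_kind[kind] += 1

total = sum(links_by_kind.values())
for kind, n in links_by_kind.items():
    print(f"{kind}: {n} tweets ({n / total:.0%} of low-credibility link shares)")
```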

Conclusion

There is still much we don’t understand about information disorder. We still don’t know why fabricated news sites seem to disproportionately target political conservatives. We don’t have a firm idea of how misinformation moves between social platforms, television and radio. And, critically, we haven’t yet grappled with visual forms of misinformation. However, the articles above—and many others—have provided solid first steps.
