
Understanding and Addressing the Disinformation Ecosystem

A collection of short papers prepared as part of a workshop bringing together academics, journalists, fact-checkers, technologists and funders to better understand the challenges produced by the current disinformation ecosystem


The problem of mis- and disinformation is far more complex than the current obsession with Russian troll factories. It’s the product of the platforms that distribute this information, the audiences that consume it, the journalists and fact-checkers who try to correct it – and even the researchers who study it.

In mid-December, First Draft, the Annenberg School for Communication at the University of Pennsylvania and the Knight Foundation brought academics, journalists, fact-checkers, technologists and funders together in a two-day workshop to discuss the challenges produced by the current disinformation ecosystem. The convening was intended to highlight relevant research, share best practices, identify key questions of scholarly and practical concern and outline a potential research agenda designed to answer these questions.

In preparation for the workshop, a number of attendees prepared short papers that could act as starting points for discussion. These papers covered a broad range of topics – from the ways that we define false and harmful content, to the dystopian future of computer-generated visual disinformation.

Download the papers here.

As Michael Schudson and Barbie Zelizer point out in the first paper, “fake news” is nothing new, but it certainly has become much more complicated. To address the newest, digital form of this problem, we have to define it with terms that reflect its complexity, argue Claire Wardle and Hossein Derakhshan. In this way, “fake news” is a “woefully inadequate” phrase. Categories such as misinformation, disinformation and malinformation are needed to acknowledge the many different producers, motivations, types of messages and ways these messages can be interpreted and acted upon.

However, labeling the problem is just the first step to understanding it. As Richard Fletcher and Rasmus Kleis Nielsen show, the general public doesn’t place news in neat categories. Instead, people conceive of “fake news” and well-reported journalism as sitting on opposite ends of a spectrum. An increasing number of people distrust the news. “News deserts” left in the wake of media consolidation have only worsened this distrust, creating a void that “far-right” ideologies have swooped in to fill, according to Rebecca Lewis and Alice Marwick.

Even more difficult than defining the problem is pinpointing its origin. Though the internet has balkanized, as Deen Freelon’s paper explores, there’s little consensus on whether extreme views are the cause or effect of media echo chambers. What news we read is determined by our social identities and opportunities to think critically – as Natalie Stroud, Emily Thorson and Dannagal Young argue.

In fact, one paper discussed how motivated reasoning causes people to dig deeper into their false beliefs. Even Daniel Kahan, the paper’s author, was surprised by the finding that the better people are at critical reasoning, the more they lean into their beliefs. This only gets worse as they are presented with more information – accurate or not. People’s inaccurate beliefs, Kahan proposes, may be the cause – not the result – of the mis- and disinformation they consume.

Not all of our attendees were convinced that the problem of mis- and disinformation on social platforms warrants the concern it has attracted. Perhaps substantiating the general public’s view that “fake news” is nothing more than an extreme version of bad journalism, Duncan Watts and David Rothschild of Microsoft argue that “real news” is the real problem. The two cite statistics showing that mis- and disinformation are a drop in the bucket of online media consumption. Further, they argue that we should look more critically at traditional newspapers, encouraging them to cover policy rather than horse-race politics.

No matter the scale of the problem, all our writers acknowledge that mis- and disinformation exists. How can we best combat it? If the issue begins with flaws in human psychology – as much as, if not more than, with what people read – then we need to craft people-centered solutions. Behavioral psychology, as explained by Brian Southwell and Vanessa Boudewyns, can stem the sharing of mis- and disinformation by addressing the preconscious motivations that drive users’ sharing. As Mike Barker points out, educators, especially librarians, could teach the next generation to question before they share. Though libraries may seem outdated when we have search engines sitting in our pockets, they matter now more than ever.

Though social platforms have made efforts to deal with mis- and disinformation, they’ve taken a mostly piecemeal approach consisting more of selective experiments than comprehensive solutions, Claire Wardle argues. However, if platforms and researchers collaborated more, we might see the solutions we’ve lacked thus far. Platform representatives, interviewed by Nic Dias, call for researchers to act as trusted brokers, and to conduct studies that offer concrete answers rather than abstract questions. As one platform representative put it, “That work is actually 95% useless to industry… [W]e’ve got to do something. So, reading an analysis of why it’s hard doesn’t move us forward.” Similarly, Amy Sippitt acknowledges that a “new generation” of fact-checking is needed, and entreats researchers to ask how fact-checking can improve.

While much of the conversation about mis- and disinformation focuses on the West, the most troubling cases exist outside of it. As Sarah Oh argues in her paper, emerging democracies – largely forgotten by the platforms – demand individual attention, something that civil society organizations are well positioned to provide. Finally, looking towards the future, the papers end with a glimpse of the disinformation of tomorrow. While “fake news” is as old as news itself, the ways in which videos can now be manipulated with artificial intelligence will further throw reality into doubt.

New solutions will be needed to address the problems described above, as well as the problems that we have yet to encounter. We share these papers in the hopes that they can inspire further conversations.

Download the papers here.


First Draft is a project of the Harvard Kennedy School’s Shorenstein Center.