Mis- and disinformation moved up the news agenda over the last 12 months as researchers, journalists and the public faced unprecedented problems navigating the online news ecosystem. Information Disorder: Year in Review hears from experts around the world on what to expect and how to be more resilient in 2020.
Kate Starbird is an Associate Professor at the University of Washington’s Department of Human Centered Design & Engineering and co-founder of the UW Center for an Informed Public. Her research looks at human-computer interaction and the emerging field of crisis informatics, with a focus on the spread of online disinformation and other forms of strategic information operations.
First Draft: What was the biggest development for you in terms of disinformation and media manipulation this year?
Kate Starbird: It’s hard to pick one development because it really has been an incremental accumulation of knowledge from different researchers and journalists about the problem of disinformation.
One of the things we have started to see is the way platforms have begun to address disinformation, both by taking productive action and by facilitating research based on their takedowns. That has opened up the kinds of questions that can be asked and answered, because it enables a new sort of methodological process. The platforms taking action, which changes some of the dynamics of disinformation and misinformation operations and media manipulation, is an important development.
Another important development is the growing understanding that a significant piece of the disinformation process occurs when influencers pick up disinformation and spread it through media. There is a rising awareness that it’s not just disinformation targeted at the broader audience but disinformation targeted at specific kinds of influencers who pick up these messages and spread them further.
And this is why tweets should never have been used to represent public opinion. It also demonstrates the underlying mechanics of disinformation… to reach and then use “influencers” (in this case news media) to boost the content and reputations of disinformation agents. https://t.co/Pyb5tD2lln
— Kate Starbird (@katestarbird) December 31, 2019
What is the biggest threat journalists in your part of the world are facing in 2020 in terms of information disorder?
In some ways, the question contains its own answer. Journalists are targeted by disinformation as potential spreaders of it, because they still play a significant role in disseminating information, even on social media.
For example, when journalists publish raw materials that have been leaked to them, that is one way disinformation spreads. They don’t know where the materials come from, but they can assume it’s political opposition research: whoever gave it to them had a political purpose for doing so. And the content may be true, which creates tension for a journalist. They might think, “I want to get the scoop,” but if the information originated from a disinformation campaign, it’s dangerous to publish materials that are stolen or taken out of context, even when pieces of that information are factual.
So how do journalists decide whether to push the [leaked] information out there into the world? Sometimes they may rationalise a bit and say, people need to hear this because it’s true. Well, okay, but there are also other truths you’re not publishing about the opponent. It’s critical to understand who is trying to use you and why. Is this part of a disinformation campaign and, if so, do you want to be a part of it?
Unfortunately, we are all learning a lot about how disinformation targets the media but we don’t yet have a set of best practices on how to deal with disinformation. First Draft has been putting out information for journalists and there are things we’re doing better. But there are still open questions about when, how and whether to cover certain topics, and questions about how to address the origins of disinformation.
I wish there were clean answers — “do this and everything is going to be okay” — but it still feels like there’s a lot of learning to be done about what the tactics are, how the information systems work, and the short-term versus long-term effects of different decisions journalists can make. But I don’t envy the position journalists are in, especially covering some of these political topics right now, because it’s a fraught space.
Disinformation isn’t simply false information. Instead it builds false narratives by layering true & false, selecting/omitting info, *misleading* for strategic intent. And it often works specifically by creating doubt—in this case, doubt in our investigative/legal institutions. https://t.co/L2X4MB7D65
— Kate Starbird (@katestarbird) December 10, 2019
What tools, websites or platforms must journalists know about going into 2020?
This is difficult to answer because, as researchers, we use a lot of our own in-house tools. One of my students has been doing research on journalists, and I know from her work that each journalist has their own suite of tools and practices. Certainly, we’ve heard a lot about CrowdTangle in the past few years. It’s been a great resource for the journalists we’ve talked to in terms of understanding the trajectories of information: how it’s being shared, and by whom.
Social media companies like Twitter are putting out public data, but much of it is not in a format that most journalists can use. There would be real value in developing research resources within the newsroom, or maybe a common set of resources, to help journalists query and sort that data.
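To make that concrete, here is a minimal sketch of the kind of querying and sorting described above, written in Python with pandas. It assumes a CSV export along the lines of the tweet datasets platforms have released alongside takedowns; the file name and column names (tweet_time, user_screen_name, tweet_text, retweet_count) are illustrative assumptions, not a documented schema.

```python
# Minimal sketch: exploring a hypothetical public takedown dataset
# with pandas. File name and column names are assumptions; adjust
# them to whatever schema the actual export uses.
import pandas as pd

# Load the CSV export of tweets from a published takedown,
# parsing timestamps so we can analyse activity over time.
tweets = pd.read_csv("takedown_tweets.csv", parse_dates=["tweet_time"])

# Which accounts in the dataset were most active?
top_accounts = (
    tweets.groupby("user_screen_name")
    .size()
    .sort_values(ascending=False)
    .head(20)
)
print(top_accounts)

# How did posting volume change over time? (daily counts)
daily_volume = tweets.set_index("tweet_time").resample("D").size()
print(daily_volume.tail(30))

# Which posts travelled furthest?
most_shared = tweets.sort_values("retweet_count", ascending=False)
print(most_shared[["user_screen_name", "tweet_text", "retweet_count"]].head(10))
```

Even a few one-liners like these answer the basic “who shared what, when, and how far did it travel” questions journalists bring to this kind of data.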
What was the biggest story of the year when it comes to these issues of technology and society?
I don’t think there’s any one biggest story of the year; there’s so much going on. Great work is being done in many different places, which makes it hard to choose just one.
One recent story I found interesting was Craig Silverman’s piece about a massive Facebook scam. It tricked consumers into clicking on fabricated news articles about celebrities and signing up for difficult-to-cancel subscriptions for products allegedly endorsed by those celebrities, while renting thousands of personal Facebook accounts to advertise the sham subscriptions. The article helps us understand the mechanisms underlying the spread of mis- and disinformation in online spaces.
One of the things we don’t yet understand very well is the distinction between financial and political motivations in some of these ecosystems. We don’t know how to disentangle them. What part of this is financially motivated, what part is politically motivated, and how do they work together? Silverman’s article opens up the trunk a little bit to get a sense of the marketing scams that are underneath some of these operations, how they function and how they’re so lucrative.
Another interesting story is the work of Kristofer Goldsmith on veterans and disinformation. In the course of his research, he finds that disinformation campaigns are targeting veterans in different ways and impersonating veterans’ groups on Facebook and elsewhere. His work is fascinating in part because it’s accidental. A lot of us end up here [in the disinformation field] in a similar way: we’re looking at something else and notice, what is that?
Goldsmith ends up covering part of the macro-strategy of disinformation, which is the targeting of US active-duty military and veterans. It’s not that he’s looking for disinformation; it’s just sort of there. It’s important for us to step back and see that [Goldsmith] is not a person who had political motivations to go out and find disinformation campaigns. This is a person who couldn’t miss it, because it was targeting a group he cared about and he saw how it exploited that group.
If you had access to any form of platform data, what would you want to know?
On the topic of YouTube and radicalisation, it would be amazing to be able to look at the last six years of YouTube recommendations and how they led different people into different parts of their media ecosystems. There are questions crucial to our understanding of how these systems work, and of how and why we’re vulnerable, that we haven’t been able to approach because of the limitations of the data we have.
We know from other research that you can take the disinformation out of the system but that doesn’t fix how it’s already shaped the networks, ideologies and norms that it was a part of. When the disinformation has been functioning for five or six or seven years, you’re not going to just undo the damage it’s done. It would be good to understand what those effects have been so we can start thinking about how to help society counter some of the things that have happened.
When it comes to disinformation in your country, what do you wish more people knew?
Disinformation targets all of us. It’s asymmetrically used to support specific political leaders in various parts of the world, but we all are vulnerable to disinformation and we get played in different ways.
In particular, we’re very vulnerable when we’re engaging in our political identities online. What we’ve seen is that political activism is targeted and that’s a place where we can become part of someone else’s disinformation campaign.
It’s also important to help people understand that, in the short term, you may think disinformation supports your views or it’s going to help you in some way but there are long-term effects that undermine the things we care about and are really not helpful for society.
The danger of pervasive disinfo is not being misled about a single topic, but losing our ability to discern truth. When we lose confidence in what we know, we become unable to make decisions that democratic societies need to make to govern themselves. https://t.co/lptEK5bGVX
— Kate Starbird (@katestarbird) November 18, 2019
The understanding [that we are all vulnerable to disinformation] slows me down in terms of what I share, and I would really recommend that for folks. Technology designers and journalists and even ordinary users need to do better to make these information spaces more trustworthy. Before we pass something along, we should do our due diligence. And if we’re not sure about it, don’t share it. A rising collective awareness of that would contribute to a healthier information space.
This interview was lightly edited and condensed for clarity.