Year in Review: ‘Journalists need to recognise they are a target of influence operations’

Mis- and disinformation moved up the news agenda over the last 12 months as researchers, journalists and the public faced unprecedented problems navigating the online news ecosystem. Information Disorder: Year in Review hears from experts around the world on what to expect and how to be more resilient in 2020.

Renée DiResta is the technical research manager at the Stanford Internet Observatory and a 2019 Mozilla Fellow in Media, Misinformation and Trust. She investigates malign narratives across social networks and helps policymakers understand and respond to the problem.

First Draft: What was the biggest development for you in terms of disinformation and media manipulation this year?

Renée DiResta: A few of us at the Stanford Internet Observatory spent a large part of 2019 investigating [Russian intelligence agency] GRU influence operations worldwide, from 2014 to 2019, based on a data set that Facebook provided to the Senate Select Committee on Intelligence.

One key finding that stood out is the extent to which their social media operation failed, even for the dissemination of sensational hacked materials like the DNC emails. Although they tried to create Facebook pages to dump the documents, ultimately they gained no traction via that channel.

What gave them lift was reaching out to WikiLeaks and DMing journalists to entice them to cover the dumps. Our perception of the operations in aggregate is that social media success was actually not the point. It was a very distinct strategy, markedly different from the Internet Research Agency’s simultaneous effort (which was strongly social-first, memetic propaganda).

“Social platforms have recognised that they are a target: it’s important that the media does so as well.” – Renée DiResta

Media manipulation, of both mainstream and independent media, was instrumental to achieving their operational goals. Social platforms have recognised that they are a target; it’s important that the media does so as well, and plans in advance for how it will cover and contextualise a significant hack-and-leak in the 2020 election.

Another significant thing we saw was the franchising of influence operations to locals. This took the form of entities linked to Evgeniy Prigozhin hiring locals in several countries in Africa to run their influence operations.

Researchers had expected to see this kind of activity, because detection is significantly more difficult when local administrators are running the Facebook Pages. Having it confirmed reinforces the challenge of dealing with evolving adversaries, and the importance of platforms having multi-faceted, behaviour-based policies for addressing coordinated inauthentic activity.

What was the biggest story of the year when it comes to these issues of technology and society?

There was a remarkable breadth of issues related to technology and society, so it’s kind of impossible to pick just one. I think, actually, that the aggregate amount of coverage of tech and society topics was remarkable in itself.

There were numerous disinformation-related findings. There were significant platform policy shifts related to health misinformation; as someone who’s worked in that space since 2015, I found the pronounced steps to reduce the spread of anti-vaccine and health quackery a bright spot (with the caveat that there’s more implementation refinement needed).

But there was also a much broader shift towards actually having critically important public conversations about the balance between privacy and security, the costs and benefits of the integration of AI into our daily lives, the distinction between free speech and mass dissemination when content is mediated and ranked by algorithmic curators. There’s a lot of nuance in these topics, and the public is far better informed about them now than they were even just a year ago.

What is the biggest threat journalists in your part of the world are facing in 2020 in terms of information disorder?

Referencing the GRU operation I alluded to in the first question: journalists need to recognise that they are a target. The goal of the GRU operation was to reach an audience, and media manipulation — of both mainstream and niche media — was instrumental to achieving that reach.

The GRU was running narrative laundering operations globally: creating media fronts, fake think tanks, and fake journalists who got articles placed in authentic, popular independent media outlets — all of which served to legitimise or conceal the origin of state-sponsored propaganda. It’s an old tactic, updated for the era of a fragmented, hyper-partisan media ecosystem.

Social platforms have recognised that they are a target and created policies to address the threat; it’s important that independent media and mass media do so as well. In the GRU research, we observed a network of niche and conspiratorial outlets that accepted numerous contributed pieces from multiple fake journalists who wrote for ideologically aligned GRU fronts (on the left as well as the right).

But perhaps most importantly, large media properties are a target because of the sheer size of their audiences. They need a plan in place, in advance, for how they’ll cover and contextualise the contents of the next major hack-and-leak operation… a significant risk for the 2020 US election.

What tools, websites or platforms must journalists know about going into 2020?

CrowdTangle is an excellent resource. Bellingcat’s OSINT investigation guide is also indispensable for finding tools to support various research areas.
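For journalists comfortable with a bit of scripting, CrowdTangle also offers an API alongside its dashboard. Below is a minimal sketch of pulling “overperforming” posts on a keyword; the endpoint and parameter names follow CrowdTangle’s public API documentation, but keyword search requires a token with search access, so treat this as an illustration to adapt rather than a drop-in script.

```python
import requests

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # issued with CrowdTangle dashboard access

def overperforming_posts(search_term, start_date, count=20):
    """Fetch public posts matching a keyword, ranked by how far they
    exceed the posting account's typical engagement."""
    resp = requests.get(
        "https://api.crowdtangle.com/posts/search",  # requires search access
        params={
            "token": API_TOKEN,
            "searchTerm": search_term,
            "startDate": start_date,   # e.g. "2019-12-01"
            "sortBy": "overperforming",
            "count": count,            # the API caps this at 100
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["posts"]

for post in overperforming_posts("measles vaccine", "2019-12-01"):
    stats = post.get("statistics", {}).get("actual", {})
    print(post.get("account", {}).get("name"),
          stats.get("shareCount"),
          post.get("postUrl"))
```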

When it comes to disinformation in your country, what do you wish more people knew?

That it’s a nonpartisan issue. Anyone can run these operations, and they can be — and are — targeted at any group along the ideological spectrum. The topic is so heavily politicised in the United States because of how it manifested in the 2016 election, particularly the muddling of “interference” and “collusion”.

If you had access to any form of platform data, what would you want to know?

I’d like to see more anonymised comment data from unwitting-audience interactions with influence operation content. I think it would go a long way toward helping us understand impact.

Right now, researchers can see how many comments there are on a disinformation meme related to, say, voter suppression, but not what they are. Are people who are pushed this content skeptical? Are they receptive? Does this change over time if they follow a subversive propaganda page? We need to better understand whether exposure translates to internalisation, entrenchment, or a shift in views.
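As a concrete illustration of what such comment data could support: the sketch below assumes a hypothetical anonymised export with one row per comment (a pseudonymous user id, a timestamp, and a stance score, perhaps from a classifier) and asks whether commenters’ average stance drifts the longer they have been interacting with a page. Every field name here is an assumption for illustration; no platform currently provides a dataset like this.

```python
import pandas as pd

# Hypothetical anonymised export: one row per comment on an influence-operation
# page. All column names are assumptions; no platform provides this today.
#   user_id       pseudonymous commenter id
#   commented_at  timestamp of the comment
#   stance        -1.0 (skeptical pushback) to +1.0 (receptive)
comments = pd.read_csv("anonymised_comments.csv", parse_dates=["commented_at"])

# Months since each user's first observed comment on the page.
first_seen = comments.groupby("user_id")["commented_at"].transform("min")
comments["months_following"] = (comments["commented_at"] - first_seen).dt.days // 30

# Does average stance drift as exposure accumulates? A flat trend suggests
# exposure without internalisation; a rising one suggests entrenchment.
print(comments.groupby("months_following")["stance"].agg(["mean", "count"]))
```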

We are currently dependent on platforms to proactively share content related to inauthentic activity and some are extremely reluctant to provide any material that might violate privacy (or GDPR). A more formalised public-private structure to facilitate this type of security research would benefit everyone.

This interview was lightly edited and condensed for clarity.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.