First Draft spent the best part of last week soaking up the Louisiana heat and the latest insights in all things digital journalism at the 2019 Online News Association conference down in New Orleans.
As well as sharing the latest resources from the First Draft training team, and in between all the beignets, sazeracs, po' boys and crawfish étouffée, we've picked out some of the best lessons on information disorder from the conference.
Fighting misinformation
Former First Drafter and DFRLab fellow Andy Carvin moderated the conference’s opening panel on global strategies in fighting misinformation. Rounding off the session, he asked the other participants for practical recommendations in the ongoing battle. All but one turned on the social media platforms.
Hazel Baker, global head of UGC newsgathering at Reuters, suggested what she called a "pause button" for when people try to share something during news-critical moments, such as election days or natural disasters.
“A pop up to remind you that what you’re sharing might not be what you think it is,” she said.
Michael Hayden, an investigative reporter at the Southern Poverty Law Center, was less forgiving.
"We have been calling for social media companies to change the terms of service" for years, he said, accusing the networks of "making money off exploiting people's fears".
The same could be said of some news organisations, of course, but Hayden said he “would like to see [the social networks] get serious about people weaponising the platform to create dangerous situations.”
Tai Nalon, who founded Brazilian fact-checking site Aos Fatos, repeated what is fast becoming a rallying cry for fact-checkers: more transparency and more data from Facebook.
Director of Aos Fatos, @tainalon, made a great point that we often forget — these apps were not created for news consumption. That’s a large part of the problem because we’re now playing catch up. #ONA19FightMisinfo
— Alex Ptachick (@alexptachick) September 12, 2019
Aos Fatos is one of Facebook's army of fact-checking partners, yet "it's difficult to know what we are fact-checking" at times, she said, arguing that even "basic data about these actors" would help stem the flow.
Shalini Joshi broke from the trend, however, calling for "collaborations and coalitions… across different sectors and addressing misinformation at scale".
.@shalinimjoshii talks misinformation during elections in the largest democracy in the world, India #ONA19FightMisinfo
Some strategies:
✍️ Make fact checks more relevant to all communities
✍️ Use local languages
✍️ Be on the platforms people are using
✍️ Involving local people
— Elizabeth Gillis (@itsgillis) September 12, 2019
Tools of the trade
ONA wouldn't be ONA without some practical tips. One session in particular aimed to share the skills and tools of some leaders in the field, with Malachy Browne from the New York Times, Jane Lytvynenko from BuzzFeed News and Slate's Ashley Feinberg offering their advice.
The level of detail these experts gave on how they apply the tools is too much to cover here. Thankfully, both the slides from the panel and a list of the tools have been made available online, and there's a livestream of the session to watch for the time-rich readers out there.
Other highlights included sessions from Reddit and CrowdTangle. Even though they were sponsored workshops promoting the brands, they still served up some useful advice.
Looking for your own links or those from a suspicious website? Type reddit.com/domain/bitchute.com, for example, to find all the latest and most popular Reddit posts linking to that site.
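For readers who want to pull those listings programmatically, the same domain pages can usually be read as JSON by appending .json to the URL. Here is a minimal sketch in Python; the endpoint shape and response fields are our assumptions based on Reddit's public listing format, not something covered in the session:

```python
import requests

# Minimal sketch: Reddit listing pages can usually be read as JSON by appending
# ".json" to the URL. The response fields below follow Reddit's public listing
# format and may change; treat this as illustrative.
def recent_posts_for_domain(domain, limit=25):
    url = f"https://www.reddit.com/domain/{domain}/new.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        headers={"User-Agent": "newsgathering-sketch/0.1"},  # Reddit blocks default user agents
        timeout=10,
    )
    resp.raise_for_status()
    for child in resp.json()["data"]["children"]:
        post = child["data"]
        print(post["subreddit"], post["score"], post["title"], post["url"])

recent_posts_for_domain("bitchute.com")
```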
Found a link of interest? Use the CrowdTangle Link Checker to see where it was shared on Facebook and Twitter and identify communities and accounts of interest. This Chrome extension was a top recommendation from AFP and PolitiFact at the CrowdTangle session.
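The extension itself is point-and-click, but at the time CrowdTangle also exposed a Links endpoint in its API that returned much the same data. A hedged sketch, assuming an API token and the parameter names from CrowdTangle's public documentation; none of this was shown in the session:

```python
import requests

# Hedged sketch: CrowdTangle's public API documented a /links endpoint that
# returned posts sharing a given URL. The parameter names here are assumptions
# based on that documentation, and an API token is required.
CT_TOKEN = "YOUR_CROWDTANGLE_API_TOKEN"  # placeholder

def posts_sharing_link(url, count=20):
    resp = requests.get(
        "https://api.crowdtangle.com/links",
        params={"token": CT_TOKEN, "link": url, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    for post in resp.json().get("result", {}).get("posts", []):
        account = post.get("account", {})
        print(account.get("name"), post.get("platform"), post.get("postUrl"))

posts_sharing_link("https://example.com/suspicious-story")
```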
Tools for trust
Elsewhere, another expert panel gave their views on how to rebuild trust in “an era of ‘fake news’”.
“We’re at a moment where we have communities struggling with news literacy and news organisations struggling with community literacy,” said Ashley Alvarado, director of community engagement at LAist, a situation which is causing a breakdown in trust. That breakdown and its knock-on effects have become a survival issue for newsrooms, said fellow panellist Joy Mayer, director of the Trusting News project.
Communication, transparency and community were the key topics in a wide-ranging discussion which sought to pin down some practical advice. Moderator Talia Stroud, director of the Engaging News Project and professor at UT Austin, rounded up the main recommendations in a handy list.
Understand why people don’t trust news organisations. “The question of earning and building trust starts with identifying what is blocking trust,” said Mayer.
Invite the community into the conversation. When the Dallas Morning News said it would be backing Hillary Clinton in 2016, there was a "very angry reaction" among readers, said Stroud. In response, the editor invited them into the newsroom to share their views.
“They didn’t come out in agreement and singing kumbaya” but they appreciated an editor who took the time to listen and take their concerns on board, she added.
“If we want people to listen to us, we need to listen to them. If we want people to trust us, we have to trust them.” @ashleyalvarado #ONA19Trust
— MediaMaven (@mediadissector) September 12, 2019
Ask people what they need and adjust your output accordingly. Diversity and representation, a perennial problem that has been magnified in the age of social media, can be a key factor in trust.
“How committed are you to serving all the communities that you serve in an area?” asked Lehman, stressing that relationships form the core of building trust.
Be public facing about your desire to build trust. “Having the four-hour conversation and never letting the people know about it won’t work,” said Stroud.
Go where you haven’t spent time, into the communities and stories otherwise forgotten.
Assess whether it works by making sure a feedback loop or review is diaried from the start.
the easiest way to lose trust is to not follow through on your promises – be committed to following up (tbh good life advice) #ona19trust #ona19
— anna bold (@annathebold) September 12, 2019
Fake videos
After watching dozens of misleading or manipulated videos, Nadine Ajaka said the video team at the Washington Post had developed their own taxonomy of altered videos: Those with missing context, those with deceptive editing, and those with malicious transformation.
Despite the often-panicked conversation around deepfakes, videos presented with missing context are far more dangerous, she said, and the Post's video team always asks whether reporting on a video will amplify the misinformation further than it has already travelled.
Beyond that, Ajaka recommended asking some of the same questions of a video that journalists seek to answer when writing any story: who, what, when, where and why? Check out First Draft's visual verification guide for investigating video to find out more.
Automation, inauthentic behaviour and bots
Andy Carvin followed Ajaka to talk about his experience tracking automation and bots at the Atlantic Council's DFR Lab, and to offer some useful advice. He started with a warning, however: bots should be understood as automated accounts controlled by software, but not all of these accounts are bad and not all accounts which look like bots actually are. He recommended using cautious language such as "exhibiting bot-like behaviour" to prevent embarrassing mistakes.
DFR Lab has a list of ten indicators which can help identify a bot, he said, with more than four suggesting an account is likely to be automated. A rough sketch of how a few of them could be checked in code follows the checklist.
High levels of activity. Generally you don't see humans tweeting at a high volume. Of course, there are always exceptions: sports fans who livetweet games, people who livetweet debates, or those who are just zealous about certain topics.
Anonymity. A lack of information about a real person, images purposely not showing anyone, and a handle and display name which don't match are all suspicious signs.
Amplification. If almost all of an account's activity is retweets or likes, for example, that could be a sign it has been set up expressly for that purpose. Carvin used Twitonomy to identify an account whose activity in recent years was almost 99% retweets. But again, there are caveats.
No profile picture. When someone suddenly gains a slew of followers with no profile picture, that may be an indicator that they are being swarmed by bots, said Carvin.
Stolen/shared photo. Use a reverse image search to figure out whether a profile picture has been used on other accounts.
Suspicious handle. Look for machine-generated handles with random sequences or sequential numbering, said Carvin, or mismatched names and genders, such as profile pictures of women paired with names like Tom, Nicholas or Matthew.
Tweeting in different languages. Millions of people are bi- or trilingual, so tweeting in multiple languages isn’t a red flag by itself. It should still be on the checklist, however, said Carvin, for an account exhibiting potentially suspicious behaviour. When an account is active in seven or eight languages, that should be a red flag.
Automatic software. IFTTT and ow.ly are the most common examples of software used to automate tweets and can often be spotted in the links an account shares. Like many points here, this should never be a sole indicator for a bot as automated tweeting can be useful.
Tweeting day and night. Twitonomy tells users how often people post and during what hours, so if an account is posting 24 hours a day, that is a good indicator it is a bot.
Hashtag spamming. This is a useful way for spammers to get people’s attention, especially when certain tags start trending, Carvin said.
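Several of these indicators are judgment calls, but a few can be checked automatically. A minimal sketch, assuming a list of tweet dictionaries already pulled from Twitter's v1.1-style API; the thresholds are rough and this is not DFR Lab's own tooling:

```python
from datetime import datetime

# Minimal sketch, not DFR Lab's methodology: score a few machine-checkable
# indicators from tweets already fetched via the Twitter API. Field names
# ("created_at", "retweeted_status") follow the classic v1.1 tweet object.
def bot_indicators(tweets, account_age_days):
    created = [datetime.strptime(t["created_at"], "%a %b %d %H:%M:%S %z %Y")
               for t in tweets]
    tweets_per_day = len(tweets) / max(account_age_days, 1)
    retweet_ratio = sum("retweeted_status" in t for t in tweets) / max(len(tweets), 1)
    active_hours = {ts.hour for ts in created}

    return {
        "high_activity": tweets_per_day > 72,      # rough threshold often cited for suspicious volume
        "amplification": retweet_ratio > 0.9,      # almost everything is a retweet
        "day_and_night": len(active_hours) >= 20,  # posting around the clock
    }

# More than four indicators overall (including the manual checks above) would
# suggest automation; none of these alone is proof.
```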
He also shared six indicators for a suspected network of bots, or "botnet". If a number of accounts share more than three of the following, it is likely they are part of a botnet, Carvin said. A toy sketch of checking two of these overlaps follows the list.
- Patterns of speech
- Identical posts
- Handle patterns: look for very similar handles
- Date and time of creation
- Identical Twitter activity
- Location
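As an illustration only, two of these overlaps, identical posts and shared creation dates, can be surfaced with a few lines of Python; the input shape here is our assumption, not anything from the session:

```python
from collections import defaultdict

# Illustrative only: flag accounts that posted identical text or were created on
# the same date, two of the botnet indicators above. The input shape
# (handle -> {"created": "YYYY-MM-DD", "posts": [...]}) is an assumption.
def botnet_overlaps(accounts):
    by_text = defaultdict(set)
    by_creation = defaultdict(set)
    for handle, info in accounts.items():
        by_creation[info["created"]].add(handle)
        for post in info["posts"]:
            by_text[post.strip().lower()].add(handle)

    identical_posts = {text: handles for text, handles in by_text.items() if len(handles) > 1}
    same_day_signups = {date: handles for date, handles in by_creation.items() if len(handles) > 1}
    return identical_posts, same_day_signups
```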
Deepfakes
It wasn’t all that long ago when the word “deepfake” first entered the modern lexicon. The first academic paper about deepfakes came out in 2016, said Matthew Wright, professor at the Rochester Institute of Technology, in a panel chaired by First Draft co-founder Claire Wardle.
By the end of 2017, Redditors were sharing porn videos altered to include celebrities' faces. This non-consensual activity got deepfakes banned from Reddit and some porn websites, but by 2018, when FakeApp debuted, anyone could make one, and director Jordan Peele worked with BuzzFeed to produce a deepfake of President Obama.
“As an academic, I’m jealous how quickly this idea travelled,” said Wright. The technology went from paper to public deployment in just two years and continues to evolve at an alarming pace.
Joan Donovan, director of the Technology and Social Change Project at Harvard's Shorenstein Center, said one of the main questions to think about around deepfakes is how they will change our standards of evidence, not just for journalism but also for law.
What happens, for example, when deepfakes are introduced as evidence of police brutality, or other crimes? Going forward, we may need to find other ways to contextualise the stories we are telling, said Donovan.
.@BostonJoan says her aspiration for the future is that we learn how to harness the power for social good but also be building guardrails into the technology to not create more inequality and harm. #ONA19
— Kristy Roschke (@roschkekj) September 14, 2019
At the Washington Post, director of strategic initiatives Jeremy Gilbert said they had identified three core challenges for newsrooms around deepfakes:
- How do we create a vocabulary and understanding in our newsrooms of what is possible?
- How do we find tools to identify and report on them?
- How do we help audiences better understand the issue?
Networked factions
One of the final but most engaging sessions from the conference covered the topic of online communities, radicalisation and so-called “source hacking”, the name of a recent report from Data & Society into media manipulation.
The session covered what the panel described as "networked factions", debating "the internet and how society is reflected in it and how groups form online", said panel chair Lam Thuy Vo, senior reporter at BuzzFeed and professor at the Craig Newmark Graduate School of Journalism.
The discussion was broad, taking in the tactics extremist groups use to infiltrate subcultures online, how some groups use "memetic warfare" to push dogwhistle rhetoric into the mainstream, the distribution of manipulated images on WhatsApp in India, and the use of a digitised "playbook" for advancing a white supremacist agenda, from Reddit and 4chan to Gamergate and Hindu nationalism.
Growing up in the hardcore and punk movement, @BostonJoan saw white supremacists latch on to the skinhead movement to push their message. A similar thing is happening now but with gaming, she says, as a topic with passionate followers looking for meaning in life #ONA19
— First Draft (@firstdraftnews) September 14, 2019
Joan Donovan, one of the authors of the report into source hacking, said the term networked factions is crucial for understanding how certain groups now operate online to gain supporters and influence discussions, working towards a common goal if not necessarily co-ordinating the activity directly.
Vo gave attendees three core takeaways:
- Understand that, a lot of the time, co-ordinated behaviour is trying to get journalists’ attention.
- Get empirical data about co-ordinated or networked activity and take a research approach to reporting.
- Understanding networked factions is a beat, so knowing the main players is vital.
Catch up with all the sessions, speakers and highlights from the 20th Online News Association conference on the ONA website.
Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.
Shaydanay Urbani and Victoria Kwan also contributed to this report.