As part of our work on building resilient societies in an age of information disorder, First Draft recently launched a series of ‘Standards Sessions’, regular off-the-record meet-ups that bring together journalists in London, New York and Sydney to discuss urgent ethical challenges that newsrooms currently face.
At our three kickoff sessions, reporters and editors tackled two issues: how to report on disinformation research when the underlying data and methodologies are unavailable or difficult to understand, and how to cover “shallowfakes”, such as the recent Nancy Pelosi video, without further amplifying them. The London and Sydney sessions also deliberated the benefits and limitations of newsroom collaboration when reporting on online extremism.
First Draft will turn the suggestions and questions raised in our sessions into a set of resources to help inform newsroom policies and practices around the world. These will include checklists, templates, glossaries and training materials, developed in consultation with participants in each session.
In the meantime, here are some key takeaways from our first three events.
Reporting on disinformation
Prompted by recent online discussions suggesting that a New York Times article lacked sufficient evidence of attribution to support its claims that Russian disinformation campaigns were targeting the European Parliamentary elections in May, participants in all three cities agreed that there is a need for guidance and training on this issue. Newsrooms would benefit from a clearer policy on how to dissect, examine and report research relating to online behaviour and activity. Most do not have in-house cybersecurity experts or data scientists who can help make sense of the methodological techniques used by researchers. Some participants also flagged the challenge of reporting on complex disinformation stories during overnight or weekend shifts, when fewer colleagues are available. Attendees further discussed how competition between newsrooms has sometimes created pressure to report on stories that, in hindsight, they might not have covered.
From the discussions and suggestions at each event, First Draft has collated a list of questions that should be asked of research submitted or discovered in support of a story:
- What evidence has the organisation or researcher provided in terms of data collection or data analysis?
- Is the data publicly available?
- Is the research process clearly explained, and possible to reproduce?
- Has the organisation or researcher been transparent about their own funding?
- What political agenda or motive might the organisation or researcher have?
- Have newsroom colleagues with specific topic expertise been consulted (in areas such as cybersecurity and cyber attribution, for example)?
- If your newsroom does not have the relevant expertise internally, are there other information sources you could cross-check against? Have you reached out to academics, data scientists and other experts in the field for their opinion on the strength or weakness of the evidence on which your report is based?
First Draft is currently working with technologists, academics and other experts familiar with cybersecurity and cyber attribution to create a set of recommendations specifically for newsrooms reporting on disinformation research. We welcome any additional thoughts on this issue. Please email your comments or suggestions to ethics@firstdraftnews.com by 8 July 2019.
Writing responsibly about shallowfakes
The millions of social media views recently garnered by a doctored “shallowfake” video of United States Speaker of the House Nancy Pelosi demonstrate the need for newsrooms to develop guidelines for responsible reporting on manipulated content.
Participants in London, New York and Sydney each discussed the role that their organisations may have played in amplifying awareness of the doctored video merely by choosing to cover it. Some journalists pointed out that the media could not ignore the video once the US President and well-known officials in his administration had directed attention to it via their social media accounts. This amplification was a key factor in some newsrooms’ decision to cover the video, and also dictated the prominence that these stories were granted.
There were also multiple angles to this story, which contributed to the expansive coverage. Many newsrooms opted to focus on Facebook’s decision not to remove the video, for example. Such reporting may have indirectly raised awareness of the video’s existence without providing sufficient context and evidence to resolve audiences’ doubts about its authenticity.
The decision by some newsrooms to embed the manipulated video in their stories may have also aided the spread of the lie, though some participants countered that it was crucial to show the public both the unedited video and the doctored version, as a “teaching moment” on the importance of emotional skepticism.
Discussions of this example focused on the importance of responsible language when reporting on deliberate falsehoods. Merely repeating the words “drunk” and “Pelosi”, or “health” and “Pelosi”, in the same headline could cause readers to associate those words in their minds.
A final point of debate was The Daily Beast’s decision to reveal the identity of the creator of the Pelosi shallowfake. Attendees balanced the benefits of stories that educate the public on how “regular” individuals — and not just governmental actors — can be effective disinformation agents, against the potentially disproportionate harms that such stories could bring to private citizens.
Based on the dialogue captured by our rapporteurs at each event, here are some things for newsrooms to keep in mind as they report on shallowfakes:
- When deciding whether to cover manipulated content, consider how far and how quickly it has already spread, as well as its predicted virality and impact.
- If choosing to feature the video in your reporting, consider using selected clips or images instead of embedding or linking to the original (while being mindful of any additional copyright considerations). Overlay these with graphics or text that clearly inform audiences of any ways that the video has been manipulated, and how you know.
- Consider carefully the language used to describe this type of disinformation. It may be useful to explain that the content is “altered”, “manipulated” or “distorted”, rather than to say that it is “fake”.
- Where possible, lead with the truth and try not to repeat or amplify the intended outcome or accusatory language in the headline (for example, “drunk”, “unwell”).
- Provide context, situating the content within the bigger picture of the intentions and motivations behind it and the threat and harm it poses.
Newsroom collaboration when covering online extremism
In the wake of the Christchurch massacre, five New Zealand newsrooms agreed on a public set of protocols for reporting on the trial of the accused attacker. These protocols include a commitment to limit coverage of statements that “actively champion white supremacist or terrorist ideology”, including the accused’s manifesto, and an agreement to select experienced journalists (to the greatest extent possible) to cover the trial.
Asked whether their own newsrooms would collaborate on such a public agreement, whether for major trials or for breaking news events, some journalists suggested that each newsroom could designate one or two staff members to liaise immediately with their counterparts in other newsrooms whenever a critical news event occurs, in order to agree on tactics for responsible reporting.
Other participants raised a number of concerns about joining such an agreement publicly, including:
- Whether this would fuel notions that the mainstream media is working together to suppress certain pieces of news.
- What level of transparency is required of the collaborating newsrooms when explaining the agreed-upon protocols to the public, and whether newsrooms should state how and why these protocols were chosen.
Drawing from the collective input of our session attendees, First Draft has curated the following questions for newsrooms to consider when reporting on the online tactics of extremists:
- What language should you use when referring to documents and materials produced by the perpetrator and/or their supporters?
- Is it responsible for newsrooms to use a word like “manifesto”?
- How else could you refer to an inflammatory document that has been deliberately created for inclusion in media coverage in order to reach a wider audience?
- What risks are attached to using the chosen language of extremist communities when referring to such a document or similar materials that they have produced?
- Could your mention of such materials and messaging encourage more people to seek out the original online?
- How should newsrooms handle videos or imagery that feature individuals promoting extremist ideology? Is it sufficient to blur symbols and apparently coded gestures, for example?
______________________________
First Draft thanks all of the participants in our first Standards Sessions for their thoughtful contributions on these important ethical challenges. We will continue to update our checklists and other materials as we shape a set of final recommendations and resources.
If you have any thoughts on any of these challenges, please email them to ethics@firstdraftnews.com.
Journalists interested in attending our next Standards Session in London, New York or Sydney can fill out this form to be considered for invitation. Spaces are limited at each session, and we seek diverse contributions from a range of backgrounds and levels of experience.
To stay informed, become a First Draft subscriber and follow us on Facebook and Twitter.