How journalists can responsibly report on manipulated pictures and video

The first of six guides from First Draft covers the tricky ethical terrain that comes with reporting in a world of information disorder.

This is the first in a series of Essential Guides published by First Draft. Covering newsgathering, verification, responsible reporting, online safety, digital ads and more, each book is intended as a starting point for exploring the challenges of digital journalism in the modern age. They are also supporting materials for our new CrossCheck initiative, fostering collaboration between journalists around the world. 

This extract is from First Draft’s ‘Essential Guide to Responsible Reporting in an Age of Information Disorder’, by ethics and standards editor Victoria Kwan.

What does responsible reporting mean in an age of information disorder?

Many newsrooms rely on their editorial guidelines and code of ethics. Report the truth. Get the facts right. Be independent and impartial. Be transparent with your sources. Own up to your mistakes and issue prompt corrections.

These fundamentals are still the bedrock of journalism. But as audiences become hyper-networked, technological innovations have expanded the ways in which news can be gathered and distributed. In response, agents of disinformation have devised increasingly inventive methods for manipulating journalists, the social platforms, and the subsequent media coverage. As a result, news organisations find themselves facing an array of new ethical challenges relating specifically to amplification.

Take, for example, a situation where:

  • A reporter is gathering information for a story about a disinformation-spreading website that a prominent politician has been promoting online. The creator of the site has publicly stated that coverage from media outlets is part of their goal. How should the reporter approach writing this article?
  • Searching on Twitter after a breaking news event, a journalist finds a tweet from an eyewitness. Conspiracy theories are starting to float around online, with a small but vocal community claiming that the event was staged by the federal government. After verifying the eyewitness’s post and their identity, what else should the journalist consider before retweeting or embedding?
  • In the wake of a violent extremist attack, a reporter discovers that the accused assailant posted extremist writings to an online message board. Should the reporter include a link to the text, screenshots, both or neither?
  • An editor must craft a headline for an article about a manipulated video of a politician, which has been slowed down to make the politician appear ill or inebriated. How should the headline be worded so that it avoids amplifying the lie?

In all of these scenarios, there is no single ‘right’ way to do things. The mere act of reporting always carries the risk of amplification, and newsrooms must balance the public interest in the story against the potential consequences of coverage.

Responsible reporting in this day and age — when our information ecosystem has been so polluted with misleading and false content — means that journalists have an obligation to be aware of:

  • The impact our work has on sources, subjects and readers.
  • The consequences of what we say and share in digital spaces, which — even if it is not part of a published article — still has the potential for amplification.
  • The role the media plays in the polluted information ecosystem.

Editorial guidelines and codes of ethics rarely include information about these new challenges. Questions laid out in this guide can be used to spark discussions in newsrooms about best practices for reporting on this type of content.

Covering manipulated content

“Manipulated content” is genuine content, most often a photo or video, in which some aspect has been altered.

Visuals are an especially potent way of spreading mis- and disinformation, since people are more inclined to believe what they see. Manipulated images are also much harder to detect than textual disinformation.

As an example, in March 2018, Teen Vogue published a video that showed Emma González, a survivor of the shooting at Marjory Stoneman Douglas High School in Parkland, Florida, ripping a shooting target in half. Shortly after this video was released, a doctored version began circulating, edited to appear as though González was tearing up the United States Constitution.

The manipulated video quickly spread on Gab, 4chan and Twitter.


In May 2019, a ‘shallowfake’ (defined by First Draft as a low-quality video manipulation) which distorted footage of US Speaker of the House Nancy Pelosi to make her appear inebriated or ill, racked up millions of views on social media. The Washington Post published a side-by-side comparison of the original and the slowed-down footage.

While side-by-side visuals can be useful for informing audiences about popular disinformation, adding prominent graphics or a text overlay can make a debunk more effective by clearly distinguishing the original from the manipulation. Bright colours and large text are more likely to stand out to readers as they quickly scroll through their social media feeds.
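
For teams that produce such overlays at scale, the labelling step can be automated. Below is a minimal sketch of the idea using Python’s Pillow imaging library; the label_manipulated helper, its banner styling and the file names are illustrative assumptions, not First Draft tooling.

    from PIL import Image, ImageDraw, ImageFont

    def label_manipulated(src_path: str, dst_path: str,
                          label: str = "MANIPULATED") -> None:
        """Illustrative sketch: bake a bright, high-contrast warning
        banner into an image so the label survives screenshots and
        re-shares."""
        img = Image.open(src_path).convert("RGB")
        draw = ImageDraw.Draw(img)

        # A banner roughly one-tenth of the image height, in saturated
        # red, is the kind of element that stands out in a fast feed.
        banner_height = max(40, img.height // 10)
        draw.rectangle([0, 0, img.width, banner_height], fill=(220, 30, 30))

        # Pillow's default font is small; a production tool would load
        # a large bold TrueType face via ImageFont.truetype().
        font = ImageFont.load_default()
        draw.text((12, banner_height // 3), label,
                  fill=(255, 255, 255), font=font)

        img.save(dst_path)

    label_manipulated("debunk_comparison.jpg", "debunk_labelled.jpg")

Because the warning is rendered into the pixels themselves, it travels with the image even when the debunk is cropped, screenshotted or re-uploaded.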

Here are some things to keep in mind when we cover manipulated content:

  • When deciding whether to cover manipulated content, have we considered how far and how quickly the content has already spread, and its predicted virality and impact?
  • If we choose to feature a manipulated video in our reporting, have we thought about using selected clips or images instead of embedding or linking to the original (while being mindful of any additional copyright considerations)?
  • Can we overlay the manipulated content with graphics or text that clearly inform audiences of how the video has been manipulated?
  • Have we thought about the language we are using to describe this type of disinformation? It may be useful to explain that the content is “altered”, “manipulated” or “distorted”, rather than to say it is “fake”, which may be confusing to readers, especially when the disinformation is based on genuine photos or videos.
  • Where possible, have we led with the truth and avoided repeating or amplifying the intended outcome or accusatory language in the headline?
  • Have we provided context and presented the existence of a piece of content within a bigger picture relating to intentions, motivations, threat and harm?
  • Before referring to a falsehood, have we provided an explanation of why it is false and evidence to support verified conclusions?
  • Have we taken care not to undermine or make fun of those who believe the manipulated content? Doing so can lead to a hardening of these beliefs.

First Draft’s ‘Essential Guide to Responsible Reporting in an Age of Information Disorder’ is not designed to give you all the answers. What it will do, however, is provide you with questions you can ask as you navigate the tricky ethical terrain that comes with reporting online in 2019 and beyond.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.