
Misinformation markers

How to signpost and label problematic online content

Cues for a digital era

Information disorder is a permanent reality. It isn't a problem that will be solved; the way we understand and relate to information will need to evolve.

We lack many of the tools we once had for deciding what information is credible. As Claire Wardle, co-founder and US director of First Draft, explains in our Essential Guide to Understanding Information Disorder, “On social media, the heuristics (the mental shortcuts we use to make sense of the world) are missing. Unlike in a newspaper where you understand what section of the paper you are looking at and see visual cues that show you’re in the opinion section or the cartoons, this isn’t the case online.”

We need better credibility cues in the digital era.

Misinformation markers

As more people manipulate information and media as part of their everyday speech, more of them want to be transparent about it. And more governments, such as the EU, may look to require some form of labeling by law.

To explore what’s happening, what works and what doesn’t, and provide practical advice for journalists, technologists and policymakers, we have launched “Misinformation markers”: a program of work dedicated to exploring the cues, labels and other indicators we need to safely navigate digital media and information.

We are looking at the new frontiers of misinformation labeling, including tools such as overlays, to help those working on misinformation use labels effectively. 

Next steps

We have published guidance for journalists on how to use overlays to avoid amplifying the screenshotted misinformation they feature in their stories. We’ve examined the benefits and risks of labeling content manipulated by artificial intelligence, and we’ve published recommendations for platforms on how to design warning labels.
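
That overlay guidance is editorial rather than tool-specific, but the basic operation is easy to sketch. The snippet below is a minimal illustration using the Python Pillow library: it dims a screenshot and stamps a visible warning label across it. The filenames, label text and styling are hypothetical assumptions, not First Draft’s prescribed workflow.

    # Minimal sketch of an image overlay, assuming the Pillow library.
    # Filenames, label text and styling are illustrative, not prescribed.
    from PIL import Image, ImageDraw, ImageEnhance

    def apply_overlay(src, dst, label="FALSE INFORMATION"):
        img = Image.open(src).convert("RGB")
        # Darken the screenshot so the original post is de-emphasized.
        img = ImageEnhance.Brightness(img).enhance(0.4)
        draw = ImageDraw.Draw(img)
        w, h = img.size
        # Centre the label horizontally; real use would load a large
        # TrueType font rather than Pillow's small default.
        text_w = draw.textlength(label)
        draw.text(((w - text_w) / 2, h / 2), label, fill="white")
        img.save(dst)

    apply_overlay("screenshot.png", "screenshot_overlaid.png")

In practice, the wording and prominence of the label matter as much as the mechanics; that is what the guidance and the design principles below address.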

Next we will be working with platforms, in continued collaboration with the Partnership on AI, to assess the opportunities for standardizing the design of misinformation markers, as well as the metrics for measuring their effectiveness.

 

It matters how platforms label manipulated media. Here are 12 principles designers should follow

Manipulated photos and videos flood our social media platforms. With PAI, we provide guidance for labeling them.

→ Read the article

 

From deepfakes to TikTok filters: How do you label AI content?

What does a future of AI content look like? The first in our two-part series explaining the role of AI in manipulating online content, exploring emerging tactics.

→ Read the article

 

There are lots of ways to label AI content. But what are the risks?

The questions, tensions and dilemmas of labeling AI content.

→ Read the article

 

Overlays: How journalists can avoid amplifying misinformation in their stories

Applying an overlay to images of misinformation prevents further amplification of harmful content.

→ Read the article

 

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.