Image: First Draft Montage / Photography: Unsplash (CTRL, Albert Antony)

There are lots of ways to label AI content. But what are the risks?

by Tommy Shane, Emily Saltz, and Claire Leibowicz

Artificial intelligence isn’t the future of our media ecosystem. It’s the present. 

AI is already completing our sentences in iMessage and Gmail, generating our influencers and even altering our sense of the past.

In some ways we’ve been slow to register this development: attention has come in sudden, fleeting bursts. But we need to pay sustained attention so we can ask when it is necessary to disclose the role of AI in creating content, when it isn’t, and how we should do it.

What should we consider?

There are many elements to consider. Should our approach be based on intent — did someone want to deceive others? Or should it be based on the content’s potential to influence, harm or deceive, regardless of intent?

Other ethical tangles arise: How might deepfakes normalize behaviors such as forced apologies, when someone is made to appear to say something they did not? How should we think about the role of consent? And what about AI technology’s role in a broader erosion of trust? 

The role of labels

As well as taking down what’s faked or false, we need to empower audiences to understand what they are seeing or hearing. One method is AI labels — visual or aural icons that can explain or indicate the role of AI. 

But before we start trying to label everything, we need to think about when labels might create more confusion than if the content were not labeled at all. We might inspire false confidence that something is fabricated when it isn’t, or suggest that all unlabeled content is trustworthy — just two of many considerations.

In the first piece of this series, we explored the emerging conventions being used by people and organizations to explain the role of AI in their content.

Now we examine the questions, tensions and dilemmas they raise, focusing on these considerations: when and how to apply an indicator, how well people understand indicators and how robust they are against attack and removal.

 

When and how should we apply an indicator?

While many have written about best practices for explaining AI models and datasets, best practices for labeling AI outputs as they travel across social media and messaging platforms are still nascent. And although many think pieces have argued for the necessity of disclosing AI authorship, what does that disclosure look like in practice as users encounter AI media on platforms?

Twitter has undoubtedly been an industry leader in driving forward a policy on synthetic media. Drawing on a public feedback process, its policy focuses on three questions: whether media is synthetic or manipulated, whether it was shared in a deceptive manner, and whether it has the potential to cause harm.

But these three dimensions, all complex in themselves, are only some of the considerations when thinking about when and how to apply an indicator. We explore some of the other questions they raise.

 

  • Who and what created the AI media? Attributing authorship to AI can overstate the capability of the technology and conceal the actual role of human editors — a problem known as the “attribution fallacy.”
  • How do we decide when the role of AI needs to be labeled? AI is involved to some degree in nearly every part of our information ecosystem. What is the threshold for AI’s involvement that qualifies it for labeling?
  • Is it clear who applied the label, and were they the right party to apply it? Some labels are applied by the creator, and some are applied retroactively by another party. This distinction may need to be communicated to viewers. And the entity that applies the label should have evidence to substantiate it.
  • What happens when we don’t label? Not labeling can create a space for speculation and confusion in the comments.
  • What are the effects of labeling a subset of AI media? Will there be an “implied un-manipulated effect”? Like the “implied truth effect,” labeling some media could falsely imply that unlabeled media is not manipulated.

 

How well do audiences understand the indicators?

As we’ve explored elsewhere in guidance on labeling visual misinformation, the way we design labels really matters. A label is only useful when it is understood. We explore some of the UX issues that have arisen from our landscape review.

 

  • Is it clear to viewers what the label means? In the world of AI, many things can be called “AI” or “deep,” even when very different processes are involved. How comprehensible are these different types of terminology to end users?
  • How easily are they noticed? A label only works insofar as it is noticeable. Where (or, for time-based media, when) should the indicator be placed so that it is seen at the optimal point?
  • How accessible are the labels? Many people have difficulty seeing and hearing. Labels need to be designed for everyone. They must be visually, audibly, technically, physically and cognitively accessible for as many people as possible.
  • How does the label appear on different platforms and screen sizes? It is important to design labels with different platform interfaces and devices in mind.
  • What other information and interactions might help people understand the AI manipulations? Specific kinds of supporting information and interaction design could give viewers a clearer understanding of what was manipulated and how.
  • How do we communicate uncertainty? Highly authoritative and seemingly quantifiable annotation, such as that offered by many deepfake detectors, could result in false confidence that something is AI-generated or manipulated.

 

How robust are the indicators? 

Labels have the potential to be attacked, manipulated, forged or edited. We need to understand how they might backfire through poor infrastructure or simplistic design.

 

  • How easily can the label be removed on the front end? It’s important for labels to travel with the media, even when it is reshared and cut for different contexts; otherwise many people will not see them.
  • How easily can the label be removed on the back end? Is the label embedded in metadata or infrastructure that travels with the asset, and how robust is that to reproduction? (See the sketch after this list.)
  • How easily can a label be faked? A deepfake label could be applied to a genuine video to deceive or sow confusion.
  • What happens when labels are erroneously applied? Without some kind of validation and means for redress, labels could mislead.
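
To make the back-end question concrete, here is a minimal sketch of what metadata-level labeling can look like and why it is fragile. It uses Python with the Pillow imaging library; the filenames and label text are hypothetical, and any real labeling infrastructure would need to be considerably more robust than a bare EXIF field.

```python
from PIL import Image

LABEL = "Synthetic media: contains AI-generated content"  # hypothetical label text

# Embed the label in the JPEG's ImageDescription EXIF field (tag 0x010E).
img = Image.open("original.jpg")
exif = img.getexif()
exif[0x010E] = LABEL
img.save("labeled.jpg", exif=exif)

# The label now travels in the file's metadata...
print(Image.open("labeled.jpg").getexif().get(0x010E))  # -> the label text

# ...but an ordinary re-save, of the kind resharing pipelines routinely perform,
# silently drops it, because the EXIF data is not passed along.
Image.open("labeled.jpg").save("stripped.jpg")
print(Image.open("stripped.jpg").getexif().get(0x010E))  # -> None
```

A bare metadata field like this is trivial to strip, and just as trivial to forge, which is exactly why the questions in the list above about faked and erroneously applied labels matter.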

 

Now is the time to test and reflect

AI is playing an ever greater role not just in the curation but also in the creation of our content. And people and organizations want to explain that to their audiences.

New conventions will create new expectations, and so AI labels could be taking us into a new era of media transparency. Or they could further confuse and mislead.

Now is the time to test their effects, and reflect on their long-term implications for authenticity on the social web.

 

This is the second in our two-part series on explaining the role of AI in online content. Read the first, a landscape review of how AI is currently being labeled.

Thanks to Claire Wardle and Sam Gregory, who provided feedback on this report.