In the months since we announced the Partnership on AI/First Draft Media Manipulation Research Fellowship, the need for collaborative, rigorous research into whether descriptive labels should be attached to manipulated and synthetic media, and how to design those labels so audiences understand them, has become increasingly urgent. Technology platforms and media entities face challenges not only in evaluating whether audio-visual content is manipulated or misleading, but also in determining how to act on that evaluation.
Labelling is touted as an effective mechanism for disclosing manipulation to audiences and thereby mitigating the impact of mis/disinformation; some platform users have even highlighted it as a particularly appealing strategy for dealing with manipulated and synthetic media.
While labels have the potential to provide important signals to online content consumers, they may also lead to unintended consequences that amplify the effects of audio-visual mis/disinformation (both AI-generated and low-tech varieties). We must, therefore, work collaboratively to investigate the impact labelling has on audiences. Doing so can help preempt any unintended consequences of scattered, hasty deployment of manipulated and synthetic media labels and ensure they prove useful for mitigating the effects of mis/disinformation.
Why we must study the impact of media manipulation labels
Labelling could vastly change how audiences make sense of audio-visual content online. Academic research on the “implied truth effect” suggests that attaching warning labels to a portion of inaccurate headlines can actually increase the perceived accuracy of other headlines that lack warnings. This line of research is still in its nascent stages and has focused on text-based mis/disinformation, which raises the question: might labels on videos and images change how people make sense of, and trust, content that is not labeled?
Such early findings imply a need to consider the potential unintended consequences of deploying labelling interventions that do not draw on diverse input and testing. Rigorous academic studies can take up to two years to be published, while organizations responding to mis/disinformation must act swiftly on manipulated and synthetic media. We must therefore work promptly alongside the academic community to better understand how labels, the language used in them, and their dynamics across different platforms and online spaces affect perceptions of information integrity.
This is an area that is ripe for multi-stakeholder, coordinated effort from technology platforms, media entities, academics, and civil society organizations – within the Partnership on AI (PAI) and beyond.
While each technology platform presents audio-visual information very differently, and even a single tech organization may operate visually distinct platforms, entirely disparate labelling methods across platforms could sow confusion in an already complex and fluid online information ecosystem. As we’ve suggested previously, we need to evaluate a more universal visual language that proves effective across different platforms and websites. This underscores the potential benefit of cross-sector collaboration in testing, sharing insights, and deploying interventions to deal with manipulated media.
Our upcoming work
PAI and First Draft are excited to begin work that furthers our community’s collective understanding of how the language and eventual labels we use to describe audio-visual manipulations impact audience perceptions of information integrity. It has been heartening to see related efforts from The Washington Post and the Duke Reporter’s Lab/Schema.org that attempt to drive towards a shared language amongst fact checkers, platforms, and researchers for capturing the nuances of different audio-visual manipulations.
Our research seeks to understand how labels describing manipulated and synthetic media might ultimately be leveraged to help audiences and end-users recognize mis/disinformation and interpret content online. To do so, we plan to map the landscape of existing interventions, develop a series of manipulated media labels for testing, and then conduct audience research on those label designs and on how labeling affects audiences more generally. This research will help illuminate whether a robust labeling mechanism can help audiences confidently and accurately recognize mis/disinformation. In doing so, we hope to improve the general understanding of how people interpret mis/disinformation and of the effectiveness of interventions designed to help them do so.
Introducing our Media Manipulation Research Fellow
Emily Saltz has joined the PAI team as the PAI/First Draft Media Manipulation Research Fellow in order to drive this timely work. Emily joins us from The New York Times (a PAI Partner), where she was the User Experience Lead on the News Provenance Project. Her previous research focused on how people assess news photography on their social media feeds, and what types of labels and contextual information might help audiences better make sense of the photo posts they see. Emily brings her design and research experience to this very human-centered work, work that will only become more integral to ensuring information integrity as techniques for manipulating audio-visual content become more widespread and varied.
PAI and First Draft look forward to collaborating with our partners and beyond on this important project, one that requires consistent and collective attention across the media integrity community. We plan to share many of our findings as we work with a diverse suite of stakeholders to consider meaningful methods for promoting information integrity.
This article is re-published with permission from PartnershiponAI.org.