An introduction to live audio social media and misinformation

Image: First Draft Montage / Photography: Shutterstock (Fizkes)

What ‘the pivot to audio’ means for misinformation research and platform moderation.

What does the rise of live, audio-based social media mean for identifying and moderating misinformation? We’ve taken a look at the platforms and their moderation policies, and offer some key takeaways for journalists and misinformation researchers. (Updated January 31, 2022.)

Audio misinformation comes in many forms

One of the challenges of tracking audio misinformation on social media is that audio is easily recorded, remixed and transcribed. During the 2020 US election, one piece of misinformation that went viral was a recording of a poll worker training in Detroit. The recording itself contained no evidence of anything nefarious, but it was cut to ominous music, overlaid with misleading text and titled #DetroitLeaks. In 2018, First Draft set up a tip line with Brazilian journalists to monitor misinformation circulating on WhatsApp in the lead-up to that country’s presidential election. Over a 12-week period, 4,831 of the 78,462 messages received were audio files, many containing misleading claims about election fraud. Transcriptions of misleading audio and video also circulated widely; one such transcription was reported to the tip line more than 200 times.

What all these cases had in common was that they were extremely difficult to track and verify. Live audio chats, like those happening on Clubhouse and its competitors, share these problems. But they also invite live, targeted harassment and are even more ephemeral, often disappearing when the conversation ends. So how can misinformation researchers track this content, and how are platforms designing policy around it? A few key themes emerge.

Moderating live audio is labor-intensive

We wrote about how audiovisual platforms have been able to sidestep criticism about misinformation even though they are a significant part of the problem. One reason is that audiovisual content takes longer to consume and study. It is also more difficult to moderate automatically: most content moderation technology relies on text or visual cues that have previously been marked as problematic, and live audio provides neither.
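
To illustrate why that matters, here is a minimal, hypothetical sketch in Python (not any platform’s actual system) of the basic pattern such tooling follows: new content is reduced to a fingerprint and compared against fingerprints of material already flagged. Real systems use perceptual hashing and trained classifiers rather than the exact matching shown here, and every name below is illustrative. The point is that text and images arrive as stable artifacts that can be fingerprinted, while a live audio stream does not.

```python
# Minimal sketch of "match against previously flagged content" moderation.
# Illustrative only; real systems use perceptual hashes and classifiers.

import hashlib

# A store of fingerprints for content already marked as problematic.
known_bad_hashes = {
    hashlib.sha256(b"example of a previously flagged caption").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Reduce a stable artifact (text, image bytes) to a comparable fingerprint."""
    return hashlib.sha256(content).hexdigest()

def is_previously_flagged(content: bytes) -> bool:
    """Check a new piece of content against the store of past flags."""
    return fingerprint(content) in known_bad_hashes

# A text post or image can be fingerprinted the moment it is uploaded:
print(is_previously_flagged(b"example of a previously flagged caption"))  # True

# A live audio conversation offers no such artifact: by the time it could be
# transcribed and compared, the conversation may already be over.
```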

It’s no surprise, then, that Clubhouse and Twitter Spaces rely on users to flag potentially offending conversations (see below for specific policies) as their primary form of moderation, shifting the burden to the people targeted by the content. On Facebook (whose parent company is now known as Meta), listeners can report an audio room for a potential breach of community standards.

Ethics and privacy 

Part of the appeal of live audio chats is that they feel intimate. This raises issues of consent and privacy for researchers and journalists who might be interested in using the spaces for newsgathering or tracking misinformation. Clubhouse room creators can record chats; Facebook lets audio room hosts post a recording after the room has ended.

Regardless of a platform’s official policy, journalists and researchers should consider carefully whether it’s necessary to listen in on or use material from conversations in these spaces. Participants might not be aware of who is in the room or that their words could be made public. Reporters should also question whether misinformation they hear on a platform merits coverage: Has the falsehood traveled widely enough that reporting on it will not amplify it to new audiences? In many cases, the size of the room is the only available metric, as there are no shares, likes or comments to help gauge a conversation’s reach and the reactions it draws.

The platforms and what we know about their moderation policies

Clubhouse

  • How it works: Launched in March 2020, Clubhouse is the app that many have credited with the surge in live audio social media. Speakers broadcast live from “rooms” to listeners, who tune in and out as they please.
  • Moderation policy: Part of the reason Clubhouse gained so much attention soon after its launch was its laissez-faire attitude toward moderation. It provides some community guidelines, which emphasize users’ role in moderating and flagging content.

Facebook live audio rooms

  • How it works: In public groups, anyone can join a live audio room; private groups are open to members only. A room’s host can monetize the room by letting users send Stars — paid tokens of appreciation — or donations.
  • Moderation policy: Room creators can remove unwanted participants.

Twitter Spaces

  • How it works: The feature provides live audio chat rooms that anyone can listen to. Up to 13 people (including the host and two co-hosts) can speak at a time; there is no limit on the number of listeners.
  • Moderation policy: If users think a space violates Twitter’s rules, they can report the space or any account in it, according to Twitter’s FAQ.

Reddit Talk

  • How it works: As of this writing, the feature remains in beta. It lets users host live audio conversations in Reddit communities. For now, only community moderators can start a talk, but anyone can join to listen. Hosts can grant listeners permission to speak.
  • Moderation policy: “Hosts can invite speakers, mute speakers, remove redditors from the talk, and end the talk,” according to Reddit. Community moderators have the same privileges, as well as the ability to start talks and ban members from the community.

Discord Stage Channels

  • How it works: Discord is a chat app geared toward gamers, letting them find one another and talk while playing. It supports video calls, voice chat and text. In 2021, it introduced “Stage Channels,” a Clubhouse-like function allowing users to broadcast live conversations to a room of listeners.
  • Moderation policy: The platform’s community guidelines, last updated in May 2020, read: “If you come across a message that appears to break these rules, please report it to us. We may take a number of steps, including issuing a warning, removing the content, or removing the accounts and/or servers responsible.”

Spoon

  • How it works: Spoon, an audio-only live platform, has been around since 2016. It allows users to create and stream live shows where audience members can sit in and participate. 
  • Moderation policy: Spoon details its community guidelines on its website. Its guidelines on inappropriate language offer an example of how violations are handled: “Some language is not appropriate for users under 18. We reserve the right to consider that when deciding to restrict or remove content including stopping Lives or asking you to change stream titles.”

Quilt

  • How it works: It is similar to Clubhouse, but with a focus on self-care.
  • Moderation policy: Its community guidelines read: “As a Host, you’re given the ability to mute other participants or move speakers back to the listening section in case anything should happen that takes away from the experience of the group.” Users are invited to report any behavior that doesn’t align with the guidelines to the Quilt team, upon which the platform “may remove the harmful content or disable accounts or hosting privileges if it’s reported to us.”
