
An introduction to live audio social media and misinformation

Image: First Draft Montage / Photography: Shutterstock (Fizkes)

What ‘the pivot to audio’ means for misinformation research and platform moderation.

Facebook’s planned suite of new audio features could be the most significant development in live, audio-based social media since Clubhouse splashed onto the market and prompted copycat features on other platforms, such as Twitter Spaces and Reddit Talk. But what does this trend mean for identifying and moderating misinformation? We’ve taken a look at each of the platforms and their moderation policies, and offer some key takeaways for journalists and misinformation researchers.

Audio misinformation comes in many forms

One of the challenges of tracking audio misinformation on social media is that audio is easily recorded, remixed and transcribed, so the same claim can circulate in many mutated forms. During the 2020 US election, one piece of misinformation that went viral was a recording of a poll worker training session in Detroit. The recording itself contained no evidence of anything nefarious, but it was cut to ominous music, overlaid with misleading text and titled #DetroitLeaks. In 2018, First Draft set up a tip line with Brazilian journalists to monitor misinformation circulating on WhatsApp in the lead-up to that country’s presidential election. Over a 12-week period, 4,831 of the 78,462 messages received were audio files, many containing misleading claims about election fraud. Transcriptions of misleading audio and video were also popular; one example was reported to the tip line more than 200 times.

What all these cases had in common was that they were extremely difficult to track and verify. Live audio chats, like those happening on Clubhouse and its competitors, share these problems and add new ones: they invite live, targeted harassment, and they are even more ephemeral, disappearing when the conversation ends. So how can misinformation researchers track this content, and how are platforms designing policy around it? A few key themes emerge.

Moderating live audio is labor-intensive

In December, we wrote about how audiovisual platforms have been able to sidestep a lot of criticism about misinformation even though they are a significant part of the problem. One of the reasons is that audiovisual content takes longer to consume and study. It is also more difficult to moderate automatically. Most content moderation technology relies on text or visual cues that have previously been marked as problematic; live audio provides neither.
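
To see why, it helps to look at how that technology typically works. Below is a minimal sketch, in Python, of the hash-matching approach many moderation pipelines use. Every name and fingerprint in it is hypothetical, and production systems generally rely on perceptual hashes rather than exact ones, but the core idea is the same: new content is compared against fingerprints of content that was previously flagged.

```python
import hashlib

# Hypothetical denylist of fingerprints from previously flagged posts.
# Real systems use perceptual hashes, which tolerate small edits,
# rather than exact hashes, but the lookup logic is similar in spirit.
KNOWN_BAD_FINGERPRINTS = {
    "9b74c9897bac770ffc029102a200c5de",  # e.g., a flagged image
    "3c59dc048e8850243be8079a5c74d079",  # e.g., a flagged text snippet
}

def fingerprint(content: bytes) -> str:
    """Reduce a static piece of content to a comparable fingerprint."""
    return hashlib.md5(content).hexdigest()

def is_previously_flagged(content: bytes) -> bool:
    """Check new content against known problematic content.

    This only works when the content is a stable artifact that has
    been seen before. A live audio conversation is neither: there is
    nothing to fingerprint until the talk is over, and nothing to
    match it against.
    """
    return fingerprint(content) in KNOWN_BAD_FINGERPRINTS
```

Live audio defeats this approach twice over: there is no stable file to fingerprint while a conversation is in progress, and transcribing speech in real time at platform scale, accurately enough to moderate on, remains expensive and error-prone.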

It’s no surprise, then, that Clubhouse and Twitter Spaces rely on users to flag potentially offending conversations as their primary form of moderation (see below for specific policies), shifting the burden onto the people targeted by the content. As for Facebook, CEO Mark Zuckerberg has not yet indicated how, or to what extent, its live audio chats will be moderated.

Ethics and privacy — or the illusion of them

Part of the appeal of live audio chats is that they feel intimate. This raises issues of consent and privacy for researchers and journalists who might be interested in using the spaces for newsgathering or tracking misinformation. Clubhouse tries to mitigate this by prohibiting recordings. (Facebook, on the other hand, almost seems to joke about informed consent: In a short promotional video titled “Bringing Social Audio Experiences to Facebook,” one voice asks another if they want to record something together. The other voice replies, “I do. Actually, I’m already recording.”)

Regardless of a platform’s official policy, journalists and researchers should consider carefully whether it is necessary to listen in on, or use material from, conversations in these spaces. Participants might not be aware of who is in the room or that their words could be made public. Reporters should also question whether misinformation they hear on the platform merits coverage: Has the falsehood traveled widely enough that reporting on it will not amplify it to new audiences? In many cases the size of the room is the only available metric, as there are no shares, likes or comments to help evaluate a conversation’s reach and the reaction it is getting.

The platforms and what we know about their moderation policies

Clubhouse

  • How it works: Launched in March 2020, Clubhouse is the app that many have credited with the current surge in live audio social media. First Draft published an extensive primer in February; the main thing to remember is that speakers broadcast live from “rooms” to listeners who tune in and out.
  • Moderation policy: Part of the reason Clubhouse gained so much attention last year was its laissez-faire attitude toward moderation. It provided some “community guidelines,” which emphasize users’ role in moderating and flagging content. Clubhouse says it retains a recording of a room as long as it is live, in case a community guidelines violation is reported and needs to be investigated, but otherwise the recording is deleted when the room ends.

Facebook live audio rooms

  • How it works: On April 19, Facebook published a detailed overview of a suite of “social audio experiences,” including short-form audio clips that people can post to their feeds, dubbed “Soundbites,” and Clubhouse-style “Live Audio Rooms” in Facebook and Messenger. Users will also be able to tip their favorite content creators using Facebook’s digital currency. Live Audio Rooms are expected to be available by summer, and Vox reports that the other products will roll out over the coming weeks and months.
  • Moderation policy: In an interview with Platformer’s Casey Newton, Zuckerberg said the jury was still out on how and to what extent Facebook would moderate live audio rooms. Although the feature hasn’t launched yet, Facebook does have a statement about privacy for its video-based Messenger Rooms.

Twitter Spaces

  • How it works: Twitter launched Spaces on iOS in December 2020 and on Android in March 2021; it is not yet available in desktop browsers. The feature provides live audio chat rooms that anyone can listen to. The creator, or “host,” of a room can designate up to 11 people, including themself, to speak at a time.
  • Moderation policy: If users think a space violates Twitter’s rules, they can report the space or any account in the space, according to Twitter’s FAQ. Twitter retains copies of audio for 30 days after a space ends in case it needs to review potential rules violations. If Twitter finds a violation, it keeps the copy for an additional 90 days to allow people to appeal.

Reddit Talk

  • How it works: On April 19, Reddit announced early testing of a new feature that lets users host live audio conversations in Reddit communities. For now, only community moderators can start a “talk,” but anyone can join to listen, and hosts can grant listeners permission to speak.
  • Moderation policy: “Hosts can invite, mute, and remove speakers during a talk. They can also remove unwanted users from the talk entirely and prevent them from rejoining,” according to Reddit.

Discord Stage Channels

  • How it works: Discord is a chat app geared toward gamers, letting them find one another and talk while playing. It supports video calls, voice chat and text. This March, it introduced “Stage Channels,” a Clubhouse-like function allowing users to broadcast live conversations to a room of listeners.
  • Moderation policy: The platform’s community guidelines read: “If you come across a message that appears to break these rules, please report it to us. We may take a number of steps, including issuing a warning, removing the content, or removing the accounts and/or servers responsible.”

Spoon

  • How it works: Spoon, which has been around since 2016, allows users to create and stream live shows where audience members can sit in and participate. 
  • Moderation policy: Spoon details its expectations in its community guidelines. The guidelines on inappropriate language offer an example of how violators are affected: “Some language is not appropriate for users under 18. We reserve the right to consider that when deciding to restrict or remove content including stopping Lives or asking you to change stream titles.”

Quilt

  • How it works: Quilt is similar to Clubhouse, but with a focus on self-care conversations.
  • Moderation policy: Its community guidelines read: “As a Host, you’re given the ability to mute other participants or move speakers back to the listening section in case anything should happen that takes away from the experience of the group.” Users are invited to report any behavior that doesn’t align with the guidelines to the Quilt team, upon which the platform “may remove the harmful content or disable accounts or hosting privileges if it’s reported to us.”

Tommy Shane contributed to this report. 
