What is Birdwatch?

Birdwatch is a new program launched by Twitter to combat misinformation on the platform, using volunteers to help determine the reliability of content. According to Twitter, Birdwatch invites selected users “to identify information in Tweets they believe is misleading and write notes that provide informative context.” Eventually, Twitter plans to attach curated versions of these notes directly to tweets. 

This strategy, known as crowdsourced moderation, places some of the onus of dealing with misinformation on the users of a platform. Researchers have previously advocated for crowdsourced methods, in part because the sheer scale of misinformation far exceeds the capacity of dedicated content moderators and fact checkers. MIT researchers studying crowdsourced fact checking suggested in an October 2020 preprint that the practice could protect against the accusations of bias often lobbed at the fact-checking industry (although this hasn’t been empirically tested).

Birdwatch is still in its pilot phase, so for now it exists on a separate site, viewable only by users in the US. Only a small set of users are allowed to add notes to tweets, “largely on a first come, first-served basis.” When “birdwatchers” leave notes on a tweet, they are also asked to designate a category of misinformation: misleading context, opinion presented as fact, etc. Other birdwatchers can then rate these notes for helpfulness. Data from Birdwatch, including the number of notes, the reasoning behind them, and the ratings the notes received, are now being published daily, allowing researchers to track the success of Birdwatch in real time.
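
For researchers who want to work with these daily releases, a minimal loading sketch in Python is below. The file names and column names used here (notes-00000.tsv, ratings-00000.tsv, noteId, classification) are assumptions about the published tab-separated files rather than a documented schema; check Birdwatch’s download page for the current format.

    # Minimal sketch: load the daily Birdwatch note and rating exports.
    # File and column names are assumptions; adjust them to the published schema.
    import pandas as pd

    notes = pd.read_csv("notes-00000.tsv", sep="\t")      # one row per Birdwatch note
    ratings = pd.read_csv("ratings-00000.tsv", sep="\t")  # one row per helpfulness rating

    # How many notes fall into each misinformation category chosen by the author?
    print(notes["classification"].value_counts())

    # Join ratings to notes to count how many helpfulness ratings
    # notes in each category have received.
    rated = ratings.merge(notes, on="noteId", how="left")
    print(rated.groupby("classification").size())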

What are the issues with crowdsourcing?

Researchers, journalists and users have expressed concerns about:

  • Brigading: Brigading refers to collective action by social media users, usually for nefarious purposes, that often targets a particular user. Members of a subreddit teaming up to report one particular post en masse is an example of brigading. Birdwatch is vulnerable to brigading. Twitter claims it is bracing for this possibility, but only time will tell how effectively it can mitigate such attacks.
  • Bias and discrimination: Because Birdwatch is an all-volunteer force, Twitter runs the risk of cultivating a self-selecting and unrepresentative group of fact checkers that could replicate existing biases and exacerbate discrimination on the platform.
  • Labor concerns: Crowdsourced content moderation efforts have been criticized for placing the burden of dealing with unpleasant content on unpaid users, who are often themselves the target of unpleasant content. According to J. Nathan Matias, founder of the Citizens and Technology Lab at Cornell University, this also allows the platforms that profit to position “themselves as champions of free expression and cultural generativity” while outsourcing the labor of moderation. 
  • Prioritizing salacious content: In one analysis of crowdsourced fact-checking efforts, a fact checker noted that particularly interesting content tended to be selected quickly for checking over more mundane misinformation. 

To ensure that Birdwatch is accomplishing its stated goal of “empowering the Twitter community to create a better-informed world,” Twitter should:

  • Keep a close eye on harassment occurring on Birdwatch. Several experts have already expressed concern that Birdwatchers will be targeted by bad actors. In six months’ time, Twitter should be able to say, “We saw X instances of targeted harassment in this new space.”
  • Share information with researchers about misinformation on the platform, so they can assess whether Birdwatch is having an effect on the spread of misinformation.
  • Have (and be transparent about) a plan for determining which voices are considered authoritative on Birdwatch, to ensure inequality is not being replicated. (Birdwatch eventually intends to institute a “reputation” measure that awards reliable users more weight on the platform.)

Has crowdsourcing content moderation been tried before?

Yes! Even before the announcement of Birdwatch, mainstream platforms including Twitter relied on user reporting mechanisms to address misinformation, which is in itself a method of crowdsourced moderation. 

Several other popular sites, including Reddit, Quora and Stack Overflow, use some kind of upvoting mechanism, where users vote for the content they would like to see appear first on a page. This is similar to the “ratings” element of Birdwatch, which allows users to rate particularly helpful content.

Studies have shown positive results for crowdsourcing content moderation:

  • Using a set of articles flagged for fact checking by an internal Facebook algorithm, one October 2020 preprint study found that a “crowd” of 10 lay people rating only headlines and introductory passages matched the performance of professional fact checkers researching full articles.
  • Full Fact, a UK-based fact-checking organization, identified in a 2018 blog post a number of tasks that are well-suited to crowdsourcing, such as “spreading the correct information” and “submitting claims to check.”
  • Another 2020 study, which identified emerging Covid-19 misinformation by tracking Twitter replies that provided accurate information, found that volunteer fact checkers can help pinpoint and correct misinformation on social media.

Assessing early usage of Birdwatch

On February 1, Twitter released its first significant dataset about the content on Birdwatch, giving researchers insights into how the experiment is working. The most current data First Draft’s researchers analyzed provided information on 1,425 notes left on 1,081 unique tweets. Seventy-three percent of those notes marked tweets as “misinformed or potentially misleading” and 27 percent as “not misleading.”
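
As a rough illustration, the headline figures above could be computed from the published notes file with a few lines of pandas. This sketch reuses the assumed column names from the earlier example; the classification labels are likewise assumptions about how the dataset encodes “misinformed or potentially misleading” and “not misleading.”

    # Sketch: reproduce the note counts and classification shares described above.
    import pandas as pd

    notes = pd.read_csv("notes-00000.tsv", sep="\t")

    total_notes = len(notes)                    # e.g., 1,425 in the February 1 snapshot
    unique_tweets = notes["tweetId"].nunique()  # e.g., 1,081 unique tweets

    share_by_label = notes["classification"].value_counts(normalize=True) * 100
    print(f"{total_notes} notes on {unique_tweets} tweets")
    print(share_by_label.round(1))              # roughly 73% misleading vs. 27% not misleading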

Clicking through notes on the Birdwatch interface reveals some early issues with the new functionality. There are a number of Birdwatch notes full of sarcasm, trolling, satire and partisan bickering. For example, one note left on an Elon Musk tweet reads: “I have a theory that Claire (Grimes) tweets from his account from time to time.” In a sense, Birdwatch is operating like an extension of Twitter itself, not an additional layer of evidence-based posts providing much-needed context.

Ratings are available on Birdwatch so that less useful comments get downgraded in favor of more evidence-based content. However, there’s not yet a critical mass of users to make this possible. Only 512 unique users left notes on Birdwatch in the data provided, and more than 85 percent of the 1,081 Birdwatched tweets in the dataset contain only one note, making upvoting and downvoting moot. Until the user base grows, it’s hard to say how effective Birdwatch will be.
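
The participation measures described above can be sketched the same way, again assuming a participantId column identifies each note’s author and tweetId the annotated tweet.

    # Sketch: how many distinct note authors there are, and what share of
    # Birdwatched tweets carry only a single note (making ratings moot).
    import pandas as pd

    notes = pd.read_csv("notes-00000.tsv", sep="\t")

    unique_authors = notes["participantId"].nunique()        # e.g., 512 in the early data
    notes_per_tweet = notes.groupby("tweetId").size()
    single_note_share = (notes_per_tweet == 1).mean() * 100  # e.g., more than 85 percent

    print(f"{unique_authors} note authors")
    print(f"{single_note_share:.0f}% of annotated tweets have only one note")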