
After the Capitol: tech and disinfo

On January 6, hours after President Donald Trump repeated falsehoods about the 2020 election to a crowd of his supporters in Washington D.C., a mob stormed and vandalized the US Capitol, forcing the evacuation of legislators and interrupting their work to certify the presidential election result. Five people were killed, one of them shot by a Capitol Police officer. The shocking incident appears to have been the last straw for the major social media companies, which responded with their strongest action yet against Trump and the harmful disinformation he has spread. Twitter announced a 12-hour suspension, Facebook and Instagram banned the president’s profile for the rest of his term, and YouTube said it would begin restricting accounts that post videos containing false information about the election.

With this sweeping action, Silicon Valley’s leading social media firms answered to some extent the chorus of voices calling for the president to lose the digital platforms through which he has aimed to discredit the 2020 election’s results and, some say, to incite violence. 

In many ways, the actions were also a reversal for the social media companies, albeit one toward which they had been gradually building. In a 2018 blog post, Twitter said that blocking a world leader from the platform or removing their tweets “would hide important information people should be able to see and debate.” Of course, by the time it temporarily suspended Trump this week, Twitter had already taken to labeling, limiting or removing his offending tweets, a step it first took in May. Facebook, too, had previously balked at limiting Trump’s reach on its platform, with CEO Mark Zuckerberg citing Facebook’s strong “free expression” principles. 

We don’t know yet what the big picture of disinformation will look like after January 21. 

There’s already widespread anticipation of further action against Trump and the disinformation he has fomented. Kevin Roose of The New York Times reported that several Twitter and Facebook employees expect those bans to be extended beyond Trump’s term. 

If he loses his bully pulpit on social media, an obvious short-term fix for Trump might be to migrate to Parler or Gab, two smaller social media services seen as friendlier to right-wing figures for their less strict moderation. This would surely limit Trump’s audience, but as Roose points out, the president has benefited from the close attention afforded his tweets by mainstream newsrooms. Whatever happens between now and Inauguration Day — and afterward — one thing seems clear: Trump will remain a newsworthy figure. How will newsrooms cover Trump when he is never more than a few keystrokes away, whether it’s on Twitter or in the comparative niches of Gab or Parler? And how will companies like Facebook continue to fulfill the role they tacitly accepted in the wake of the storming of the Capitol when harmful messaging is sure to continue circulating?

The platforms have set a new standard. Can they enforce it worldwide? 

The platforms’ enforcement actions this week have been a response to concerns they don’t do enough to police disinformation. By acknowledging to the greatest extent yet their aim to limit the spread of harmful information, the companies have also raised questions about their worldwide reach — and responsibility. 

Brazil’s president, Jair Bolsonaro, has been compared in many ways to Trump, particularly as a user of social media to bypass journalists he views as hostile in order to speak directly to his base. He’s also been the subject of enforcement action, after Twitter, Facebook and YouTube in March removed some of his posts containing misinformation about the coronavirus. If Brazil finds itself in a political crisis and Bolsonaro uses his digital presence to organize a rally that might turn violent, will social media platforms be prepared to head off unrest?  

Look elsewhere, and a similar dilemma emerges for Facebook. In the Philippines, a country with an alarming level of violence against the press, President Rodrigo Duterte’s government has used official Facebook pages to target media outlets that criticize it. When social media platforms are a vehicle for threats and intimidation, will they take a closer look at the tension between a government’s right to communicate and the capacity for dangerous incitements to violence, not just in the US, but around the world? 

Are the platforms equipped to prevent further political violence?

We already know that much of the chaos of January 6 was planned in wide-open online spaces. 

Organizing a political rally online is legal on its own, of course, but inciting a violent riot isn’t. At First Draft, our monitoring team, like other organizations that work to counter disinformation, found numerous calls for violence in fringe spaces like The Donald, a gathering place for Trump supporters that was banned from Reddit in June. On Facebook, a Group with nearly 8,000 members planned travel to the January 6 rally. 

Can social media platforms connect the dots between organizing that takes place on their platforms and the red flags that portend violence? What about when those red flags pop up elsewhere? Individual social media platforms can be focal points for harmful information, but its spread is complex and eludes the reach of any one service. On January 6, the authorities knew clashes were likely and prepared accordingly, although, it turned out, inadequately. But it doesn’t take coordination on the massive scale seen in Washington to lead to violence. Social media has been and will remain a driver for disinformation, unrest and worse in venues large and small. 

To ensure that the shocking — and deadly — events of January 6 aren’t repeated, social media platforms will need to embrace the fragmentary nature and massive scale of harmful information online. This may be easier said than done. 


The “broadcast” model no longer works in an era of disinformation

By Carlotta Dotto, Rory Smith and Chris Looft

After November 3, some believers in the conspiracy theory that the US presidential election was rigged joined together, undeterred by official assurances of a fair and secure vote, to crowdsource evidence of massive fraud. They organized under hashtags like #DominionWatch, staking out election facilities in Georgia and doxxing voting-machine technicians. The ad hoc group investigation recalled the offline spillover of the QAnon-promoted #saveourchildren, which led participants to flood the hotlines of child welfare services with questionable tips about what they claimed was a vast child-trafficking conspiracy. 

Empowering members to be an active part of a conspiracy theory’s development is fundamental to the QAnon community, which has been likened to “a massive multiplayer online game,” complete with “the frequent introduction of new symbols and arcane plot points to dissect and decipher.” And as the year closes, the community model of conspiracy theories has also energized #theGreatReset, which encourages participants to connect the dots between geopolitical events to form a picture of a nefarious plot by a shadowy world government to enslave the human race. 

2020 was the year that demonstrated conclusively that effective disinformation communities are participatory and networked, while quality information distribution mechanisms remain stubbornly elitist, linear and top-down.

In trying to explain the influence of false and misleading information online, researchers and commentators frequently focus on the recommendation algorithms that emphasize emotionally resonant material over facts. Algorithms may indeed lead some toward conspiracy theories, but the dominant yet deeply flawed assumption that internet users are passive consumers of content needs to be discarded once and for all. We are all participants, often engaging in amateur research to satisfy our curiosity about the world. 

It’s critical we keep that in mind, both to understand how bad information travels, and to develop tactics to counter its spread. We need to move beyond thinking of disinformation as something broadcast by influential bad actors to unquestioning “audiences.” Instead, we’d do well to remember that ideas, including false and misleading information, circulate and evolve within robust ecosystems, a form of collective sense-making by an uncertain online public.

Networks of amateur researchers can be formidable evangelists for their theories. A team of researchers led by the physicist and data scientist Neil Johnson found in a May 2020 investigation of online anti-vaccine communities that while the clusters of anti-vaccine individuals were smaller than those formed by defenders of vaccination, there were many more of them, creating more “sites for engagement” — opportunities to persuade fence-sitters. In our research into the “Great Reset” conspiracy theory, we found the content underpinning the theory had spread widely across Facebook in many local and interest-based spaces, garnering considerable opportunities to persuade the curious. 

Crucially, the networks powering conspiracy theories like QAnon and the anti-vaccine movement are democratized, encouraging mass participation. On the other hand, the fact checks deployed against conspiracy theories almost always come from verified sources that are less than accessible to the general public: government agencies, news outlets, scientists. Networked conspiracy communities are proving resilient against this top-down approach. 

The conspiracy theory network in action

This year, the World Economic Forum’s founder, Klaus Schwab, launched a messaging campaign calling for sweeping political and economic changes to address the inequalities laid bare by the coronavirus pandemic — a “Great Reset.” The language used by Schwab and allies, including Prince Charles and Canadian Prime Minister Justin Trudeau, sparked a pervasive conspiracy theory that was advanced both by influential surrogates such as Fox News host Tucker Carlson and a diverse web of communities online. 

To illustrate the networked dynamics that reinforce conspiracy theories as they did around the Great Reset, we examined that theory’s rise to prominence on Facebook, with an eye on the impact of the fact checks and debunks published by trusted sources in response.  

We gathered 7,775 public Facebook posts that contained links and that mentioned “Great Reset” between November 16, when the conspiracy theory trended on Twitter, and December 6, and found that just a tiny fraction of the posts shared in that time included debunks or fact checks challenging the conspiracy theory. 

Instead, the leading posts in our dataset advanced various aspects of the complex conspiracy theory. They warned of “globalist overlords,” a “socialist coup,” and an “organized criminal cabal” planning a “genocide.” The most-shared URL quoted Carlson referring to the Great Reset as a “chance to impose totally unprecedented social controls on the population in order to bypass democracy and change everything.” Fact checks appeared in just one per cent of the posts mentioning the Great Reset, and even then, some of the top posts linking to them came from accounts condemning the news outlets, such as one post claiming “fake news and gaslighting,” further advancing the conspiracy theory narrative. The network of Groups and Pages sharing these conspiracy theory URLs among one another drove up the readership and engagement on those links, unhindered by the fact checks offered by traditional sources.

We took a look at the websites posted most often in our dataset and found that YouTube was the most popular domain, appearing in 1,945 posts — about a quarter of them — followed by BitChute, another video platform known for its far-right and conspiracy theory-related content. 
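The domain tally described above can be approximated with a few lines of Python. This is a hedged sketch on toy data, not our actual pipeline: the URLs are invented examples, and a real analysis would also need to reconcile variants such as youtu.be with youtube.com.

```python
from urllib.parse import urlparse
from collections import Counter

def top_domains(urls, n=5):
    """Tally the host of each outward link and return the n most common."""
    counts = Counter()
    for url in urls:
        host = urlparse(url).netloc.lower()
        # Collapse the "www." prefix so www.youtube.com and youtube.com count together
        if host.startswith("www."):
            host = host[4:]
        counts[host] += 1
    return counts.most_common(n)

# Toy stand-ins for the outward links extracted from the Facebook posts
links = [
    "https://www.youtube.com/watch?v=abc",
    "https://youtube.com/watch?v=def",
    "https://www.bitchute.com/video/xyz/",
]
print(top_domains(links))  # youtube.com leads with 2, bitchute.com follows with 1
```

Ranking hosts this way is what surfaces YouTube and BitChute as the leading destinations in a dataset of shared links.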

Top domains in public Facebook posts mentioning the “Great Reset” between November 16 and December 6, 2020

A bar chart showing the number of posts per domain. The top result is YouTube, with over 1,500 posts, followed by BitChute, with around 250, and ZeroHedge, with slightly fewer.

After isolating the individual YouTube videos appearing in the dataset, we found that the 20 most-viewed videos all advanced a version of the conspiracy theory narrative. Together, as of December 17 they’ve been viewed nearly 9 million times on the platform. The widespread sharing of YouTube and BitChute links underscores how disinformation seamlessly skips among platforms, unhindered by moderation.

The prominent role of YouTube in the conspiracy theory makes sense, given what we know about the “echo chamber” of right-wing content on the platform. YouTube’s recommendation algorithm certainly plays a role in trapping viewers in echo chambers, also known as “information bubbles.” But more importantly, this content proliferates with the help of organic peer-to-peer networks that drive audiences to conspiracy theory URLs and videos.

Visually mapping the ecosystem of Facebook Pages and Groups posting links about the “Great Reset” and the URLs most frequently shared in the set, we were able to find further evidence of the power imbalance between the conspiracy theory and the fact checks. 

In the network visualization below, Pages and Groups that frequently share links among one another are grouped more closely together. The Facebook Groups and Public Pages promoting the conspiracy theory  — represented by purple dots in the diagram — form a dense cluster of highly interconnected accounts, pointing to an ad hoc network that drives up engagement figures on conspiracy theory content. Importantly, we can’t conclude that the density of this network points to deliberate coordination. Rather, it suggests that like-minded Groups and Pages amplify a narrative by widely sharing, often among one another, the URLs and videos that advance it. 

On the other hand, the orange dots representing fact checks and debunks spread in relatively small and isolated community clusters, remaining marginalized from the concentrated group in the center. The most frequently posted fact check — the Daily Beast’s “The Biden Presidency Already Has Its First Conspiracy Theory: The Great Reset” — couldn’t penetrate the core cluster of Pages and Groups championing the “Great Reset” conspiracy theory, even though its URL appeared in 37 different posts. The fact checks are having trouble keeping up. 

The network of Facebook Pages and Groups sharing links about the “Great Reset”

Combating networked disinformation

A new approach is needed, one that moves away from “chasing misinformation” and instead proactively challenges and pre-empts its emergence. Rather than focusing on what messaging to send, and who should send it, we need to focus on initiatives based on co-creation and participatory strategies with existing communities. Public health professionals have shown the value of participatory strategies in dealing with rumors, falsehoods and stigma around HIV and the Ebola outbreak. The appeal to expertise inherent in traditional fact checking prevents trusted sources, however large their audiences, from drawing on the same kind of participant networks that power effective communications strategies — as well as conspiracy theories. It won’t be an easy task, but in 2021, we’ll have to transform the fact-checking strategy built around the model of a one-way broadcast into one that addresses the complexity of disinformation networks before more fence-sitters are pulled into the fray.

Note on methodology: 

We used CrowdTangle’s API to collect 11,808 posts from Facebook Groups and unverified Facebook Pages that mentioned “great reset” or “greatreset” between November 16, 2020 and December 6, 2020. We then filtered for posts that included outward links to other websites and social media platforms. To understand the potential impact of fact checks on the conspiracy theory, we also identified all the URLs from reliable news organizations that debunked the conspiracy theory and ran those URLs through CrowdTangle’s links endpoint to gather all the Facebook posts that shared them. These two datasets, which included a total of 7,775 posts, were then merged for analysis using Pandas and Gephi. In Gephi we re-sized the nodes based on the in-degree values in order to highlight the URLs that were most frequently shared in this set. URLs with more shares are represented by larger nodes. We used the “ForceAtlas 2” layout to bring accounts that frequently share among one another closer together.
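The merge-and-count step of this methodology can be sketched in Pandas. The frames below are hypothetical miniature stand-ins for the two CrowdTangle exports (the real column names and API access are not reproduced here); the point is that a URL’s share count is its in-degree in the account-to-URL graph, the same value used to re-size nodes in Gephi.

```python
import pandas as pd

# Toy stand-in for the keyword-matched posts ("great reset" mentions with outward links)
keyword_posts = pd.DataFrame({
    "account": ["Page A", "Group B", "Group B"],
    "link": ["http://example.com/reset", "http://example.com/reset",
             "http://example.com/video"],
})
# Toy stand-in for posts found via the links endpoint (shares of debunk URLs)
factcheck_posts = pd.DataFrame({
    "account": ["Page C", "Page D"],
    "link": ["http://example.com/debunk", "http://example.com/debunk"],
})

# Merge the two datasets, keeping only rows that contain an outward link
posts = pd.concat([keyword_posts, factcheck_posts],
                  ignore_index=True).dropna(subset=["link"])

# Each URL's share count is its in-degree in the account -> URL graph,
# the value Gephi uses to size the nodes in the network diagram
in_degree = posts["link"].value_counts()
print(in_degree.to_dict())
```

A layout algorithm such as ForceAtlas 2 then pulls accounts that repeatedly share the same URLs into the dense clusters visible in the visualization.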

AI won’t solve the problem of moderating audiovisual media

In a 2018 testimony before the US Senate, Facebook chief Mark Zuckerberg said he was optimistic that in five to ten years, artificial intelligence would play a leading role in automatically detecting and moderating hate speech. 

We got a taste of what AI-driven moderation looks like when Covid-19 forced Facebook to send home its human content moderators in March. The results were not encouraging: greater quantities of child pornography, news articles erroneously marked as spam and users who were temporarily unable to appeal moderation decisions. 

But the conversation about automated moderation is still mostly focused on text, despite the fact that visuals, videos and podcasts are powerful drivers of disinformation. According to research First Draft recently published on vaccine-related narratives, photos and visuals accounted for at least half of the misinformation that was studied.

One issue with such an approach is that platforms geared toward audiovisual content, such as YouTube, TikTok and Spotify, have sidestepped a lot of scrutiny in comparison. Part of this is a structural issue, says Evelyn Douek, a lecturer at Harvard Law School. Watching visual media is more time-intensive, and the journalists and researchers who write about platform moderation spend more time on text-based platforms like Facebook and Twitter. “Text is easier to analyze and search than audiovisual content, but it’s not at all clear that that disproportionate focus is actually justified,” Douek says. 

Unlike Zuckerberg or Twitter’s Jack Dorsey, YouTube’s CEO, Susan Wojcicki, has not been called in for questioning by the Senate Judiciary Committee, even though many of the company’s moderation policies have been opaque and the subject of criticism. For example, YouTube recently announced that it would remove videos claiming that voter fraud altered the outcome of the US presidential election, only to later tell some users how to post similar content to bypass such moderation. Others were quick to point out that policy enforcement seems to vary according to inconsistent semantic distinctions.

We know even less about moderation on TikTok, a platform that is expected to top one billion monthly active users in 2021 and which once instructed its moderators to remove content of people who have an “abnormal body shape” or poor living conditions.

When audiovisual disinformation is both widespread and difficult to define, how prepared are the platforms to moderate this type of content, and how prepared are researchers to understand the consequences of that moderation? And are we as confident as Zuckerberg that AI will help?

Automated platform moderation comes in two main forms: “matching” technology that uses a database of already-flagged content to catch copies, and “predictive” technology that finds new content and assesses whether it violates platform rules. We know relatively little about predictive tech at the major platforms. Matching technology, on the other hand, has had some major successes for content moderation, but with important caveats.

One of the first models of moderating audiovisual content at scale was a matching technology called photoDNA, developed to catch sexual abuse of children. Adopted by most of the major platforms, photoDNA is largely viewed as a success, in part due to the relatively clear boundaries around what constitutes child sexual abuse material, but also because of the clear payoff in taking down this content.
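The matching workflow behind a system like photoDNA can be illustrated with a toy sketch. This is an assumption-laden simplification: photoDNA itself is proprietary and uses perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 below only catches byte-identical copies. The overall flow is the same, though: fingerprint the upload, look it up in a database of already-flagged material, and block on a hit.

```python
import hashlib

# Toy database of fingerprints for already-flagged media
flagged_hashes = set()

def fingerprint(media_bytes: bytes) -> str:
    # Real matching systems use a perceptual hash here, not a cryptographic one
    return hashlib.sha256(media_bytes).hexdigest()

def flag(media_bytes: bytes) -> None:
    """Add a confirmed violation to the shared database."""
    flagged_hashes.add(fingerprint(media_bytes))

def is_known_violation(media_bytes: bytes) -> bool:
    """Check an upload against the database of known violations."""
    return fingerprint(media_bytes) in flagged_hashes

flag(b"previously flagged image bytes")
print(is_known_violation(b"previously flagged image bytes"))  # True: copy of known content
print(is_known_violation(b"novel image bytes"))               # False: matching can't catch new content
```

The last line is the key limitation: matching technology can only remove copies of content someone has already flagged, which is why novel material falls to predictive systems or human moderators.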

Similar technology was later developed to identify content that might be connected to  suspected terrorist activity and collect it in a database run by the Global Internet Forum to Counter Terrorism (GIFCT), a collaboration of technology companies, government, civil society and academia. Following the 2019 Christchurch shooting livestreamed across Facebook, there was renewed government support for GIFCT, as well as growing pressure for more social platforms to join.

Yet even GIFCT’s story highlights huge governance, accountability and free speech challenges for platforms. For one, the technology struggles with interpreting context and intent, and with looser definitions about what constitutes terrorist content, there is more room for false positives and oppressive tactics. Governments can use “terrorist” as a label to quiet political opposition, and in particular, GIFCT’s focus on the Islamic State and Al-Qaeda puts content from Muslims and Arabs at greater risk of over-removal.

Data from GIFCT’s recent transparency report also shows that 72 per cent of content in its database is “Glorification of Terrorist Acts,” a broad category that could include some forms of legal and legitimate speech. This underscores one of the biggest challenges for platforms looking to moderate audiovisual content at scale: Legal speech is a lot harder to moderate than illegal speech. Why? Because even though the boundaries of illegal speech are fuzzy, they are at least defined in law. 

However, speech being legal does not make it acceptable, and the boundaries of acceptable legal speech are bitterly contested. This poses an extra challenge for moderating audiovisual misinformation. Unlike child sexual abuse material, misinformation is not a crime (some countries have criminalized misinformation, but in most cases its definition remains vague). It is also extremely difficult to define, even if you try to set parameters around certain categories as defined by “authoritative” sources and match new content against an existing database, as photoDNA and GIFCT do.

For example, in October, YouTube said it would remove any content about Covid-19 vaccines that contradicts consensus from local health authorities or the World Health Organization. But as we’ve learned, identifying consensus in a rapidly developing public health crisis isn’t easy. The WHO at first said face masks did not protect the wearer, amid a global shortage of PPE, before reversing course much later on. 

Which is to say: Even though automated moderation has had mixed results with certain types of content, moderating at scale — particularly when it comes to visual disinformation — poses enormous challenges. Some of these are technological, such as how to accurately interpret context and intent. And some of these are part of a wider discussion about governance, such as whether platforms should be moderating legal speech and in what circumstances.

To be sure, this is not an argument for harsher content moderation or longer working hours for human content moderators (who are mostly contractors, and who already have to watch some of the most traumatizing content on the internet for less pay and fewer protections than full-time employees). What we do need is more research about visual disinformation and the platforms where it spreads. But first we’ll need more answers from the platforms about how they moderate such content.

Combating misinformation in under-resourced languages: lessons from around the world

2020 has taught us that virtually all communities struggle with the effects of misinformation. While those that speak majority languages have access to fact checks and verified information in their native tongues, the same is not always true of other communities. Minority languages are often under-resourced by both platforms and fact checking organizations, making it even more difficult to tackle misinformation and build up media literacy in these communities.

As part of its Rising Voices initiative, Global Voices’ Eddie Avila and First Draft’s Marie Bohner spoke to experts from around the world to see how local communities are tackling the threat of misinformation in their native languages. Among them are Rahul Namboori of Fact Crescendo in India, Endalkachew Chala of Hamline University from Ethiopia and Kpenahi Traoré of RFI from Burkina Faso, who shared their perspectives in tackling this unique challenge.


India is home to 22 official languages and at least 500 unofficial ones, making it especially difficult for fact checkers, journalists and educators to ensure everyone has access to verified information. The spread of false information has had especially dangerous repercussions in the country, as rumors have incited attacks and riots against ethnic and religious minorities, leaving dozens dead in the past few years.

It is in this context that Fact Crescendo is fighting false information, Namboori explained. The organization verifies information in seven regional languages, in addition to English and Hindi, with the help of localized teams to both find rumors and prevent their spread. Fact Crescendo’s regional teams are made up of local journalists who speak the local language and understand the cultural and political environments in which they operate. Using tools such as Facebook’s CrowdTangle, the teams monitor hundreds of Groups and social media accounts to track misleading and false information. Fact Crescendo also uses WhatsApp tip lines and groups so their fact checkers can communicate directly with local communities and provide them with verified information in their own languages.

It is not just about how misinformation travels between major languages spoken in India, such as English or Hindi, but also how misinformation from abroad makes its way into the country. False claims about the coronavirus from Italy or Spain jumped from Spanish and Italian into English and Hindi before making their way into regional languages, Namboori said. A layer of localized context is added with each jump, making the misinformation that much more believable and allowing it to echo in linguistic communities with little or no access to reliable information.

Ethiopia is similarly diverse when it comes to languages, with three major ones and 86 others spoken in the east African country. “Most of these languages are under-resourced, and there is no fact checking in these languages,” said Endalkachew Chala. Even though several of these languages are heavily used on social media platforms, their speakers do not have easy access to verified or reliable information.

Recently, internet shutdowns in the northern Tigray region due to the conflict in the country have exacerbated the issue of under-resourced languages. It has led to the creation of “two universes of information, where people living in the Tigray region do not know what is going on,” said Chala. The people there only have access to information broadcast by regional media, which has led to a “disjointed” understanding of the situation as those within Tigray have a different perspective to those outside, Chala added.

Chala has also seen health-related misinformation proliferate across Ethiopia. False claims about the Covid-19 pandemic were rife in Ethiopia in several languages, spreading both within and across linguistic communities. The lack of fact checking organizations or access to reliable information in under-resourced languages allowed these rumors to permeate into smaller rural communities and linguistic minorities.

“There are people from across Ethiopia who are willing to verify and fact check,” said Chala, but foreign funding almost always goes to those with political connections or to speakers of dominant languages in urban centers. He added that platforms and foreign companies need to hire native speakers of minority languages to help verify information for these communities, so as not to rely on outsiders.

Elsewhere in Africa, speakers of the minority Bambara language in the western part of the continent are also dealing with similar resourcing issues. Although members of the Bambara communities receive their news from local media on television and radio and may therefore avoid the tsunami of online misinformation, they are also entirely dependent on traditional media outlets, said Kpenahi Traoré. The responsibility of these outlets to report factual information is therefore even greater because Bambara communities often do not have access to the internet to fact check what they hear. Although local organizations do not currently have the resources to transmit factual information to their communities, there are foreign organizations including RFI Mandenkan that fact check information in Bambara and other local languages, said Traoré.

In the Amazon, where indigenous communities predominantly communicate orally, it has been key that resources — particularly those around the coronavirus — were similarly “localized,” said Avila of Global Voices. “Who or what may be trusted in one culture may not be the same in another culture, and so keeping that in consideration is really important in the kinds of things we’re seeing online.”

While individuals and certain organizations are making an effort to provide speakers of under-resourced languages with factual information, linguistic minority populations are still at a distinct disadvantage to those who speak dominant languages. Even as the world has become more globalized through the internet, those with less connectivity or who communicate differently are left behind. Media outlets and social media platforms can invest in these local communities to build media literacy, so that they have the same access to verified information in their languages as everyone else.

And yet, these globe-spanning examples also underscore the need to address broader questions like literacy rates or internet connectivity when designing solutions to counter the problem of misinformation in minority languages. Understanding how linguistic communities communicate is key to building the required infrastructure to improve media literacy.

Correction: An earlier version of this article said Hamline University was in Ethiopia, rather than the US.

Failure to understand Black and Latinx communities will result in a critical misunderstanding of the impact of disinformation

In the months leading up to the presidential election, reporting detailing the importance of the Black and Latinx vote proliferated across the media. As uncertainty increased about which election-related issues were most important to communities of color, many journalists looked to researchers for insights. But as researchers from Black and Latinx communities, we found a disconnect: the questions we fielded suggested that many newsrooms were commissioning stories based on preconceived notions of what they assumed were important issues, rather than attempting to learn what was happening on the ground. Appreciating the nuanced characteristics of these communities is critical for anyone who studies, funds or works on information disorder.

That most newsrooms do not represent the audiences they are meant to serve is well known. As awareness grows about the impact of disinformation on communities of color, the absence of reporters, social media staff and funders from those same communities is becoming an increasingly urgent issue.  

This year, First Draft partnered with the Disinformation Defense League (DDL), a collective established by the Media Democracy Fund that is composed of over 200 grassroots organizations across the US tackling disinformation and voter suppression campaigns targeting Black, Afro-Latinx, and Latinx communities. Through this partnership, we gained perspective on the gap between the organizations that work closely with the communities they serve and large newsrooms that too often miss the most important issues within communities of color and, as a result, fail to fully understand racialized information contexts.

Inquiries First Draft received from newsrooms showed an obsession with the impact of a small number of issues, such as birtherism claims about Kamala Harris and niche voting demographics in South Florida, rather than attempts to understand the much more fundamental impact of the pandemic and of systematic voter suppression campaigns against Black and Latinx communities. As we reflect on 2020, here are some key lessons.

The terms Black and Latinx are not mutually exclusive, and the community as a whole is not homogeneous.

The term Latinx is not a race or an ethnicity, but a geographical descriptor rooted in the colonial ways people talk about Latin America, coined to differentiate “Latin” peoples from the “Anglo-Saxon” peoples of the Americas. According to the Pew Research Center, Afro-Latino is a term deeply rooted in the identity of Hispanics in the US, and understanding that one can identify as both is crucial. As Meghna Mahadevan, Chief Disinformation Defense Strategist of United We Dream, explains: “The Latinx community is not a monolith. The Afro-Latinx experience is different from the Latinx, and is different from the Black experience. However, both communities share some negative experiences, like police brutality.”

Anti-Black sentiment exists within and across Black and Latinx spaces

Anti-Blackness is global: Latinxs, whether they reside in Latin America or the United States, are not exempt from spreading racist and anti-Black sentiment online. This is something that often goes unexamined. Within many Latinx spaces online, social media posts displaying anti-Black sentiment were prominent this election cycle, serving as a vector of misinformation against two social justice causes: Black Lives Matter and the movement’s call to defund the police. The Black Lives Matter (BLM) movement is a grassroots organization made up of independently led chapters; with its leadership decentralized, data deficits about the movement appeared on a national stage. Misinformation associating BLM with extreme ideologies through members’ relationships with foreign leaders resonated in Latinx communities. For instance, in South Florida, dated and out-of-context photos showing BLM co-founder Opal Tometi with Venezuelan president Nicolás Maduro were consistently shared by local influencers such as Miami-based Venezuelan journalist Carla Angola.

The flow of information isn’t straightforward

There were too many assumptions from newsrooms that misinformation was flowing from Latin America to the US, when the flow was often quite the opposite. Much of it originated in the US and then spread across borders. Known misinformation agents translated articles, videos and memes from English to Spanish, sharing them on multiple platforms, such as WhatsApp, Facebook Groups, Pages and distribution lists. For example, Estamos Unidos, a Spanish-language conservative Facebook Page managed in the US but with a large audience in Latin American countries, often used The Epoch Times, a newspaper known for its promotion of right-wing conspiracy theories, as a source, translating articles pushing unproven claims of voter fraud.

A common narrative circulating in Latinx spaces online connected socialism to the Democratic Party, exacerbated by ads from Donald Trump’s re-election campaign that were then reposted by content creators such as PR Conservative and The Hispanic Conservative. For many immigrants, correlating socialism with the policies of a particular candidate played on the negative experiences of families who emigrated from countries with leftist authoritarian regimes, such as Cuba and Venezuela, or fled regions controlled by organizations such as FARC in Colombia and Sendero Luminoso (Shining Path) in Peru. The messages were originally created for target audiences in Florida, where Latinxs make up 17 per cent of the voting population. However, these posts gained traction in different online communities and were then amplified into a nationwide misinformation trend.

Covid-19 is critical to understanding what is happening in Black and Latinx communities

Black and Latinx communities were hit the hardest by Covid-19, with a death rate almost three times higher than that of white Americans. The day-to-day impact of the virus on communities of color has not received significant media coverage, but organizers on the ground understood the realities. As Candice Fortin, Senior Regional Field Manager at Color Of Change, explained: “[T]here were so many folks who when we reached out via text or phone call would name that they knew someone who was sick, that they themselves may be sick or someone at their work, in their neighborhood or church was sick [so it was very hard to get people to vote].”

The pandemic also had devastating effects on the economy. According to an analysis by the Pew Research Center, the economy, health care and the pandemic were the top election issues for Latinx voters. The Brookings Institution also warned of the harsh economic situation of Latinos, catalyzed by the pandemic. According to the study, one-third of family businesses closed or experienced significant declines in revenue, three out of ten Hispanic families included someone who lost their job, and 40 per cent of households were having trouble paying their rent or mortgage. The impact of this reality on the election was rarely covered.

The importance of recognizing the impact of decades of targeted voter suppression tactics

A primary concern of Black and Latinx voters was the active attempt by President Donald Trump to undermine the mail-in voting process. These targeted messages were a constant drumbeat of the campaign, including the promotion of unsubstantiated claims that non-citizens were taking advantage of the postal voting system to cast ballots. In the Latinx community, there was a general feeling of distrust toward the Postal Service and its workers, evidenced in cases with misleading and sensationalist headlines in small publications such as Cuba en Miami, and in regular posts by well-known disinformation agents, such as the libertarian website PanAm Post.

The president’s messages were coupled with the fact that ballots of Black and Latinx voters had been disproportionately rejected leading up to the election. Fortin from Color Of Change confirmed that “misinformation about people not getting their mail-in ballots was a very common thing,” and First Draft’s research surfaced numerous examples of misinformation about how to request, properly fill out or “cure” a ballot. As methods for voting by mail varied from state to state, some found it difficult to obtain a definitive answer in the weeks leading up to the election.

Preparing for 2021

As we move away from the election and toward the widespread availability of a coronavirus vaccine in the US, data deficits will likely increase, and with them, vaccine hesitancy. While vaccine misinformation will be a barrier to trust, it alone does not explain why certain communities of color will be more likely to resist vaccination. Considering the historical legacies of the Tuskegee experiments, Henrietta Lacks, and the recent allegations of forced sterilization and hysterectomies of undocumented women, it is crucial that those who report on Black and Latinx communities understand this legacy.

Racialized information contexts require specific strategies. Collectives such as the Disinformation Defense League facilitate partnerships among organizations such as United We Dream, campaigners like Win Black/Pa’lante, and others working to combat mis- and disinformation targeting Black and Latinx communities.

Problematic coverage of the protests following George Floyd’s murder showed the urgent need for newsrooms that are more inclusive. But this isn’t enough. Newsrooms need to spend more time speaking with people on the ground, such as organizers, grassroots organizations, activists and educators, who can assist in shaping reporting through insights that can only be found by those with expertise. 

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

The 2020 rabbit hole: Why conspiracy theories draw people in

Conspiracy theories have simmered on the fringes of society for years. But in 2020 they found new audiences: Celebrity chef Pete Evans welcomed foodies to vaccine conspiracy theories in Australia, while UK health care workers started anti-vaccine Facebook Groups where some falsely claimed the Covid-19 vaccine was “poison.” Anti-5G figures including British conspiracy theorist David Icke became household names as people vandalized phone masts across Europe; yoga influencers and suburban women in the US adopted QAnon beliefs and wellness bloggers made them pretty in pastels.

The pandemic created the ideal framework for mass distrust of institutions, thrusting these years-old, baseless beliefs into the mainstream. According to studies and surveys, a significant number of people globally found a tidy home in conspiracy theories promising to connect the dots between 2020’s turbulent events and their lives.

How did people become exposed?

The channels for fringe theories and misinformation were already in action on social media as millions entered lockdown measures at the start of 2020. But the nature of the digital communication landscape, evolving strategies from conspiracy theory supporters, and the additional time spent online while in lockdown helped content challenging evidence-based views reach ordinary people on a whole new level. 

With a simple click, friends could invite each other to join any of a growing number of online spaces opposing government restrictions, such as the Yellow Vest protest groups in France and Ireland, and anti-lockdown group Stand Up X in the UK. There, people were eventually exposed to further unfounded (and sometimes increasingly extreme) ideas, such as when Stand Up X partnered with conspiracy theory outlet Eyes Wide Open UK to organize nationwide rallies promoting QAnon-linked beliefs. Some spaces grew exponentially: Facebook Group “Stop 5G UK,” created in 2017, gained more than 28,000 members between March and April, taking its membership to more than 56,600 before it was banned by the platform.

As platforms began to take action against some of these groups, conspiracy theory communities adapted to bans by creating channels on alternative networks, such as Telegram and Parler, advertising such spaces to users ahead of anticipated deletions. The network of conspiracy theories coalescing under the QAnon banner proved particularly skilled at evading moderation while recruiting members, infiltrating established anti-trafficking groups, adopting the agreeable slogan “save our children” and taking QAnon offline through rallies across the globe. Other conspiracy theorists sidestepped moderation by publishing newspapers and leaflets, reaching new audiences on a physical, local level.

Why are people susceptible?

Even before the pandemic hit, researchers had found that situations of anxiety, uncertainty and loss of control make people more susceptible to believing conspiracy theories — they provide answers and relief, while those usually drawn to them tend to be relatively untrusting and concerned for their personal safety. The threat of a deadly disease, fast-changing science and decreased mobility, however, made these attitudes more widespread, with more people probing for explanations as to why this was happening. As governments fumbled their coronavirus responses and trust in them diminished, conspiratorial outlets promising neatly packed explanations also became more enticing.

Searching for answers in times of great confusion and grief can send people down dark rabbit holes, as experienced by Dannagal Young, a social psychologist who turned to conspiracy theories after her husband’s terminal diagnosis. “These feelings of collective uncertainty, powerlessness, and negativity likely account for the popularity of Covid-related conspiracy theories circulating online,” she wrote. 

Within ambiguous and terrible events, conspiracy theories increase perceptions of control by providing a channel for anger or fear, said Young. Psychologists have found that feelings of anger are likely to be followed by confidence and urges to take action. In the pandemic, creating an anti-mask Facebook Group, taking to the streets to protest or attacking phone masts may have been affirming reactions.

Increased social isolation, alienation and loneliness have also been key factors; several studies suggest links among conspiracy theories, social exclusion and ostracism. More time spent on social media globally meant internet users were more likely to find and fall into digital rabbit holes. One demonstrator at a QAnon rally in Minnesota told CNN reporter Donie O’Sullivan that she had more time to be on social media and “research” QAnon during lockdown — leaving her an ardent believer in the conspiracy theory.

Those who have suffered the worst financial losses during the pandemic may be more vulnerable — a University of Kent study suggested disadvantaged populations are more likely to find conspiracy beliefs appealing.

What next?

Conspiracy theories are often partisan, feeding from and into people’s political leanings. But they can also shape people’s outlooks and affect relationships. Reporting throughout the pandemic has shown how these theories have divided families. BBC disinformation reporter Marianna Spring documented how conspiracy theories broke up Sebastian Shemirani’s relationship with his mother, Kate Shemirani, a prominent British conspiracy theorist. After she skyrocketed to notoriety on the coattails of the pandemic, Sebastian said, his mother was “too far gone” for their relationship to be repaired.

It’s no coincidence that the conspiracy theories reaching new heights during the pandemic contain common themes around institutions and elites. They are built on similar grounds, with plenty encompassing old, antisemitic tropes.

Conspiracy theories are no longer neatly siloed ideas, as multiple theories coalesce into one worldview. What has developed is something more akin to a conspiracy theory mindset, in which people choose from a buffet of false ideas. Anyone, from a nurse to your hairdresser, is vulnerable.

With Covid-19 vaccines, potential mutations of the coronavirus, further economic fragility and US president-elect Joe Biden’s inauguration ahead, 2021 will provide more fodder for the networks of conspiracy theories that have expanded and merged this year. Journalists and researchers need to fill data deficits faster and consider how they can better connect factual information with audiences. Platforms should refine their approaches to countering the evolving tactics of conspiracy theorists — banning harmful groups long after they’ve already proliferated is not enough. Conspiracy theories should no longer be viewed as isolated strands — they need to be treated as an interconnected web of ideas. Otherwise, they will continue to promise a tempting nest full of answers for those asking questions.

The return to old-school methods to sow chaos

As millions of people around the world were under lockdown this year, social media became a lifeline for many. While researchers and journalists were focused on mis- and disinformation flourishing on the main social media platforms, information disruptors returned to old-school methods of sowing chaos and confusion through leaflets, billboards, emails, SMS and robocalls. 

The pandemic became an opportunity for the dissemination of Covid-19 hoaxes and conspiracy theories through letterboxes straight into people’s homes. A leaflet sent out in the UK claimed that the government, the media and National Health Service representatives were attempting to “create the illusion of an unprecedented deadly pandemic” to justify “extreme lockdown measures.” People living near the Canberra Hospital in Australia received a flyer alleging that Covid-19 was being spread by the government through the water supply, and that a vaccination would contain a tracking device. Misleading claims about the virus were also printed on billboards and posters: An Indian example promoted essential oils to protect people from Covid-19. Two US billboards bore the message that “It’s NOT about a ‘VIRUS’! It’s about CONTROL” alongside an image of a crash-test dummy wearing a mask.

A billboard in India promoting essential oils as protection against Covid-19. Photo by Ali Abbas Ahmadi

People received emails from fraudsters pretending to be with the Ministry of Health in Colombia, alleging they had to have a mandatory Covid-19 test. Similar attempts to gain access to personal information were conducted over text messages and phone calls, such as in South Korea, which saw a rise in “smishing,” scam text messages that spread false information about Covid-19 cures and offered free masks in exchange for personal information. 

The US presidential election was a greatest-hits compilation of the old-school genre, with unsolicited, misinformation-filled newspapers such as The Epoch Times sent to households across the country, unofficial “ballot boxes” erected on sidewalks, and robocalls telling people to “stay home, stay safe” on Election Day that reached millions. 

As the social media platforms become more active in tackling false claims around politics and health, disinformation agents are searching for new ways to spread their messages.

Darren Linvill, an associate communications professor at Clemson University, told First Draft: “If you want to spread disinformation, you don’t go where everybody is watching. You go somewhere where nobody is looking.” Online and offline channels are not mutually exclusive to disinformation actors, who often use multiple platforms to spread untruths, Linvill said. “We frequently saw content from text messages that were screen-grabbed and shared on social media.” 

For purveyors of disinformation, one advantage of offline distribution is that provenance can be obscured — physical copies don’t leave digital traces that could point people to the source. Amid worldwide protests against systemic racism this year, misleading flyers designed to undermine Black Lives Matter were circulated in the US and the UK. In both cases, it was unclear who created the leaflets. As Full Fact noted, “almost anybody can make a sticker that looks like an official one, whether they may support or oppose the goals of the group in question.” 

And weeks before the US election, suspicious flyers threatening Trump supporters were sent to residents in New Hampshire. Photos of these flyers, whose origin and authenticity were unknown at the time, were uploaded and amplified by social media influencers and partisan groups. Some social media posts with high engagement falsely claimed that residents in Kansas had received the same letter. Kansas City police investigated the rumor and reported that no resident had received the message on paper; it had appeared only on social media.

As mis- and disinformation researchers know, leaflets, billboards, emails, SMS and robocalls present logistical challenges for monitoring, as it is impossible to be everywhere at once. Unless these messages are flagged by the recipient, they can remain under the radar. That makes it challenging to determine how far these hoaxes are spreading and — if the authors choose to remain anonymous — who is behind them.

ProPublica senior technology journalist Jack Gillum reported on the impact of a robocall operation that reached at least 800,000 residents in key states and may have affected voter turnout in the 2020 US election. “When it comes to robocalls, getting data for that is really difficult,” he said. “I didn’t know what data is easily available and that we can confirm that sort of stuff, so basically I had to rely on US government sourcing.”

Perhaps the most high-profile example in 2020 was a fraudulent email targeting Democratic voters, sent before the presidential election. It prompted a press conference hosted by the FBI and the nation’s director of national intelligence. Experts say it was the work of Iranian hackers posing as the far-right Proud Boys group. Evie Sorrell, an undergraduate student at the University of Pennsylvania who lives in Philadelphia, was among those who received a threatening email telling her to vote for Trump.

“When I first got the email, I was like, ‘Huh, that’s really weird. Also pretty illegal,’” Sorrell said. “And then I realized that if it was real, then they might have information on me.”

Of the episode, including finding out that Iran might have been behind it, she said she felt violated. She speculated that her knowledge of internet culture and digital literacy skills might have put her at an advantage: “You could definitely be swayed to at the least not vote, or take it very seriously and vote for Trump because you’re worried for your life and safety.”

As Sorrell’s experience shows, many of these messages can feel uncomfortably intimate to the recipient, as they were sent directly to homes or mobile phone numbers. Linvill says, “They have the potential to be more persuasive, simply because they’re more personal. Because they’re sent to you directly, as opposed to messages on social media that you scroll down through and it’s one message in a list of messages.”

There are laws regulating false advertising and broadcasting materials, but these vary from country to country, as does enforcement. In January, the US government took additional steps to limit the scourge of illegal robocalls, putting the onus on phone service providers instead of consumers. But days before the US election, voters were still flooded with text messages containing damaging disinformation narratives, as The Washington Post reports. Peer-to-peer texting platforms used during elections are not as clearly covered by the anti-robocall rules, as the companies contend they are not an automated service. 

As we look to 2021, it’s important to remember that misinformation is not just happening on the major social platforms. Journalists and researchers will need to devise ways to understand the complexities of scope and impact, beyond just hoping concerned citizens will report problematic emails and phone calls.

It’s crucial to understand how misinformation flows through diaspora communities

The 2020 news cycle provided an object lesson in how multilingual misinformation can travel at high velocity across social platforms and geographical borders, often iterating faster than platforms and fact checkers can correct it. For those who monitor the online information ecosystems of diaspora communities, 2020 allowed us to further our understanding of common tropes, tactics and narratives that take root. But part of what we’ve also learned this year is how much we still don’t know. The way misinformation flows across these networked audiences, effective interventions within closed messaging apps, the correlation between imbalanced media coverage and community members’ reliance on alternative news sources — these are all subjects that deserve more attention from platforms, researchers, journalists and other information providers in 2021.

The Chinese diaspora is a useful case study. The overseas Chinese consume, discuss and share news in ways that often bypass traditional media gatekeepers. In addition to established media sources and the overseas Chinese press, news is obtained on platforms such as Facebook, Twitter and YouTube. Weibo, a Chinese-language microblogging platform akin to Twitter, is also used by some with ties to mainland China to traverse the country’s Great Firewall and stay connected with friends and family.

Additionally, millions of overseas Chinese use the messaging features in WhatsApp and WeChat (the latter has a social networking and a group chat component) to share personal updates and news with loved ones.

These platforms and messaging apps are vectors for cross-border disinformation and distortion. “Mainstream” social media such as Twitter and Facebook offer greater transparency and perhaps better chances that users will encounter corrective information than they would in closed messaging apps. But their accessibility can be manipulated by those targeting Chinese-speaking audiences. For example, researchers have identified alleged China-backed information operations that targeted the 2019 Hong Kong protests and amplified coronavirus conspiracies on platforms such as Twitter.

Opponents of the Chinese Communist Party (CCP) use these platforms to spread Chinese-language disinformation as well. Our monitoring this year showed that some in the Chinese diaspora who harbor more extreme political views toward the CCP collaborated with American right-wing figures to spread misinformation about the pandemic and the US election on Twitter, Facebook and YouTube. They also reached people via offline pamphlets.

Meanwhile, China-based platforms such as WeChat and Weibo are heavily regulated by the Chinese government; for instance, messages with flagged keywords are blocked on WeChat. Looking at the availability of coronavirus information on WeChat and YY (a live streaming platform) at the start of the pandemic, Toronto-based online watchdog Citizen Lab found the scope of censorship “may restrict vital communication related to disease information and prevention.”

Misinformation circulating in WeChat and WhatsApp can be pernicious in additional ways: In group chats, existing trust among participants could lead users to process the information with less scrutiny. The closed or semi-closed nature of these spaces makes it difficult for journalists and researchers to obtain a complete picture of the volume and flow of misinformation. Crucially, the labels applied to misinformation in “open” spaces such as Twitter and Facebook do not travel with the false or misleading posts when they are shared on other platforms. Rather, inaccurate information circulates unchecked across the diaspora once it leaves the platforms where the contextual warnings were applied.

Injecting fact checks into closed and semi-closed spaces appears to be one obvious way to rectify dubious claims, but those who specialize in countering Chinese-language disinformation in WhatsApp and WeChat face immense challenges. WeChat “is already inherently fertile ground for misinformation, because unlike other messaging apps and social media, the platform hosts a vast number of native content publishers vying for attention,” noted Chi Zhang, a former researcher for Columbia University’s Tow Center for Digital Journalism, in 2018. The disinformation flow appears nearly unstoppable when considering its volume against the handful of WeChat-based Chinese-language fact checkers, such as 反海外谣言中心 (Centre Against Overseas Rumours) and 反吃瓜联盟 (No Melon Group) — the former consists of a team of 21 people (according to an introductory message the account sends to new followers), compared to the app’s over 1.2 billion users worldwide.

Understanding and countering misinformation in diaspora communities is complex and can’t be accomplished by fact checkers alone. Here are our recommendations for platforms, researchers, media outlets and other information providers:

Platforms must take into account that misinformation can proliferate beyond their perimeters, and ensure that interventions such as contextual labels travel with a post when it is shared off-platform. Misinformation policies should be applied consistently across languages, and platforms that offer labels should ensure their availability in languages other than English. Data about the cross-platform flow of labeled content should be shared with misinformation researchers and journalists.

For researchers, more study is needed on the development of new information ecosystems as immigrants move across geographical borders, the importance of alternative news sources in various diaspora communities, the impact of misinformation on these communities, and effective ways of correcting or prebunking misinformation in different communities.

News outlets must approach reporting on diaspora communities (and the misinformation to which they are vulnerable) with nuance and empathy. Mentions of geopolitics may be relevant in coverage of some diasporas, but placing too much emphasis on “rifts,” or amplifying an “us versus them” narrative, might over time erode these communities’ trust in traditional media sources.

In the case of the overseas Chinese, the pandemic and China’s strained relations with Western countries this year had already fueled emotionally charged public discourse. Chinese American and Chinese Canadian communities were stung by pandemic-related racism. Rapidly deteriorating Australian-Chinese relations left Chinese Australians and New Zealanders vulnerable to racist attacks, with some branded potential spies and others publicly questioned about their loyalties. On top of all this came imbalanced media coverage buoying a dichotomy of “China” versus “the West,” which could prompt members of the diaspora to retreat further into alternative news sources and closed platforms that act as “echo chambers.”

Additionally, outlets should be aware that journalists covering China-related propaganda or disinformation could come under online attack. One Chinese Australian journalist’s work for ABC Australia about a controversial skit on Chinese history drew horrific trolling (content warning: graphic and abusive language) from pro-CCP Twitter users, while the Australian public broadcaster was hit with accusations of publishing “CCP propaganda” when it reported on the highly organized anti-CCP Himalaya movement.

Finally, information providers seeking to disseminate quality content in diaspora communities should work with trusted community leaders to ensure that accurate information, couched in appropriate cultural context, is effectively communicated in the online spaces where the communities congregate.

Damage done in one community’s information ecosystem can bleed outwards into the broader ecosystem. Our work on the Chinese diaspora — as well as the experiences of those monitoring Latin American and African communities online — has shown that we need a better understanding of the ways harmful information bypasses traditional gatekeepers and gains footholds across borders, and more robust ways of preventing and mitigating this damage.

We need independent platform oversight in 2021

Thanks to urging from stakeholders, social media platforms took visible steps to curb mis- and disinformation in 2020. Despite an array of alerts, banners, labels and information hubs, misinformation continued to poison online communities. Voter fraud myths proliferated across the globe, medical professionals reported the deaths of Covid-19 patients who believed the virus was not real, and now misinformation may hinder the ability to administer a vaccine to mitigate the pandemic. 

Google, Facebook and Twitter now all have their own transparency websites, and publish reports on the state of misinformation on their platforms. We know more about the decisions being made around content moderation than ever before. However, because these reports are published by the platforms with no independent oversight, we’re left having to trust their conclusions. The reports from Twitter, Facebook and YouTube after the US election were full of figures, but if those reports had been published by an independent agency, they would have felt more like actual transparency reports than opportunities for PR. 

The policies social media platforms have enacted have done little to stem the tide of misinformation. It’s time for these companies to look outside for help. Platforms need to commit to working with local stakeholders to create policies based on subject matter and cultural expertise. They also need to institute regular independent audits to ensure these policies are working, and give auditors the autonomy to determine what data points they require. Law Professor Nicolas Suzor from Australia’s QUT Digital Media Research Centre said the platforms are not in the best position to choose the research or the questions that need to be asked: “I think it’s dangerous to expect platforms to do this by themselves and I think it’s arrogant of them to expect that they can figure this out. This is a social conversation and a democratic conversation that can only be solved as a first step with more research that understands what’s going wrong.” (Suzor is a member of the Facebook Oversight Board, but stressed that he was speaking in his personal capacity, and not as a representative of Facebook or its Oversight Board.)

Even before 2020, the dangerous offline effects of mis- and disinformation had been well-documented — from lynchings in India to children in the US dying of preventable causes because their parents believed misinformation — but it took a pandemic to spur platforms to make their most far-reaching changes yet. Beginning in early March, the platforms took a series of promising steps to address misinformation. Twitter expanded its content labeling policy, and both Apple and Google culled apps with questionable coronavirus information. Some changes were meant to empower users with information, like Facebook’s Coronavirus Information Center or YouTube’s info panels, both of which direct users to credible health organizations. Other actions promoted digital media literacy, like Facebook’s investment in initiatives aimed at educating the public. These moves seemed to signal that the platforms were stepping up their efforts to root out misinformation.

But while the platforms received much public praise for their actions, the implementation of these decisions wasn’t always consistent. Content moderation, fact checking and labeling efforts were applied haphazardly to different communities in the US ahead of Election Day. Policies varied from country to country, without good reason: Tweets from US President Donald Trump promoting hydroxychloroquine were left on the platform, even though similar claims posted a week later by Brazilian President Jair Bolsonaro were removed. Voter fraud claims that warranted labels in the United States were not labeled in the Bihar election in India. Twitter and Facebook announced mid-year they would label state-affiliated media, but users soon found that Twitter was selectively labeling some countries’ state-run outlets while ignoring others, leading to charges of bias. 

Nor were the platforms’ measures always effective. Some researchers have argued Twitter’s corrective interventions may not be a practical way to address misinformation. Those intent on spreading false narratives found simple ways to evade moderation policies. For example, screenshots were used to share content previously removed by Facebook and Twitter. First Draft regularly identified ads and social media posts that violated platform policies. 

Some policies were controversial from the moment they were announced: After Facebook issued a ban on political ads through the Georgia runoff election in January, Georgia candidates on both sides of the aisle worried it would kneecap their ability to effectively campaign. 

Despite obvious challenges, social media companies could do more to improve the safety of their products, if only they would draw on stakeholders beyond their own paid employees. Creating policy in conjunction with relevant stakeholders, committing to regular independent audits and providing open access to data could make a measurable difference in reducing harm caused by misinformation. In Myanmar, where Facebook’s failures in 2018 contributed to a genocide of Rohingya Muslims, the platform worked with local fact-checking organizations to institute more rigorous hate-speech policies. Though dangerous misinformation persists in Myanmar, initial analysis indicates progress was made once Facebook began addressing content flagged by local misinformation monitors.

As in Myanmar, platform policies should be instituted in partnership with local experts and stakeholders, and tailored to make the most impact in the country where they’re implemented. Dr. Timothy Graham, a senior lecturer at QUT Digital Media Research Centre, told First Draft that platform responses should be “attuned to specific geographic, historical and cultural contexts.” Policies about vaccines should be written in consultation not only with doctors who understand the medicine behind the vaccine, but also the local public health organizations that understand how their community perceives vaccination. 

And these policies cannot go unexamined. To ensure they’re making a difference, there needs to be independent auditing. Facebook commissioned an independent audit of misinformation on its platform in 2018; the results, released in July, found that Facebook made decisions that “represent significant setbacks for civil rights.” There should be more of these audits, but regularly scheduled ones, not those convened at a time and frequency of the platforms’ choosing. And, as Graham explains, “platforms such as Facebook should not be the final arbiters of data access” — the auditors should determine what data they need to do their jobs effectively.

Finally, organizations should be able to use publicly available data to monitor dissemination of disinformation across platforms without being held in breach of terms of service agreements. Facebook is currently fighting a New York University political research project over its ability to do just that. Allowing open access to data would enable a network of researchers to investigate how much misinformation is out there, where it’s coming from, and how to reduce it. 

Social media companies have in no way exhausted the list of potential ways to mitigate misinformation. If they work with researchers and other experts to craft policy, then allow those policies to be independently evaluated, we may finally see misinformation in online spaces begin to wither.

Anne Kruger contributed to this report.

As online communities mobilize offline, misinformation manifests a physical threat

On the evening of November 4, Jennifer Harrison and other members of the extremist Patriot Movement AZ were forced to leave the Maricopa County Elections Department in Phoenix, Arizona, where they had shown up unannounced. The small gathering in the parking lot of the elections office had quickly grown into a larger demonstration, whose participants — predominantly supporters of President Donald Trump — demanded that election workers “count the votes.” About 200 demonstrators, some armed with AR-15-style rifles, were present, prompting the deployment of sheriff’s officers in tactical gear. Most of the demonstrators were there for the same reason: A viral video had misled them into believing that poll workers were intentionally invalidating votes, supposedly by telling Trump voters to use a Sharpie to mark their ballots.

Although past research has struggled to gauge the real-world consequences of online mis- and disinformation, 2020 has exposed a gap in understanding of the ways online spaces are intrinsically connected to offline spaces, communications and behaviors.  

“SharpieGate,” which became part of Trump supporters’ playbook in Arizona, was far from the only conspiracy theory to inspire offline action from the broader “Stop the Steal” movement. A month after the initial protests at the Maricopa election building, a Dominion Voting Systems worker in Georgia became a target for allegations of electoral fraud in Gwinnett County. 

Ron Watkins, former administrator of the far-right messaging board 8kun and QAnon promoter, tweeted two videos purporting to show a Dominion worker manipulating voter data via a USB drive that he had plugged into a separate laptop. Dominion had become a flashpoint for accusations of fraud, with unevidenced claims that the company was involved in manipulating election results. The day after Watkins’s tweets went viral, election official Gabriel Sterling publicly denounced threats being made against the Dominion worker. 

“It has all gone too far,” Sterling said in a press conference that afternoon. The worker was identified through the videos shared by Watkins. Calls to hang the worker for treason began to spread on Twitter and on The Donald, a webpage originating from the disbanded subreddit of the same name. “Death threats, physical threats, intimidation — it’s too much,” said Sterling.

The targets of these misinformation campaigns are predominantly rank-and-file government workers and career officials. They are often women and people of color, such as the Georgia election workers, who are now facing further baseless allegations after being seen in a video posted by the Trump campaign. The video alleges that they deceptively counted ballots from suitcases after telling poll watchers to leave the working area. In fact, there was nothing abnormal about the process seen on video, and the alleged suitcases turned out to be ordinary ballot containers. 

Michigan Governor Gretchen Whitmer also nearly fell victim to the offline consequences of online information disorder. As misinformation about Covid-19 spread, Whitmer moved to implement some of the strictest stay-at-home orders in the country. She quickly became the target of violent threats from anti-lockdown communities on Facebook, spurred on by mis- and disinformation about the pandemic. This culminated in an attempted kidnapping plot against Whitmer and others by members of the Wolverine Watchmen militia, who had previously participated in anti-lockdown protests, a number of which had been organized on Facebook.

These types of harassment campaigns are the most recent, visible consequences of online conversations, but 2020 has provided a number of examples of the ways they are spilling into offline meetings, rallies and protests.

Anti-lockdown protests spread not only across the US, but in the UK and Europe, where they have resulted in arrests and have been a headache for governments concerned about civil unrest during a pandemic. In August, hundreds demonstrated on the streets of Madrid in protest of mandatory face masks and other restrictions. More and more, misinformation about vaccines has become a mainstay at these protests in the US, UK, Australia and beyond as Pfizer rolls out its Covid-19 vaccine and others report progress on the development front.

Covid-19 misinformation on social media also drove people to the streets across Latin America, where a robust campaign promoted the use of chlorine dioxide solution (CDS), a bleaching agent, as a cure for the virus. Misinformation caused so much turmoil that a sign promoting CDS at a mid-August protest against central government policies in Argentina drew media coverage amid two suspected poisonings and two deaths linked to the substance. Similar signs were seen at protests in other countries. 

“I treat myself using chlorine dioxide” became a commonly seen placard at protests in Ecuador and Mexico. A number of these protests were informed and organized by networks that rely on the use of online misinformation to spread their messages. Coalición Mundial Salud y Vida (Comusav) and Asociación Médicos por la Verdad banded together to organize a crowd of about 100 people in Mexico City.

In Peru, National Police forces used tear gas to clear protesters from the Campo de Marte park during an unauthorized pro-CDS march in Lima. The protest came after a congressional health commission withdrew a speaking invitation to Andreas Kalcker, a notable online proponent of CDS in Latin America. The day of the protests, a video of Kalcker surfaced online, in which he accused the Peruvian government and media of discrediting his work. The video was removed from YouTube but is still on Facebook. Protesters at Campo de Marte carried signs in support of Kalcker and Comusav.

Although not every piece of online misinformation sparks events offline, more work needs to be done to understand what pushes people to coalesce behind some videos, images or rumors and not others. Our understanding of what propels information disorder onto the streets is still limited, but 2020 has provided a number of examples that can help us dissect viral claims that have spurred real-world action. And without taking into account how enmeshed our everyday lives and our online experiences have become, we are poorly equipped to address many of the challenges posed by the disappearance of these boundaries.


Online influencers have become powerful vectors in promoting false information and conspiracy theories

Celebrities used to be singers, actors and athletes we got to know through a little black box called television. The free and limitless access of social media has since stripped away the distance between celebrities and fans, allowing social interaction at the touch of a button. With visually appealing content and carefully crafted captions, celebrities can attract a massive global following eager to know their every move and thought, whether they are established household names using new platforms to speak to fans (Madonna with her 15.6 million followers on Instagram) or budding superstars who shot to viral fame (Charli d’Amelio, a 16-year-old TikTok influencer who recently became the platform’s first creator to hit 100 million followers).

However, celebrities and influencers must be aware of their responsibility, given how quickly they can spread misinformation to their millions of fans. The damage can be exacerbated by media reports that repeat the misleading or false claims for clicks. Whether they spread misinformation intentionally or not, celebrities are complicit in information disorder.

This year has shown the role celebrities can play in amplifying misinformation. In January, singer Rihanna, who has close to 100 million followers, shared a misleading image of the Australian bushfires on Twitter. In April, actor Woody Harrelson shared the “negative effects of 5G” with his two million Instagram followers. And in July, rapper Kanye West told Forbes that he believed a coronavirus vaccine could “put chips inside of us.” West has more than four million followers on Instagram. The additional challenge is that oxygen can create more oxygen: a celebrity tweeting a rumor or falsehood can become a news story in itself, causing even more people to be exposed, as we saw when Harrelson and other celebrities’ posts about 5G tipped a fringe conspiracy theory into the mainstream. The groundless rumors about 5G resulted in real-world harm, with phone masts being burned to the ground in the UK.

This isn’t just a Western phenomenon. In India, Bollywood movie star Amitabh Bachchan has acquired a reputation for spreading false and misleading information online, such as claims that applauding could “destroy virus potency” and that homeopathy could “counter corona.” Both tweets were shared by hundreds of Bachchan’s 44.8 million followers. Meanwhile, in Australia, celebrity chef Pete Evans has promoted a plethora of conspiracy theories to his millions of followers, such as the “potential health risks” of 5G (similar false claims about 5G have also been shared across the world, such as in the UK and the Philippines) and an artist’s “map” that purportedly demonstrates the connections among various QAnon conspiracy theories. Multiple media outlets, including 7News and Daily Mail, repeated those baseless claims in their headlines, inadvertently reinforcing them. Only after Evans posted an image with a neo-Nazi symbol in November did he start to face repercussions, with brands and publishers cutting ties with him.

Fans’ strong emotional connection to their idols and heroes means they are often predisposed to believe them and trust their messages. This trust disarms fans in the face of mis- and disinformation spread by the celebrities and influencers they follow, nudging them to research and possibly repeat false narratives. For example, Amitabh Bachchan’s tweet from March falsely claiming that houseflies spread the coronavirus was followed by a spike in searches for “housefly” and “मक्खी” (housefly in Hindi) in India, according to Google Trends data.

Google Trends data reflects a rise in searches for the words “housefly” and “मक्खी” in India shortly after actor Amitabh Bachchan falsely claimed houseflies can spread the coronavirus. (Graphic by Ali Abbas Ahmadi)

Kate Starbird, a professor at the University of Washington, goes a step further, arguing that fans engage in “participatory disinformation”: they are “inspired” by influencers to voluntarily create and expand similar false narratives and conspiracy theories.

The role of emotion is often overlooked in media literacy campaigns even though it is a strong driver of shares on social media, and therefore a powerful force in disinformation campaigns. Being aware of one’s emotions and how they can be manipulated, as outlined in First Draft’s “psychology of misinformation” series, is one way fans can be better inoculated against misinformation. 

The imbalance of power between celebrities and influencers and their fans is further accentuated by social media companies’ sometimes confusing and often inconsistent policies related to misinformation. According to both Facebook and Twitter, a verified badge is granted to accounts that are notable and authentic, but there appear to be no consequences when authentic, verified accounts share lies and half-truths. In addition, politicians are exempted from Facebook’s Third-Party Fact-Checking Program, so any political misinformation they share usually remains online.

2020 has also seen the continued rise of “celebrity journalists,” i.e., news anchors and commentators who have built sizeable followings because they tend to sensationalize their reporting and do not appear to be bound by traditional journalistic principles of balanced, fair and factual reporting. For instance, Indian news anchor Arnab Goswami has acquired a reputation for sensationalist reporting and for promoting hateful information and conspiracy theories, such as falsely accusing members of a Muslim missionary group of spreading Covid-19. In Australia, commentators such as Sky News’ Peta Credlin and Alan Jones are often misconstrued as journalists, with their views mistaken for facts. Similarly, in Hong Kong, commentators such as Lee Yee and Chip Tsao, along with members of the pro-democracy camp, have voiced support for US President Donald Trump over his hardline approach to the Chinese Communist Party. Their opinion pieces are often taken as representing their publishers’ stance.

Celebrities and online personalities have long enjoyed an outsized influence on social media, and 2020 has shown they are a disproportionately important vector in the disinformation ecosystem. The size of their direct audience on the platforms, combined with the recognition their online activities receive from the mainstream media, can make them dangerous players when they share false or misleading information. In 2021, let’s avoid giving oxygen to inaccurate and false information and be mindful of the amount of airtime given to influencers and celebrities, especially those who are repeat offenders.

“Do No Harm” — Assessing the impact of prioritizing US political disinformation over health misinformation in 2020

Since 2016, the “field of misinformation” has been disproportionately focused on political disinformation, with emphasis on Facebook and Twitter. Globally, the larger threat has been health and science misinformation on a range of platforms. But the field’s focus was not determined by the communities most affected by misinformation, nor by the relative harm of different types of misinformation. Instead, it was set by US-based university researchers, media outlets, philanthropic institutions and Silicon Valley-based platforms, whose obsession with election-related disinformation directed the focus of misinformation initiatives, interventions and research projects over the past four years.

This prioritization by news and research organizations left the US and other countries ill-equipped for the pandemic: health authorities had to play catch-up on the challenges of misinformation, interventions focused disproportionately on slowing down misleading political speech, and journalists were unprepared to report on scientific research. To prepare for the growing levels of distrust in science and expertise, alongside the flood of actual misinformation we expect to see in 2021, researchers, technologists, journalists, philanthropists and policymakers must refocus their attention on health and science communication, most notably around medicine and climate.

Three recommendations should be considered. The first is the need to educate journalists about science and research so they are able to adequately question press releases from academics, researchers and pharmaceutical companies when necessary. The second is the need to educate science and health professionals about the current information ecosystem. In this fragmented, networked world, the caution and discipline that define scientific discovery are being weaponized by bad actors. The third is the critical need to raise awareness about the harm done to communities of color globally, and how that harm has created a deep distrust of medical professionals. The focus on misinformation should not obscure the urgent need to understand these dynamics.

1) The need to educate journalists about science and research

Preprints are scientific reports that are uploaded to public servers before the results have been vetted by other researchers, the process known as peer review. The purpose of preprints is to give researchers early visibility into new work, and to encourage others to try to replicate and build on the results. In 2020, these preprints reached a larger audience than usual because of Twitter bots such as bioRxiv and Promising Preprints, which automatically tweeted new publications, giving researchers and journalists immediate access to non-peer-reviewed Covid-19 studies. Unfortunately, these studies, often based on small sample sizes or very preliminary research, were covered by media outlets or reshared on social media without the necessary caveats, amplifying early findings as fact.

For 2021, journalists and communication professionals in the field of misinformation should ensure they include necessary disclaimers when reporting on non-peer-reviewed research, and more frequently consider whether reporting on such early research benefits the public. Similarly, platforms should train fact checkers and internal content moderation teams on how to respond to health and science information. Many of the fact checkers in Facebook’s Third-Party Fact-Checking Program are excellent at debunking political claims or viral misinformation. Few fact checkers have deep health and science expertise in-house, yet they are increasingly being asked to work in these fields.

2) The need to educate science and health communication professionals about the information ecosystem

The current information ecosystem is no longer structured in a linear fashion, dominated by gatekeepers using broadcast techniques to inform. Instead it is a fragmented network, where members of different communities use their own content distribution strategies and techniques for interacting and keeping one another informed. 

Scientists and health communication professionals have been in the spotlight this year, and we have to learn the lessons from the mistakes that have been made. These include the real-world impact of equivocation about the efficacy of masks or the dangers of airborne transmission. There is also the need to reflect on the impact of different language choices on different communities. We need to recognize the ways in which the complexity and nuance of scientific discovery lead to confusion, and often inspire people to seek out answers on the internet, leaving them vulnerable to conspiracy theories that provide simple, powerful explanations. We also need to communicate simply and increasingly visually, rather than via long blocks of text and PDFs.

(Infographic: “Flatten the curve” of the coronavirus disease 2019 epidemic.)

3) The need to raise awareness about the harm done to communities of color 

Explaining methodology and experimental limitations will not address institutional trust concerns ingrained in Black communities. Starting in the 1930s and concluding in 1972, the United States Public Health Service collaborated with the Tuskegee Institute, a historically Black college, to study syphilis in Black men. Those who participated were never informed of their diagnosis and did not receive the free healthcare they were promised. Additionally, doctors declined to treat the participants with penicillin, despite knowing it could cure the disease. 

Of the original 399 participants, 28 died of syphilis, 100 died of related complications, 40 of their wives became infected, and 19 of their children were born with congenital syphilis, creating generational harm. These concerns have spread to online spaces, where users fear that Black people will be used as “guinea pigs” when Covid-19 vaccinations arrive.  

Earlier this year, concerns were raised again over allegations of forced sterilization and hysterectomies of undocumented women in a for-profit Immigration and Customs Enforcement detention center, building off a long history of unwanted medical testing and eugenics programs. Injustices such as these can lead to increased distrust in both government and the health care system among Black, Latinx and Indigenous communities. Several science communication initiatives have focused on Covid-19 misinformation in 2020, but health professionals must begin 2021 by acknowledging, appreciating and discussing mistrust.

As research in West Africa later showed, efforts by the World Health Organization, the Red Cross and other global organizations to curb Ebola misinformation were ineffective when they failed to take “historical, political, economic, and social contexts” into account. Communication around health protocols such as hand washing did not result in behavioral changes because people did not view the action as a priority. Instead, they turned to trusted local sources, such as religious leaders, for direction. This occurred in the United States in the midst of the first wave of Covid-19 in the spring, when some pastors preached conspiracy theories to their congregations.

In 2018, the Ebola epidemic spread both disease and disinformation in the Democratic Republic of Congo. Citizens blamed foreigners and Western doctors for the spread of the virus, using social media platforms such as Facebook and WhatsApp. Many pushed back against safety precautions, with rumors leading to attacks on hospitals and health care workers.  

Researchers, journalists and policymakers must take into account cultural and religious tradition, apprehension toward the medical health industry and government, and the role trusted local leaders play when building effective science communication strategies. 

But…remember health and political speech are not mutually exclusive 

Back in March, as the social platforms took what looked like decisive action to tackle misinformation, partnering with the WHO, creating information hubs and cracking down on Covid-19-related conspiracies, many observers applauded. But it was clear that the platforms felt a sudden freedom to act around health and science misinformation. The WHO could be the arbiter of truth, unlike fact checkers trying their best to referee political speech. Health misinformation felt like an easier challenge to solve.

By April, the growth of “excessive quarantine” and anti-lockdown communities online demonstrated the naïveté of these conclusions. Health misinformation cannot be disentangled from political speech. 

2020 has taught us that we should be focused on the tactics, techniques and characteristics of rumors, falsehoods and conspiracy theories. The same tactics researchers were documenting around elections emerged this year with a vengeance in the context of health and science. So in 2021, let’s learn the lessons collectively, rather than letting political psychologists decide whether misinformation can sway elections, and separately, infodemic managers at public health bodies decide whether memes influence mask wearing.

Understanding how established misleading narratives and strategies can be modified and repurposed to drive politicized agendas can help clarify and focus research, language and sourcing around medicine and climate communication in the new year.


How we prepared future journalists for the challenges of online misinformation

Diara J. Townes is First Draft’s community engagement lead for the US bureau. She developed and guided the US 2020 Student Network from June to November, coordinating support, training and output between volunteers and the organization.

The coronavirus pandemic and the lockdowns that came with it disrupted life for nearly everyone, and student journalists were no exception. After campuses, offices and businesses began shutting down in March, students and recent graduates across the US began to receive word that the summer internships they were counting on would be canceled. 

Katrina Janco, a recent graduate of the University of Pennsylvania, applied for a summer internship in journalism. “I got a message a day or so later saying, ‘I’m sorry we aren’t taking anybody because of COVID,’” she said.  

Meanwhile, the infodemic — the overload of mis- and disinformation around the coronavirus pandemic — was challenging newsrooms across the country. A looming election threatened to compound the information disorder already faced by journalists. To meet these challenges, First Draft launched a program that would support both student journalists and newsrooms: the US 2020 Student Network.

The First Draft Student Network, with summer and fall sessions that ran between June and November, was an all-volunteer effort with a collaborative objective. More than 40 students — most working in the US but several of them based abroad — researched, tracked and verified online mis- and disinformation, supporting the research needs of First Draft’s community of local and national newsrooms and journalists.

“I think one of the most interesting things about working with First Draft is learning about how misinformation works and how misinformation spreads,” said Lauren Hakimi, a junior at Hunter College. “I feel like when I become a professional journalist and if I were to report on elections, then I’ll know how to do it because I had this experience at First Draft.”

To prepare students for the network, First Draft’s investigative research team trained them on critical topics, including tactics bad actors use to manipulate information, digital verification tools, how to assess a social media post’s spread and virality, and the role of media in the amplification of unconfirmed rumors. The training sessions were recorded to make the learning experience accessible to all who participated. 

Volunteers kept track of their research in personalized documents and shared their insights in Slack and on Junkipedia, a growing misinformation database from the Algorithmic Transparency Institute at the National Conference on Citizenship. Their research tasks, which we created and assigned alongside First Draft’s community engagement intern Isabelle Perry, evolved as newsroom needs changed, a frequent occurrence in 2020. Perry checked in on the students’ work every week, noting which assignments piqued student interest and which standout posts were shared into First Draft’s wider Slack community.

At the outset of the project, Megan Fletcher from The University of Texas at Austin uncovered a tweet that said the US Postal Service would not deliver ballots to election officials unless they were sent in envelopes with two stamps. The user captioned the post with, “If you are voting by absentee ballot, you need two forever stamps! The envelope does not make that clear. Remember, two stamps!!” Fletcher’s find was an early example of a prominent post illustrating confusion around mail-in voting, alerting First Draft’s research team to a nascent potential misinformation narrative. As we saw leading up to — and after — the election, the record number of mail-in ballots cast by Americans went on to become the subject of a staggering degree of misinformation.  

Misinformation was also driven by fears of political violence months before the election. Ahead of a rally by President Donald Trump in Tulsa, Oklahoma, in late June, Ngai Yeung from the University of Southern California shared tweets from users warning about competing groups of protesters and pro-Trump bikers convening on the rally, stoking fears of a violent clash. The scuffles that ensued at the rally were a reminder of the value of social media newsgathering and verification skills in a context of heightened political tension. 

The impact of student research occasionally reached beyond the scope of the US. Sarah Baum from Hofstra University contributed to First Draft’s regular briefings to the United Nations Verified campaign with her findings on the #ExposeBillGates Global Day of Action, when social media actors around the world promoted debunked falsehoods about Gates and vaccines.

Student research was showcased in a weekly newsletter, and some of it was also featured in First Draft’s US 2020 newsletter, which reaches hundreds of journalists across the country. This gave students the opportunity to see how their research informed local media’s understanding of online misinformation.

“I wanted to do something where I felt like I was helping people,” Baum said. “Battling disinformation about a deadly pandemic or about elections is something that I think is a very important public service.”

Over the summer, students shared more than 200 posts in Slack. When the Student Network relaunched in September after a post-summer break and with a new focus on election misinformation, a dozen students contributed more than 100 posts through November 6 and supported both local and national newsrooms as part of ProPublica’s Electionland project. 

“The First Draft Student Network was an incredible opportunity for our students to develop critical skills in social newsgathering and verification in a real-world setting,” said Dr. Carrie Brown, the director of the social journalism program at the Craig Newmark Graduate School of Journalism at CUNY. Students in her program joined the Student Network to support First Draft’s partnership with ProPublica’s Electionland project.

“Even though we couldn’t physically gather in a newsroom due to Covid-19, they were able to be a meaningful part of election coverage. At such an important moment in this country’s history, several of them told me they were grateful to have something to do that would make an impact besides just watching Steve Kornacki.” 

“Verification has long been at the heart of what journalists do,” she continued. “But understanding how to identify and combat misinformation in a responsible way that avoids inadvertently amplifying it is a critical skill for any journalism student today, and it has helped many of our alumni get hired.”

The skills the volunteers sharpened this summer have already paid off for several emerging journalists. Lauren Hakimi from Hunter College was selected as a Hearken Election SOS fellow, drawing on the skills she developed as a Student Network volunteer to support a newsroom in Michigan.

“Before First Draft, I didn’t know anything about monitoring, service journalism, or how misinformation spreads,” she said in an email. “I’m so grateful to have had the First Draft experience and I certainly won’t forget it.” 

Francesca D’Annunzio, from The University of Texas at Austin, began work as a full-time reporter covering election problems and misinformation in North Texas in September. “A lot of people think that non-citizens are consistently heading to the polls — even in our county. And they are worried about voter fraud via mail ballots,” she shared via Slack, highlighting narratives she saw taking shape over the summer. 

First Draft and its partners were humbled by the energy, ingenuity and adaptability of the Student Network volunteers. For many, 2020 didn’t go as planned, but we hope that the emerging journalists who collaborated with us still feel called to work in this challenging but critical profession. 

Isabelle Perry contributed research and reporting to this article.

We haven’t announced our plans for Summer 2021 student programs, but interested students should follow @FirstDraftNews for updates on spring and summer internships.

How QAnon content endures on social media through visuals and code words

As the network of conspiracy theories coalescing under the QAnon umbrella became increasingly visible this year, social networks tightened their moderation policies to curtail QAnon supporters’ ability to organize. Promoters of QAnon-linked conspiracy theories have adapted, using visuals and careful wording to evade moderators and posing a detection challenge for platforms.

In July, citing the potential offline harm that online QAnon activity could cause, Twitter pledged to permanently suspend accounts that tweeted about QAnon and violated the platform’s multi-account, abuse or suspension evasion policies. Facebook updated its policies in August to include QAnon under its Dangerous Individuals and Organizations policy, banning Groups, Pages and Instagram accounts it deemed “tied to” QAnon. In October, YouTube announced its own crackdown on conspiracy theory material used to justify violence, such as QAnon, while Facebook committed to restricting the spread of the “save our children” hashtag used in conjunction with QAnon content.

The platforms’ measures appear to have significantly limited the reach of QAnon-promoting accounts, Pages and Groups: Shortly after its July announcement, it was reported that Twitter had suspended at least 7,000 accounts, while Facebook stated in October that it had removed around 1,700 Pages, 5,600 Groups and 18,700 Instagram accounts. Ever-changing tactics, however, mean QAnon content perseveres on the platforms through memes, screenshots, videos and other methods.

Memes galore

German Facebook Page “AUGEN AUF” (“eyes open”), which has more than 61,000 followers, posts conspiracy theory content, including memes that contain QAnon claims or are reshared from other Pages or accounts frequently posting QAnon material. In one post, the Page shared an image of a white rabbit with the text “leave the matrix, time is up,” bearing the name of a German-language Telegram channel that frequently shares QAnon content. In another post, a “Lord of the Rings” meme attacks platform bans of the conspiracy theory.

On Instagram, a Spanish-language account whose bio mentions the “New World Order” spreads QAnon visuals, including contradictory references to violence. One picture shows “cure for pedophilia” written on two bullets in a rifle magazine, while another image directly supporting the QAnon movement features text outright denying that it endorses violence, ending with the QAnon-associated phrase “WWG1WGA” (“Where we go one we go all”).

Accounts posting QAnon memes and other visuals often do not include captions with the images, making these posts challenging to surface using only a text-based search for QAnon keywords. Facebook has introduced an image option to its CrowdTangle Search tool, making it possible to search for keywords and phrases within images that have been posted to public Pages and Groups. Asked how the platform sweeps images as part of its detection efforts, a Facebook spokesperson told First Draft that it uses a combination of AI and human review across text, image and video posts to enforce its policies. 

Screenshots of banned accounts, Pages and Groups

On Twitter, accounts posting QAnon visuals despite the platform’s crackdown on the community appear to be maintaining their influence and attracting high levels of engagement from other users. Posts allegedly from “Q,” taken from the fringe message board 8kun, have been shared on the platform as screenshots, some attracting thousands of likes and shares.

Even after Facebook’s mass takedown of Pages “representing” or “tied to” QAnon, content from removed Pages has a way of reappearing in screenshots posted to still-active Pages. The Facebook Page “ENJOY THE SHOW,” which has more than 12,600 followers and regularly posts pro-Trump memes and QAnon content, recently uploaded a screenshot of the removed Page “LK Dark To Light” sharing a QAnon meme.

Videos

Videos are another method through which QAnon supporters continue to spread their claims. On YouTube, First Draft found videos with seemingly innocuous descriptions and titles whose content nonetheless promotes the conspiracy theory. The Great Awakening Report YouTube channel, for example, makes little mention of QAnon in its titles, but its videos routinely promote QAnon claims.

One clip featuring an interview with Dylan Louis Monroe, an artist and administrator of the Deep State Mapping Project (whose Instagram account has been removed), contains a deluge of false assertions and conspiracy theories around QAnon, 5G and the coronavirus, incorporating images of “conspiracy theory maps.” Monroe also regularly posts QAnon conspiracy theories via his Twitter account.

Key letters and phrases

Aside from visuals, some supporters have evaded moderation by using key letters and phrases. Some Facebook Groups and Pages subtly weave the letter “Q” into otherwise generic phrases or words to signal support for the conspiracy theory network. The handle of the “ENJOY THE SHOW” Facebook Page, for example, is @ENJQY1.

Other members of the community have pledged to avoid “Q” altogether, instead opting for keywords such as “17” or “17anon” (“Q” being the 17th letter of the alphabet). One Australian Facebook Page with 10,000 followers posts videos feeding QAnon ideas and includes “WWG1WGA” in nearly every post.
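The detection challenge these evasions pose can be illustrated with a toy example. The sketch below is entirely hypothetical, not any platform’s actual system: it shows how a naive keyword filter catches an explicit phrase like “WWG1WGA” but misses a “Q” woven into a handle or “17” used as a stand-in.

```python
import re

# Illustrative only: a naive keyword filter of the kind a platform
# might start from, and why obfuscated variants slip past it.
NAIVE_KEYWORDS = {"qanon", "wwg1wga"}

def naive_flag(text: str) -> bool:
    """Flag a post if any word matches a known QAnon keyword."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return any(token in NAIVE_KEYWORDS for token in tokens)

posts = [
    "WWG1WGA patriots unite",        # caught: known phrase
    "Follow @ENJQY1 for the truth",  # missed: "Q" woven into a handle
    "17anon knows what's coming",    # missed: "17" stands in for "Q"
]
flags = [naive_flag(post) for post in posts]
# flags == [True, False, False]
```

Only the first post is flagged; the obfuscated variants described above pass untouched, which is why moderation that relies on text matching alone tends to lag behind the community’s evolving vocabulary.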

Facing the next challenge

Many of the major social platforms were slow to quash QAnon in its infancy, allowing it to gain a large online following, cross-pollinate with other conspiracy theory communities and spread across borders. (One exception may be Reddit, which banned the QAnon subreddit two years ago, a possible explanation for QAnon’s lack of meaningful presence on the platform.) 

Platforms such as Twitter, Facebook and YouTube have since recognized the potential harm posed by QAnon-promoting accounts and moved to limit their organizational capabilities. Users searching for overtly “QAnon” Groups, Pages, accounts and hashtags to connect with like-minded supporters will discover that doing so is no longer as easy as it was several months ago.

But in the ongoing cat-and-mouse game, the next frontier in QAnon moderation may require further investment in detecting QAnon-related images, videos and keywords. Visuals in particular may pose detection challenges at scale, and while Facebook has indicated that its moderation efforts extend to images and video, it has not released details on how these efforts impact visual content specifically. First Draft also reached out to Twitter and YouTube for comment, but has not yet received a response.

Stamping out QAnon content on these sites entirely may now be impossible, as influencers within the community have demonstrated they can adapt and find new ways onto people’s feeds. But platforms can attempt to limit their impact by staying on top of evolving tactics, thinking ahead and outside the box.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.