The “broadcast” model no longer works in an era of disinformation
By Carlotta Dotto, Rory Smith and Chris Looft
After November 3, some believers in the conspiracy theory that the US presidential election was rigged joined together, undeterred by official assurances of a fair and secure vote, to crowdsource evidence of massive fraud. They organized under hashtags like #DominionWatch, staking out election facilities in Georgia and doxxing voting-machine technicians. The ad hoc group investigation recalled the offline spillover of the QAnon-promoted #saveourchildren, which led participants to flood the hotlines of child welfare services with questionable tips about what they claimed was a vast child-trafficking conspiracy.
Empowering members to be an active part of a conspiracy theory’s development is fundamental to the QAnon community, which has been likened to “a massive multiplayer online game,” complete with “the frequent introduction of new symbols and arcane plot points to dissect and decipher.” And as the year closes, the community model of conspiracy theories has also energized #theGreatReset, which encourages participants to connect the dots between geopolitical events to form a picture of a nefarious plot by a shadowy world government to enslave the human race.
2020 was the year that demonstrated conclusively that effective disinformation communities are participatory and networked, while quality information distribution mechanisms remain stubbornly elitist, linear and top-down.
In trying to explain the influence of false and misleading information online, researchers and commentators frequently focus on the recommendation algorithms that emphasize emotionally resonant material over facts. Algorithms may indeed lead some toward conspiracy theories, but the dominant yet deeply flawed assumption that internet users are passive consumers of content needs to be discarded once and for all. We are all participants, often engaging in amateur research to satisfy our curiosity about the world.
It’s critical we keep that in mind, both to understand how bad information travels, and to develop tactics to counter its spread. We need to move beyond thinking of disinformation as something broadcast by influential bad actors to unquestioning “audiences.” Instead, we’d do well to remember that ideas, including false and misleading information, circulate and evolve within robust ecosystems, a form of collective sense-making by an uncertain online public.
Networks of amateur researchers can be formidable evangelists for their theories. A team of researchers led by the physicist and data scientist Neil Johnson found in a May 2020 investigation of online anti-vaccine communities that while the clusters of anti-vaccine individuals were smaller than those formed by defenders of vaccination, there were many more of them, creating more “sites for engagement” — opportunities to persuade fence-sitters. In our research into the “Great Reset” conspiracy theory, we found the content underpinning the theory had spread widely across Facebook in many local and interest-based spaces, garnering considerable opportunities to persuade the curious.
Crucially, the networks powering conspiracy theories like QAnon and the anti-vaccine movement are democratized, encouraging mass participation. On the other hand, the fact checks deployed against conspiracy theories almost always come from verified sources that are less than accessible to the general public: government agencies, news outlets, scientists. Networked conspiracy communities are proving resilient against this top-down approach.
The conspiracy theory network in action
This year, the World Economic Forum’s founder, Klaus Schwab, launched a messaging campaign calling for sweeping political and economic changes to address the inequalities laid bare by the coronavirus pandemic — a “Great Reset.” The language used by Schwab and allies, including Prince Charles and Canadian Prime Minister Justin Trudeau, sparked a pervasive conspiracy theory that was advanced both by influential surrogates such as Fox News host Tucker Carlson and a diverse web of communities online.
To illustrate the networked dynamics that reinforce conspiracy theories, we examined the Great Reset theory’s rise to prominence on Facebook, with an eye on the impact of the fact checks and debunks published by trusted sources in response.
We gathered 7,775 public Facebook posts that contained links and that mentioned “Great Reset” between November 16, when the conspiracy theory trended on Twitter, and December 6, and found that just a tiny fraction of the posts shared in that time included debunks or fact checks challenging the conspiracy theory.
Instead, the leading posts in our dataset advanced various aspects of the complex conspiracy theory. They warned of “globalist overlords,” a “socialist coup,” and an “organized criminal cabal” planning a “genocide.” The most-shared URL quoted Carlson referring to the Great Reset as a “chance to impose totally unprecedented social controls on the population in order to bypass democracy and change everything.” Fact checks appeared in just one per cent of the posts mentioning the Great Reset, and even then, some of the top posts linking to them came from accounts condemning the news outlets, such as one post alleging “fake news and gaslighting,” further advancing the conspiracy theory narrative. The network of Groups and Pages sharing these conspiracy theory URLs among one another drove up readership and engagement on those links, unhindered by the fact checks offered by traditional sources.
We took a look at the websites posted most often in our dataset and found that YouTube was the most popular domain, appearing in 1,945 posts — about a quarter of them — followed by BitChute, another video platform known for its far-right and conspiracy theory-related content.
Top domains in public Facebook posts mentioning the “Great Reset” between November 16 and December 6, 2020
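For readers curious how a tally like the one above is produced, here is a minimal sketch in Python. The post_urls list is a hypothetical stand-in for the outward links extracted from the CrowdTangle dataset described in the methodology note at the end of this piece; the counting logic illustrates the general technique, not our exact pipeline.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical stand-in for the outward links found in the Facebook posts.
post_urls = [
    "https://www.youtube.com/watch?v=abc123",
    "https://www.bitchute.com/video/xyz789/",
    "https://www.youtube.com/watch?v=def456",
]

def domain_of(url: str) -> str:
    # Normalize a URL to its host, dropping a leading "www."
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

# Tally how often each domain appears across the dataset.
domain_counts = Counter(domain_of(u) for u in post_urls)
for domain, count in domain_counts.most_common(10):
    print(f"{domain}: {count} posts")
```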
After isolating the individual YouTube videos appearing in the dataset, we found that the 20 most-viewed videos all advanced a version of the conspiracy theory narrative. Together, they had been viewed nearly 9 million times on the platform as of December 17. The widespread sharing of YouTube and BitChute links underscores how disinformation seamlessly skips among platforms, unhindered by moderation.
The prominent role of YouTube in the conspiracy theory makes sense, given what we know about the “echo chamber” of right-wing content on the platform. YouTube’s recommendation algorithm certainly plays a role in trapping viewers in echo chambers, also known as “information bubbles.” But more importantly, this content proliferates with the help of organic peer-to-peer networks that drive audiences to conspiracy theory URLs and videos.
Visually mapping the ecosystem of Facebook Pages and Groups posting links about the “Great Reset” and the URLs most frequently shared in the set, we found further evidence of the power imbalance between the conspiracy theory and the fact checks.
In the network visualization below, Pages and Groups that frequently share links among one another are grouped more closely together. The Facebook Groups and Public Pages promoting the conspiracy theory — represented by purple dots in the diagram — form a dense cluster of highly interconnected accounts, pointing to an ad hoc network that drives up engagement figures on conspiracy theory content. Importantly, we can’t conclude that the density of this network points to deliberate coordination. Rather, it suggests that like-minded Groups and Pages amplify a narrative by widely sharing, often among one another, the URLs and videos that advance it.
On the other hand, the orange dots representing fact checks and debunks spread in relatively small and isolated community clusters, remaining marginalized from the concentrated group in the center. The most frequently posted fact check — the Daily Beast’s “The Biden Presidency Already Has Its First Conspiracy Theory: The Great Reset” — couldn’t penetrate the core cluster of Pages and Groups championing the “Great Reset” conspiracy theory, even though its URL appeared in 37 different posts. The fact checks are having trouble keeping up.
The network of Facebook Pages and Groups sharing links about the “Great Reset”
Combating networked disinformation
A new approach is needed, one that moves away from “chasing misinformation” and instead proactively challenges and pre-empts its emergence. Rather than focusing on what messaging to send, and who should send it, we need to focus on initiatives based on co-creation and participatory strategies with existing communities. Public health professionals have shown the value of participatory strategies in dealing with rumors, falsehoods and stigma around HIV and the Ebola outbreak. The appeal to expertise inherent in traditional fact checking prevents trusted sources, however large their audiences, from drawing on the same kind of participant networks that power effective communications strategies — as well as conspiracy theories. It won’t be an easy task, but in 2021, we’ll have to transform the fact-checking strategy built around the model of a one-way broadcast into one that addresses the complexity of disinformation networks before more fence-sitters are pulled into the fray.
Note on methodology:
We used CrowdTangle’s API to collect 11,808 posts from Facebook Groups and unverified Facebook Pages that mentioned “great reset” or “greatreset” between November 16, 2020 and December 6, 2020. We then filtered for posts that included outward links to other websites and social media platforms. To understand the potential impact of fact checks on the conspiracy theory, we also identified all the URLs from reliable news organizations that debunked the conspiracy theory and ran those URLs through CrowdTangle’s links endpoint to gather all the Facebook posts that shared them. These two datasets, which included a total of 7,775 posts, were then merged for analysis using Pandas and Gephi. In Gephi, we resized the nodes based on their in-degree values in order to highlight the URLs that were most frequently shared in this set. URLs with more shares are represented by larger nodes. We used the “ForceAtlas 2” layout to bring accounts that frequently share among each other closer together.
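As a rough illustration of the graph-building step, the sketch below uses networkx and Pandas to construct a directed account-to-URL network and compute the in-degree values that determine node size. The column names and example rows are invented for illustration and do not reflect CrowdTangle’s actual export schema; exporting to GEXF is one way to hand the graph to Gephi for the ForceAtlas 2 layout.

```python
import networkx as nx
import pandas as pd

# Invented rows standing in for the merged dataset: one row per post,
# recording which account shared which URL. (Column names are
# illustrative, not CrowdTangle's actual export schema.)
posts = pd.DataFrame({
    "account": ["Group A", "Group B", "Page C", "Group A"],
    "url": [
        "https://youtube.com/watch?v=abc",
        "https://youtube.com/watch?v=abc",
        "https://youtube.com/watch?v=abc",
        "https://example-factcheck.org/great-reset",
    ],
})

# Directed graph with an edge from each sharer to the URL it shared.
graph = nx.DiGraph()
for row in posts.itertuples():
    graph.add_edge(row.account, row.url)

# A URL's in-degree is the number of distinct accounts sharing it,
# the value used to resize nodes in the visualization.
in_degrees = dict(graph.in_degree())
print(sorted(in_degrees.items(), key=lambda kv: -kv[1]))

# Export for layout in Gephi (e.g., ForceAtlas 2).
nx.write_gexf(graph, "great_reset_network.gexf")
```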
AI won’t solve the problem of moderating audiovisual media
In testimony before the US Senate in 2018, Facebook chief Mark Zuckerberg said he was optimistic that in five to ten years, artificial intelligence would play a leading role in automatically detecting and moderating hate speech.
We got a taste of what AI-driven moderation looks like when Covid-19 forced Facebook to send home its human content moderators in March. The results were not encouraging: greater quantities of child pornography, news articles erroneously marked as spam and users who were temporarily unable to appeal moderation decisions.
But the conversation about automated moderation is still mostly focused on text, despite the fact that visuals, videos and podcasts are powerful drivers of disinformation. According to research First Draft recently published on vaccine-related narratives, photos and visuals accounted for at least half of the misinformation that was studied.
One consequence of this text-centric focus is that platforms geared toward audiovisual content, such as YouTube, TikTok and Spotify, have sidestepped much of the scrutiny. Part of this is a structural issue, says Evelyn Douek, a lecturer at Harvard Law School. Watching visual media is more time-intensive, and the journalists and researchers who write about platform moderation spend more time on text-based platforms like Facebook and Twitter. “Text is easier to analyze and search than audiovisual content, but it’s not at all clear that that disproportionate focus is actually justified,” Douek says.
Unlike Zuckerberg or Twitter’s Jack Dorsey, YouTube’s CEO, Susan Wojcicki, has not been called in for questioning by the Senate Judiciary Committee, even though many of the company’s moderation policies have been opaque and the subject of criticism. For example, YouTube recently announced that it would remove videos claiming that voter fraud altered the outcome of the US presidential election, only to later tell some users how to post similar content to bypass such moderation. Others were quick to point out that policy enforcement seems to vary according to inconsistent semantic distinctions.
We know even less about moderation on TikTok, a platform that is expected to top one billion monthly active users in 2021 and which once instructed its moderators to remove content of people who have an “abnormal body shape” or poor living conditions.
When audiovisual disinformation is both widespread and difficult to define, how prepared are the platforms to moderate this type of content, and how prepared are researchers to understand the consequences of that moderation? And are we as confident as Zuckerberg that AI will help?
Automated platform moderation comes in two main forms: “matching” technology that uses a database of already-flagged content to catch copies, and “predictive” technology that finds new content and assesses whether it violates platform rules. We know relatively little about predictive tech at the major platforms. Matching technology, on the other hand, has had some major successes for content moderation, but with important caveats.
One of the first models of moderating audiovisual content at scale was a matching technology called PhotoDNA, developed to detect child sexual abuse imagery. Adopted by most of the major platforms, PhotoDNA is largely viewed as a success, in part due to the relatively clear boundaries around what constitutes child sexual abuse material, but also because of the clear payoff in taking down this content.
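PhotoDNA’s algorithm is proprietary, but the general idea behind matching technology can be illustrated with a toy “average hash”: shrink an image to a tiny grayscale grid, record which pixels are brighter than average, and compare the resulting bit strings. The sketch below is exactly such a toy, with invented filenames and an arbitrary distance threshold; real systems use far more robust hashes and carefully tuned thresholds.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Toy perceptual hash: shrink to an 8x8 grayscale grid, then record
    # which pixels are brighter than the image's mean brightness.
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of already-flagged images.
flagged_hashes = {average_hash("flagged_example.jpg")}

# An upload "matches" if its hash is within a small distance of a flagged
# hash, which tolerates re-compression and minor edits.
upload_hash = average_hash("new_upload.jpg")
if any(hamming(upload_hash, h) <= 5 for h in flagged_hashes):
    print("Match: route to human review")
```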
Similar technology was later developed to identify content that might be connected to suspected terrorist activity and collect it in a database run by the Global Internet Forum to Counter Terrorism (GIFCT), a collaboration of technology companies, government, civil society and academia. Following the 2019 Christchurch shooting, which was livestreamed on Facebook, there was renewed government support for GIFCT, as well as growing pressure for more social platforms to join.
Yet even GIFCT’s story highlights huge governance, accountability and free speech challenges for platforms. For one, the technology struggles with interpreting context and intent, and with looser definitions about what constitutes terrorist content, there is more room for false positives and oppressive tactics. Governments can use “terrorist” as a label to quiet political opposition, and in particular, GIFCT’s focus on the Islamic State and Al-Qaeda puts content from Muslims and Arabs at greater risk of over-removal.
Data from GIFCT’s recent transparency report also shows that 72 per cent of content in its database is “Glorification of Terrorist Acts,” a broad category that could include some forms of legal and legitimate speech. This underscores one of the biggest challenges for platforms looking to moderate audiovisual content at scale: Legal speech is a lot harder to moderate than illegal speech. Why? Because even though the boundaries of illegal speech are fuzzy, they are at least defined in law.
However, speech being legal does not make it acceptable, and the boundaries of acceptable legal speech are bitterly contested. This poses an extra challenge for moderating audiovisual misinformation. Unlike child sexual abuse material, misinformation is not a crime (some countries have criminalized misinformation, but in most cases its definition remains vague). It is also extremely difficult to define, even if you try to set parameters around categories defined by “authoritative” sources and match content against an existing database, as PhotoDNA and GIFCT do.
For example, in October, YouTube said it would remove any content about Covid-19 vaccines that contradicts consensus from local health authorities or the World Health Organization. But as we’ve learned, identifying consensus in a rapidly developing public health crisis isn’t easy. The WHO at first said face masks did not protect the wearer, amid a global shortage of PPE, before reversing course much later on.
Which is to say: Even though automated moderation has had mixed results with certain types of content, moderating at scale — particularly when it comes to visual disinformation — poses enormous challenges. Some of these are technological, such as how to accurately interpret context and intent. And some of these are part of a wider discussion about governance, such as whether platforms should be moderating legal speech and in what circumstances.
To be sure, this is not an argument for harsher content moderation or longer working hours for human content moderators (who are mostly contractors, and who already have to watch some of the most traumatizing content on the internet for less pay and fewer protections than full-time employees). What we do need is more research about visual disinformation and the platforms where it spreads. But first we’ll need more answers from the platforms about how they moderate such content.
Combating misinformation in under-resourced languages: lessons from around the world
2020 has taught us that virtually all communities struggle with the effects of misinformation. While those that speak majority languages have access to fact checks and verified information in their native tongues, the same is not always true of other communities. Minority languages are often under-resourced by both platforms and fact checking organizations, making it even more difficult to tackle misinformation and build up media literacy in these communities.
As part of its Rising Voices initiative, Global Voices’ Eddie Avila and First Draft’s Marie Bohner spoke to experts from around the world to see how local communities are tackling the threat of misinformation in their native languages. Among them are Rahul Namboori of Fact Crescendo in India, Endalkachew Chala of Hamline University, from Ethiopia, and Kpenahi Traoré of RFI, from Burkina Faso, who shared their perspectives on this unique challenge.
India is home to 22 official languages and at least 500 unofficial ones, making it especially difficult for fact checkers, journalists and educators to ensure everyone has access to verified information. The spread of false information has had especially dangerous repercussions in the country, as rumors have incited attacks and riots against ethnic and religious minorities, leaving dozens dead in the past few years.
It is in this context that Fact Crescendo is fighting false information, Namboori explained. The organization verifies information in seven regional languages, in addition to English and Hindi, with the help of localized teams to both find rumors and prevent their spread. Fact Crescendo’s regional teams are made up of local journalists who speak the local language and understand the cultural and political environments in which they operate. Using tools such as Facebook’s CrowdTangle, the teams monitor hundreds of Groups and social media accounts to track misleading and false information. Fact Crescendo also uses WhatsApp tip lines and groups so its fact checkers can communicate directly with local communities and provide them with verified information in their own languages.
It is not just about how misinformation travels between major languages spoken in India such as English or Hindi, but also how misinformation from abroad makes its way into the country. False claims about the coronavirus from Italy or Spain jumped from Italian and Spanish into English and Hindi before making their way into regional languages, Namboori said. A layer of localized context is added with each jump, making the misinformation that much more believable and allowing it to echo in linguistic communities with little or no access to reliable information.
Ethiopia is similarly diverse when it comes to languages, with three major ones and 86 others spoken in the east African country. “Most of these languages are under-resourced, and there is no fact checking in these languages,” said Endalkachew Chala. Even though several of these languages are heavily used on social media platforms, their speakers do not have easy access to verified or reliable information.
Recently, internet shutdowns in the northern Tigray region due to the conflict in the country have exacerbated the issue of under-resourced languages. The shutdowns have led to the creation of “two universes of information, where people living in the Tigray region do not know what is going on,” said Chala. The people there only have access to information broadcast by regional media, which has led to a “disjointed” understanding of the situation, as those within Tigray have a different perspective to those outside, Chala added.
Chala has also seen health-related misinformation proliferate across Ethiopia. False claims about the Covid-19 pandemic were rife in Ethiopia in several languages, spreading both within and across linguistic communities. The lack of fact checking organizations or access to reliable information in under-resourced languages allowed these rumors to permeate into smaller rural communities and linguistic minorities.
“There are people from across Ethiopia who are willing to verify and fact check,” said Chala, but foreign funding almost always goes to those with political connections or to speakers of dominant languages in urban centers. He added that platforms and foreign companies need to hire native speakers of minority languages to help verify information for these communities, so as not to rely on outsiders.
Elsewhere in Africa, speakers of the minority Bambara language in the western part of the continent are also dealing with similar resourcing issues. Although members of the Bambara communities receive their news from local media on television and radio and may therefore avoid the tsunami of online misinformation, they are also entirely dependent on traditional media outlets, said Kpenahi Traoré. The responsibility of these outlets to report factual information is therefore even greater because Bambara communities often do not have access to the internet to fact check what they hear. Although local organizations do not currently have the resources to transmit factual information to their communities, there are foreign organizations including RFI Mandenkan that fact check information in Bambara and other local languages, said Traoré.
In the Amazon, where indigenous communities predominantly communicate orally, it has been key that resources — particularly those around the coronavirus — were similarly “localized,” said Avila of Global Voices. “Who or what may be trusted in one culture may not be the same in another culture, and so keeping that in consideration is really important in the kinds of things we’re seeing online.”
While individuals and certain organizations are making an effort to provide speakers of under-resourced languages with factual information, linguistic minority populations are still at a distinct disadvantage to those who speak dominant languages. Even as the world has become more globalized through the internet, those with less connectivity or who communicate differently are left behind. Media outlets and social media platforms can invest in these local communities to build media literacy and to ensure they have the same access to verified information in their languages as everyone else.
And yet, these globe-spanning examples also underscore the need to address broader questions like literacy rates or internet connectivity when designing solutions to counter the problem of misinformation in minority languages. Understanding how linguistic communities communicate is key to building the required infrastructure to improve media literacy.
The 2020 rabbit hole: Why conspiracy theories draw people in
Conspiracy theories have simmered on the fringes of society for years. But in 2020 they found new audiences: Celebrity chef Pete Evans welcomed foodies to vaccine conspiracy theories in Australia, while UK health care workers started anti-vaccine Facebook Groups where some falsely claimed the Covid-19 vaccine was “poison.” Anti-5G figures including British conspiracy theorist David Icke became household names as people vandalized phone masts across Europe; yoga influencers and suburban women in the US adopted QAnon beliefs and wellness bloggers made them pretty in pastels.
The pandemic created the ideal framework for mass distrust of institutions, thrusting these years-old, baseless beliefs into the mainstream. According to studies and surveys, a significant number of people globally found a tidy home in conspiracy theories promising to connect the dots between 2020’s turbulent events and their lives.
How did people become exposed?
The channels for fringe theories and misinformation were already in action on social media as millions entered lockdown at the start of 2020. But the nature of the digital communication landscape, evolving strategies from conspiracy theory supporters, and the additional time spent online while in lockdown helped content challenging evidence-based views reach ordinary people on a whole new level.
With a simple click, friends could invite each other to join any of a growing number of online spaces opposing government restrictions, such as the Yellow Vest protest groups in France and Ireland, and anti-lockdown group Stand Up X in the UK. There, people were eventually exposed to further unfounded (and sometimes increasingly extreme) ideas, such as when Stand Up X partnered with conspiracy theory outlet Eyes Wide Open UK to organize nationwide rallies promoting QAnon-linked beliefs. Some spaces grew exponentially: Facebook Group “Stop 5G UK,” created in 2017, gained more than 28,000 followers between March and April, taking its membership to more than 56,600 before it was banned by the platform.
As platforms began to take action against some of these groups, conspiracy theory communities adapted to bans by creating channels on alternative networks, such as Telegram and Parler, advertising such spaces to users ahead of anticipated deletions. The network of conspiracy theories coalescing under the QAnon banner proved particularly skilled at evading moderation while recruiting members, infiltrating established anti-trafficking groups, adopting the agreeable slogan “save our children” and taking QAnon offline through rallies across the globe. Other conspiracy theorists sidestepped moderation by publishing newspapers and leaflets, reaching new audiences on a physical, local level.
Why are people susceptible?
Even before the pandemic hit, researchers had found that situations of anxiety, uncertainty and loss of control make people more susceptible to believing conspiracy theories — they provide answers and relief, while those usually drawn to them tend to be relatively untrusting and concerned for their personal safety. The threat of a deadly disease, fast-changing science and decreased mobility, however, made these attitudes more widespread, with more people probing for explanations as to why this was happening. As governments fumbled their coronavirus responses and trust in them diminished, conspiratorial outlets promising neatly packed explanations also became more enticing.
Searching for answers in times of great confusion and grief can send people down dark rabbit holes, as experienced by Dannagal Young, a social psychologist who turned to conspiracy theories after her husband’s terminal diagnosis. “These feelings of collective uncertainty, powerlessness, and negativity likely account for the popularity of Covid-related conspiracy theories circulating online,” she wrote.
Amid ambiguous and terrible events, conspiracy theories increase perceptions of control by providing a channel for anger or fear, said Young. Psychologists have found that feelings of anger are likely to be followed by confidence and urges to take action. In the pandemic, creating an anti-mask Facebook Group, taking to the streets to protest or attacking phone masts may have been affirming reactions.
Increased social isolation, alienation and loneliness have also been key factors; several studies suggest links among conspiracy theories, social exclusion and ostracism. More time spent on social media globally meant internet users were more likely to find and fall into digital rabbit holes. One demonstrator at a QAnon rally in Minnesota told CNN reporter Donie O’Sullivan that she had more time to be on social media and “research” QAnon during lockdown — leaving her an ardent believer in the conspiracy theory.
Those who have suffered the worst financial losses during the pandemic may be more vulnerable — a University of Kent study suggested disadvantaged populations are more likely to find conspiracy beliefs appealing.
Conspiracy theories are often partisan, feeding from and into people’s political leanings. But they can also shape people’s outlooks and affect relationships. Reporting throughout the pandemic has shown how these theories have divided families. BBC disinformation reporter Marianna Spring documented how conspiracy theories broke up Sebastian Shemirani’s relationship with his mother, Kate Shemirani, a prominent British conspiracy theorist. Sebastian said that since his mother skyrocketed to notoriety on the coattails of the pandemic, she has become “too far gone” for them to repair their relationship.
It’s no coincidence that the conspiracy theories reaching new heights during the pandemic contain common themes around institutions and elites. They are built on similar grounds, with plenty encompassing old, antisemitic tropes.
Conspiracy theories are no longer neatly siloed ideas, as multiple theories coalesce into one worldview. What has developed is something more akin to a conspiracy theory mindset, in which people choose from a buffet of false ideas. Anyone, from a nurse to your hairdresser, is vulnerable.
With Covid-19 vaccines, potential mutations of the coronavirus, further economic fragility and US president-elect Joe Biden’s inauguration ahead, 2021 will provide more fodder for the networks of conspiracy theories that have expanded and merged this year. Journalists and researchers need to fill data deficits faster and consider how they can better connect factual information with audiences. Platforms should refine their approaches to countering the evolving tactics of conspiracy theorists — banning harmful groups long after they’ve already proliferated is not enough. Conspiracy theories should no longer be viewed as isolated strands — they need to be treated as an interconnected web of ideas. Otherwise, they will continue to promise a tempting nest full of answers for those asking questions.
The return to old-school methods to sow chaos
As millions of people around the world were under lockdown this year, social media became a lifeline for many. While researchers and journalists were focused on mis- and disinformation flourishing on the main social media platforms, information disruptors returned to old-school methods of sowing chaos and confusion through leaflets, billboards, emails, SMS and robocalls.
The pandemic became an opportunity for the dissemination of Covid-19 hoaxes and conspiracy theories through letterboxes straight into people’s homes. A leaflet sent out in the UK claimed that the government, the media and National Health Service representatives were attempting to “create the illusion of an unprecedented deadly pandemic” to justify “extreme lockdown measures.” People living near the Canberra Hospital in Australia received a flyer alleging that Covid-19 is being spread by the government through the water supply, and that a vaccination would contain a tracking device. Misleading claims about the virus were also printed on billboards and posters: An Indian example promoted essential oils to protect people from Covid-19. Two US billboards bore the message that “It’s NOT about a ‘VIRUS’! It’s about CONTROL” alongside an image of a crash-test dummy wearing a mask.
A billboard in India that promotes essential oils to protect people from Covid-19. Photo by Ali Abbas Ahmadi
People received emails from fraudsters pretending to be with the Ministry of Health in Colombia, telling them they had to take a mandatory Covid-19 test. Similar attempts to gain access to personal information were conducted over text messages and phone calls, such as in South Korea, which saw a rise in “smishing”: scam text messages that spread false information about Covid-19 cures and offered free masks in exchange for personal information.
The US presidential election was a greatest-hits compilation of the old-school genre, with unsolicited, misinformation-filled newspapers such as The Epoch Times sent to households across the country, unofficial “ballot boxes” erected on sidewalks, and robocalls telling people to “stay home, stay safe” on Election Day that reached millions.
As the social media platforms become more active in tackling false claims around politics and health, disinformation agents are searching for new ways to spread their messages.
Darren Linvill, an associate communications professor at Clemson University, told First Draft: “If you want to spread disinformation, you don’t go where everybody is watching. You go somewhere where nobody is looking.” Online and offline channels are not mutually exclusive to disinformation actors, who often use multiple platforms to spread untruths, Linvill said. “We frequently saw content from text messages that were screen-grabbed and shared on social media.”
For purveyors of disinformation, one advantage of offline distribution is that provenance can be obscured — physical copies don’t leave digital traces that could point people to the source. Amid worldwide protests against systemic racism this year, misleading flyers designed to undermine Black Lives Matter were circulated in the US and the UK. In both cases, it was unclear who created the leaflets. As Full Fact noted, “almost anybody can make a sticker that looks like an official one, whether they may support or oppose the goals of the group in question.”
And weeks before the US election, suspicious flyers threatening Trump supporters were sent to residents in New Hampshire. Photos of these flyers, whose origin and authenticity were unknown at the time, were uploaded and amplified by social media influencers and partisan groups. Some social media posts with high engagement falsely claimed residents in Kansas had received this letter. Kansas City police investigated the rumor and reported that no resident had received this message on paper, but it did appear on social media.
As mis- and disinformation researchers know, leaflets, billboards, emails, SMS and robocalls present logistical challenges as it is impossible to be everywhere at once. Unless these messages are flagged by the recipient, they can remain under the radar. That makes it challenging to determine how far these hoaxes are spreading and — if the authors choose to remain anonymous — who is behind them.
ProPublica senior technology journalist Jack Gillum reported on a robocall operation that reached at least 800,000 residents in key states and may have affected voter turnout in the 2020 US election. “When it comes to robocalls, getting data for that is really difficult,” he said. “I didn’t know what data is easily available and that we can confirm that sort of stuff, so basically I had to rely on US government sourcing.”
Perhaps the most high-profile example in 2020 was the case of a fraudulent email targeting Democratic voters, sent before the presidential election. It prompted a press conference hosted by the FBI and the nation’s director of national intelligence. Experts say it was the work of Iranian hackers posing as the far-right Proud Boys group. Evie Sorrell, an undergraduate student at the University of Pennsylvania who lives in Philadelphia, was among those who received a threatening email telling her to vote for Trump.
“When I first got the email, I was like, ‘Huh, that’s really weird. Also pretty illegal,’” Sorrell said. “And then I realized that if it was real, then they might have information on me.”
Of the episode, including finding out that Iran might have been behind it, she said she felt violated. She speculated that her knowledge of internet culture and digital literacy skills might have put her at an advantage: “You could definitely be swayed to at the least not vote, or take it very seriously and vote for Trump because you’re worried for your life and safety.”
As Sorrell’s experience shows, many of these messages can feel uncomfortably intimate to the recipient, as they were sent directly to homes or mobile phone numbers. Linvill says, “They have the potential to be more persuasive, simply because they’re more personal. Because they’re sent to you directly, as opposed to messages on social media that you scroll down through and it’s one message in a list of messages.”
There are laws regulating false advertising and broadcasting materials, but these vary from country to country, as does enforcement. In January, the US government took additional steps to limit the scourge of illegal robocalls, putting the onus on phone service providers instead of consumers. But days before the US election, voters were still flooded with text messages containing damaging disinformation narratives, as The Washington Post reports. Peer-to-peer texting platforms used during elections are not as clearly covered by the anti-robocall rules, as the companies contend they are not an automated service.
As we look to 2021, it’s important to remember that misinformation is not just happening on the major social platforms. Journalists and researchers will need to devise ways to understand the complexities of scope and impact, beyond just hoping concerned citizens will report problematic emails and phone calls.
Online influencers have become powerful vectors in promoting false information and conspiracy theories
Celebrities used to be singers, actors and athletes we got to know through a little black box called television. The free and limitless access of social media has since stripped away the distance between celebrities and fans, allowing social interaction at the touch of a button. With visually appealing content and carefully crafted captions, celebrities can attract a massive global following eager to know their every move and thought, whether they are established household names using new platforms to speak to fans (Madonna with her 15.6 million followers on Instagram) or budding superstars who shot to viral fame (Charli D’Amelio, a 16-year-old TikTok influencer who recently became the platform’s first creator to hit 100 million followers).
However, celebrities and influencers must be aware of their responsibility, given how quickly they can spread misinformation to their millions of fans. The damage can be exacerbated by media reports that repeat the misleading or false claims for clicks. Regardless of whether they spread misinformation intentionally or not, celebrities are complicit in information disorder.
This year has shown the role celebrities can play in amplifying misinformation. In January, singer Rihanna, who has close to 100 million followers, shared a misleading image of the Australian bushfires on Twitter. In April, actor Woody Harrelson shared the “negative effects of 5G” with his two million Instagram followers. And in July, rapper Kanye West told Forbes that he believed a coronavirus vaccine could “put chips inside of us.” West has more than four million followers on Instagram. The additional challenge is that oxygen can create more oxygen — a celebrity tweeting a rumor or falsehood can become a news story in itself, causing even more people to be exposed, as we saw when Harrelson and other celebrities’ posts about 5G tipped a fringe conspiracy theory into the mainstream. The groundless rumors about 5G resulted in real-world harm, with phone masts being burned to the ground in the UK.
This isn’t just a Western phenomenon. In India, Bollywood movie star Amitabh Bachchan has acquired a reputation for spreading false and misleading information online, such as claims that applauding could “destroy virus potency” and that homeopathy could “counter corona.” Both tweets were shared by hundreds of Bachchan’s 44.8 million followers. Meanwhile, in Australia, celebrity chef Pete Evans has promoted a plethora of conspiracy theories to his millions of followers, such as the “potential health risks” of 5G (similar false claims about 5G have also been shared across the world, such as in the UK and the Philippines) and an artist’s “map” that purportedly demonstrates the connections among various QAnon conspiracy theories. Multiple media outlets, including 7News and Daily Mail, repeated those baseless claims in their headlines, inadvertently reinforcing them. Only after Evans posted an image with a neo-Nazi symbol in November did he start to face repercussions, with brands and publishers banishing him.
Fans’ strong emotional connection to their idols and heroes means they are often predisposed to believe them and trust their messages. This trust disarms fans in the face of mis- and disinformation spread by the celebrities and influencers they follow, nudging them to research and possibly repeat false narratives. For example, Amitabh Bachchan’s tweet from March falsely claiming that houseflies spread the coronavirus was followed by a spike in searches for “housefly” and “मक्खी” (housefly in Hindi) in India, according to Google Trends data.
Google Trends data reflects a rise in searches for the words “housefly” and “मक्खी” in India shortly after actor Amitabh Bachchan falsely claimed houseflies can spread the coronavirus. (Graphic by Ali Abbas Ahmadi)
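A spike like the one in that chart can also be flagged programmatically. The sketch below compares each day’s search interest with a trailing baseline and flags large deviations; the interest values, window size and threshold are invented for illustration rather than drawn from the actual Trends data.

```python
import statistics

# Invented daily search-interest values (0-100, Google Trends style);
# the jump near the end stands in for the post-tweet spike.
interest = [12, 10, 11, 13, 12, 9, 11, 10, 58, 71, 44]

WINDOW = 7       # trailing days used as the baseline
THRESHOLD = 3.0  # flag values this many standard deviations above baseline

for day in range(WINDOW, len(interest)):
    baseline = interest[day - WINDOW:day]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
    if interest[day] > mean + THRESHOLD * stdev:
        print(f"Day {day}: interest {interest[day]} is a spike "
              f"(baseline {mean:.1f}, stdev {stdev:.1f})")
```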
Kate Starbird, a professor at the University of Washington, goes a step further, arguing that fans engage in “participatory disinformation” because they are “inspired” by influencers to voluntarily create and expand similar false narratives and conspiracy theories.
The role of emotion is often overlooked in media literacy campaigns even though it is a strong driver of shares on social media, and therefore a powerful force in disinformation campaigns. Being aware of one’s emotions and how they can be manipulated, as outlined in First Draft’s “psychology of misinformation” series, is one way fans can be better inoculated against misinformation.
The imbalance of power between celebrities and influencers and their fans is further accentuated by social media companies’ sometimes confusing and often inconsistent policies related to misinformation. According to both Facebook and Twitter, a verified badge is granted to accounts that are notable and authentic, but there appear to be no consequences when authentic, verified accounts share lies and half-truths. In addition, politicians are exempted from Facebook’s Third-Party Fact-Checking Program, so any political misinformation they share usually remains online.
2020 has also seen the continued rise of “celebrity journalists,” i.e., news anchors and commentators who have built sizeable followings because they tend to sensationalize their reporting and do not appear to be bound by traditional journalistic principles of balanced, fair and factual reporting. For instance, Indian news anchor Arnab Goswami has acquired a reputation for sensationalist reporting and for promoting hateful information and conspiracy theories — such as falsely accusing members of a Muslim missionary group of spreading Covid-19. In Australia, commentators such as Sky News’ Peta Credlin and Alan Jones are often misconstrued as journalists, with their views mistaken for facts. Similarly, in Hong Kong, commentators such as Lee Yee and Chip Tsao, along with members of the pro-democracy camp, have voiced support for US President Donald Trump over his hardline approach to the Chinese Communist Party; their opinion pieces are often taken as representing their publishers’ stance.
Celebrities and online personalities enjoy an outsized influence on social media, and 2020 has shown they are a disproportionately important vector in the disinformation ecosystem. The size of their direct audience on the platforms, combined with the recognition their online activities receive from the mainstream media, can make them dangerous players when they share false or misleading information. In 2021, let’s avoid giving oxygen to inaccurate and false information and be mindful about the amount of airtime given to influencers and celebrities, especially those who are repeat offenders.
“Do No Harm” — Assessing the impact of prioritizing US political disinformation over health misinformation in 2020
Since 2016, the “field of misinformation” has been disproportionately focused on political disinformation, with emphasis on Facebook and Twitter. Globally, the larger threat has been health and science misinformation across a range of platforms. But the field’s focus was not determined by the communities most affected by misinformation, nor by the relative harm of different types of misinformation. Instead, it was set by US-based university researchers, media outlets, philanthropic institutions and Silicon Valley-based platforms, whose obsession with election-related disinformation directed the focus of misinformation initiatives, interventions and research projects over the past four years.
This prioritization by news and research organizations has left the US and other countries ill-equipped for the pandemic: health authorities have had to play catch-up on the challenges of misinformation, interventions have focused disproportionately on slowing misleading political speech, and journalists remain unprepared to report on scientific research. To prepare for the growing levels of distrust in science and expertise, alongside the flood of actual misinformation we expect to see in 2021, researchers, technologists, journalists, philanthropists and policymakers must refocus their attention on health and science communication, most notably around medicine and climate.
There are three recommendations that should be considered. The first is the need to educate journalists about science and research so they are able to adequately question press releases from academics, researchers and pharmaceutical companies when necessary. The second is a need to educate science and health professionals about the current information ecosystem. In this fragmented, networked world, the caution and discipline that define scientific discovery are being weaponized by bad actors. The third is the critical need to raise awareness about the harm done to communities of color globally, and how that harm has created a deep distrust of medical health professionals. The focus on misinformation should not cover up an urgent need to understand these dynamics.
1) The need to educate journalists about science and research
Preprints are scientific reports that are uploaded to public servers before the results have been vetted by other researchers, a process known as peer review. The purpose of preprints is to give researchers an early look at new research, and to encourage others to try to replicate and build on the results. In 2020, these preprints reached a larger audience than usual because of Twitter bots such as bioRxiv and Promising Preprints, which automatically tweeted new publications, giving researchers and journalists immediate access to non-peer-reviewed Covid-19 studies. Unfortunately, these studies, often based on small sample sizes or very preliminary research, were published by media outlets or re-shared on social media without the necessary caveats, amplifying early findings as fact.
For 2021, journalists and communication professionals in the field of misinformation should ensure they include the necessary disclaimers when reporting on non-peer-reviewed research, and more frequently consider whether reporting on such early research benefits the public. Similarly, platforms should train fact checkers and internal content moderation teams on how to respond to health and science information. Many of the fact checkers in Facebook’s Third-Party Fact-Checking Program are excellent at debunking political claims or viral misinformation. Few, however, have deep health and science expertise in-house, yet they are increasingly being asked to work in these fields.
2) The need to educate science and health communication professionals about the information ecosystem
The current information ecosystem is no longer structured in a linear fashion, dominated by gatekeepers using broadcast techniques to inform. Instead it is a fragmented network, where members of different communities use their own content distribution strategies and techniques for interacting and keeping one another informed.
Scientists and health communication professionals have been in the spotlight this year, and we have to learn the lessons from mistakes that have been made. These include the real-world impact of the equivocation about the efficacy of masks or the dangers of airborne transmission. There is also the need to reflect on the impact of different language choices on different communities. We need to recognize the ways in which the complexity and nuance of scientific discovery lead to confusion, and often inspire people to seek out answers on the internet, leaving them vulnerable to the conspiracy theories that provide simple, powerful explanations. We also need to communicate simply and increasingly visually, rather than via long blocks of text and PDFs.
3) The need to raise awareness about the harm done to communities of color
Explaining methodology and experimental limitations will not address institutional trust concerns ingrained in Black communities. Starting in the 1930s and concluding in 1972, the United States Public Health Service collaborated with the Tuskegee Institute, a historically Black college, to study syphilis in Black men. Those who participated were never informed of their diagnosis and did not receive the free healthcare they were promised. Additionally, doctors declined to treat the participants with penicillin, despite knowing it could cure the disease.
Of the original 399 participants, 28 died of syphilis, 100 died of related complications, 40 of their wives became infected, and 19 of their children were born with congenital syphilis, creating generational harm. These concerns have spread to online spaces, where users fear that Black people will be used as “guinea pigs” when Covid-19 vaccinations arrive.
Earlier this year, concerns were raised again over allegations of forced sterilization and hysterectomies of undocumented women in a for-profit Immigration and Customs Enforcement detention center, building off a long history of unwanted medical testing and eugenics programs. Injustices such as these can lead to increased distrust in both government and the medical health system in Black, Latinx and Indigenous communities. Several science communication initiatives have focused on Covid-19 misinformation in 2020, but health professionals must begin 2021 by acknowledging, appreciating and discussing mistrust.
As research in West Africa later showed, efforts by the World Health Organization, the Red Cross and other global organizations to curb Ebola misinformation were ineffective when they didn’t take into account “historical, political, economic, and social contexts.” Communication around health protocols such as hand washing did not result in behavioral changes because people did not view the action as a priority. Instead, they turned to trusted local sources, such as religious leaders, for direction. A similar dynamic played out in the United States during the first wave of Covid-19 in the spring, when some pastors preached conspiracy theories to their congregations.
In 2018, the Ebola epidemic spread both disease and disinformation in the Democratic Republic of Congo. Citizens blamed foreigners and Western doctors for the spread of the virus, using social media platforms such as Facebook and WhatsApp. Many pushed back against safety precautions, with rumors leading to attacks on hospitals and health care workers.
Researchers, journalists and policymakers must take into account cultural and religious tradition, apprehension toward the medical health industry and government, and the role trusted local leaders play when building effective science communication strategies.
But…remember health and political speech are not mutually exclusive
Back in March, as the social platforms took what looked like decisive action to tackle misinformation, partnering with the WHO, creating information hubs and cracking down on Covid-19-related conspiracies, many observers applauded. But it was clear that the platforms felt a sudden freedom to act around health and science misinformation. The WHO could be the arbiter of truth, unlike fact checkers trying their best to referee political speech. Health misinformation felt like an easier challenge to solve.
By April, the growth of “excessive quarantine” and anti-lockdown communities online demonstrated the naïveté of these conclusions. Health misinformation cannot be disentangled from political speech.
2020 has taught us that we should be focused on the tactics, techniques and characteristics of rumors, falsehoods and conspiracy theories. The same tactics researchers were documenting around elections emerged this year with a vengeance in the context of health and science. So in 2021, let’s learn the lessons collectively, rather than letting political psychologists decide whether misinformation can sway elections, and separately, infodemic managers at public health bodies decide whether memes influence mask wearing.
Understanding how established misleading narratives and strategies can be modified and repurposed to drive politicized agendas can help clarify and focus research, language and sourcing around medicine and climate communication in the new year.
How we prepared future journalists for the challenges of online misinformation
Diara J. Townes is First Draft’s community engagement lead for the US bureau. She developed, planned and guided the US 2020 Student Network from June to November, coordinating support, training and output between volunteers and the organization.
The coronavirus pandemic and the lockdowns that came with it disrupted life for nearly everyone, and student journalists were no exception. After campuses, offices and businesses began shutting down in March, students and recent graduates across the US began to receive word that the summer internships they were counting on would be canceled.
Katrina Janco, a recent graduate of the University of Pennsylvania, applied for a summer internship in journalism. “I got a message a day or so later saying, ‘I’m sorry we aren’t taking anybody because of COVID,’” she said.
Meanwhile, the infodemic — the overload of mis- and disinformation around the coronavirus pandemic — was challenging newsrooms across the country. A looming election threatened to compound the information disorder already faced by journalists. To meet these challenges, First Draft launched a program that would support both student journalists and newsrooms: the US 2020 Student Network.
The First Draft Student Network, with summer and fall sessions that ran between June and November, was an all-volunteer effort with a collaborative objective. More than 40 students — most working in the US but several of them based abroad — researched, tracked and verified online mis- and disinformation, supporting the research needs of First Draft’s community of local and national newsrooms and journalists.
“I think one of the most interesting things about working with First Draft is learning about how misinformation works and how misinformation spreads,” said Lauren Hakimi, a junior at Hunter College. “I feel like when I become a professional journalist and if I were to report on elections, then I’ll know how to do it because I had this experience at First Draft.”
To prepare students for the network, First Draft’s investigative research team trained them on critical topics, including tactics bad actors use to manipulate information, digital verification tools, how to assess a social media post’s spread and virality, and the role of media in the amplification of unconfirmed rumors. The training sessions were recorded to make the learning experience accessible to all who participated.
Volunteers kept track of their research in personalized documents and shared their insights in Slack and on Junkipedia, a growing misinformation database from the Algorithmic Transparency Institute at the National Conference on Citizenship. Their research tasks, which we created and assigned alongside First Draft’s community engagement intern Isabelle Perry, evolved as newsroom needs changed, a frequent occurrence in 2020. Perry checked in on the students’ work every week, noting which assignments piqued student interest and which standout posts were shared into First Draft’s wider Slack community.
At the outset of the project, Megan Fletcher from The University of Texas at Austin uncovered a tweet that said the US Postal Service would not deliver ballots to election officials unless they were sent in envelopes with two stamps. The user captioned the post with, “If you are voting by absentee ballot, you need two forever stamps! The envelope does not make that clear. Remember, two stamps!!” Fletcher’s find was an early example of a prominent post illustrating confusion around mail-in voting, alerting First Draft’s research team to a nascent potential misinformation narrative. As we saw leading up to — and after — the election, the record number of mail-in ballots cast by Americans went on to become the subject of a staggering degree of misinformation.
Misinformation was also driven by fears of political violence months before the election. Ahead of a rally by President Donald Trump in Tulsa, Oklahoma, in late June, Ngai Yeung from the University of Southern California shared tweets from users warning about competing groups of protesters and pro-Trump bikers convening on the rally, stoking fears of a violent clash. The scuffles that ensued at the rally were a reminder of the value of social media newsgathering and verification skills in a context of heightened political tension.
The impact of student research occasionally reached beyond the scope of the US. Sarah Baum from Hofstra University contributed to First Draft’s regular briefings to the United Nations Verified campaign with her findings on the #ExposeBillGates Global Day of Action, when social media actors around the world promoted debunked falsehoods about Gates and vaccines.
Research by the students was showcased in a weekly newsletter, and some of their work was also featured in First Draft’s US 2020 newsletter, which reaches hundreds of journalists across the country. This gave students the opportunity to see how their research informed local media’s understanding of online misinformation.
“I wanted to do something where I felt like I was helping people,” Baum said. “Battling disinformation about a deadly pandemic or about elections is something that I think is a very important public service.”
Over the summer, students shared more than 200 posts in Slack. When the Student Network relaunched in September after a post-summer break and with a new focus on election misinformation, a dozen students contributed more than 100 posts through November 6 and supported both local and national newsrooms as part of ProPublica’s Electionland project.
“The First Draft Student Network was an incredible opportunity for our students to develop critical skills in social newsgathering and verification in a real-world setting,” said Dr. Carrie Brown, the director of the social journalism program at the Craig Newmark Graduate School of Journalism at CUNY. Students in her program joined the Student Network to support First Draft’s partnership with ProPublica’s Electionland project.
“Even though we couldn’t physically gather in a newsroom due to Covid-19, they were able to be a meaningful part of election coverage. At such an important moment in this country’s history, several of them told me they were grateful to have something to do that would make an impact besides just watching Steve Kornacki.”
“Verification has long been at the heart of what journalists do,” she continued. “But understanding how to identify and combat misinformation in a responsible way that avoids inadvertently amplifying it is a critical skill for any journalism student today, and it has helped many of our alumni get hired.”
The skills the volunteers sharpened this summer have already paid off for a few emerging journalists. Lauren Hakimi from Hunter College was selected as a Hearken Election SOS fellow, drawing on the skills she developed as a Student Network volunteer to support a newsroom in Michigan.
“Before First Draft, I didn’t know anything about monitoring, service journalism, or how misinformation spreads,” she said in an email. “I’m so grateful to have had the First Draft experience and I certainly won’t forget it.”
Francesca D’Annunzio, from The University of Texas at Austin, began work as a full-time reporter covering election problems and misinformation in North Texas in September. “A lot of people think that non-citizens are consistently heading to the polls — even in our county. And they are worried about voter fraud via mail ballots,” she shared via Slack, highlighting narratives she saw taking shape over the summer.
First Draft and its partners were humbled by the energy, ingenuity and adaptability of the Student Network volunteers. For many, 2020 didn’t go as planned, but we hope that the emerging journalists who collaborated with us still feel called to work in this challenging but critical profession.
Isabelle Perry contributed research and reporting to this article.
We haven’t announced our plans for Summer 2021 student programs, but interested students should follow @FirstDraftNews for any news to come on spring and summer internships.