
What the false narrative around ‘CDC 6%’ can teach us about reporting Covid-19 stats

Over the last weekend in August, more than 40,000 Twitter users shared what, at a glance, would have been a bombshell: “this week the CDC quietly updated the Covid number to admit that only 6% of all the 153,504 deaths recorded actually died from Covid. That’s 9,210 deaths.” For those skeptical of the dangers of the coronavirus, this was evidence they’d been right all along – and that their government had been lying to them to support a restrictive lockdown. 

Unfortunately, the statistic was misleading. The CDC’s data showed only that most patients had a range of complications (also known as comorbidities), many directly caused by Covid-19, when they died. The statistic’s rapid spread is a prime example of how easy it is to spin data out of context, and of the role the media can unwittingly play in amplifying misinformation.

On August 26, in a weekly update on provisional counts from death certificate data, the US Centers for Disease Control and Prevention said that for 6 per cent of deaths involving Covid-19, no other causes of death were mentioned. Even though this statistic had been visible on the CDC website for over a month, social media users and news organizations shared it as if it were a new finding, igniting a wave of claims that 94 per cent of the deaths purportedly from Covid-19 were really from other causes.

“It is absolutely common practice for there to be multiple causes and multiple diagnoses for a person’s death,” said Stephen Kissler, a postdoctoral fellow at the Harvard T.H. Chan School of Public Health who uses mathematical models to study the spread of infectious diseases.

This is particularly true of Covid-19, which kills in many ways. It causes pneumonia, aggravates hypertension and leads to respiratory failure, among other complications. That is why it’s unsurprising that these causes are listed in the CDC’s comorbidities data for 94 per cent of people, explained Christin Glorioso, a physician and research scientist at MIT. “Saying someone died of pneumonia and not Covid is a misconception, because Covid causes the pneumonia,” said Glorioso.

Kissler said that having a detailed record of all the health conditions at time of death is important for researchers who are analyzing the data. “Essentially, doctors are trying to be specific about not only the fact that a person had Covid, but what specifically about Covid led to the person’s death,” Kissler said. That detail means doctors are being thorough, not that Covid-19 is less deadly than previously thought, Kissler said.

How an old statistic took on new life

One of the first examples we found of a reporter citing these weekly figures was in a July 6 op-ed for Pennsylvania’s Pottstown Mercury. Writer Jerry Shenk cited similar figures from late June and concluded that people were dying “with the virus rather than from it”. On August 7, Twitter user @old_guys_rule2 posted the 6% statistic with the comment, “COVID-19: The largest PSYOPS campaign ever conducted on the American People.” 

Later in August, the statistics began to travel more widely. On August 24, BlazeTV host Steve Deace made a post to his Facebook Page highlighting it, which was reshared more than 1,200 times. Five days later, the narrative made it to the top of Gab’s trending page, and the Facebook account DrElizabeth Hesse DC claimed that the CDC had “quietly updated the Covid number to admit that only 6 per cent of all the 153,504 deaths recorded actually died from Covid.” 

On the same day, Twitter account @littllemel, a member of the QAnon community, tweeted a screenshot of that post, which was later retweeted by President Trump, along with more than 40,000 other accounts. Facebook and Twitter removed the posts by DrElizabeth Hesse DC and @littllemel from their platforms.

Many users questioned the framing of the statistic, including in one tweet shared at least 2,900 times, but it caught the attention of several local newsrooms, which reported it without providing important context. Several local news stations ran headlines similar to one from Cleveland’s Fox 8: “New CDC report shows 94% of COVID-19 deaths in US had contributing conditions.” 

The Atlanta Journal-Constitution published an article with a similar headline. It later removed the piece and published one debunking the statistic, acknowledging that it had “briefly” hosted an article mischaracterizing the data.

The cost of getting it wrong

Setting aside the false conclusions drawn from it, the CDC’s update hardly represented a newsworthy change concerning Covid-19’s relationship to comorbidities. “This [data] has been there. We have always said that an underlying condition was going to make you more susceptible to infection,” said Bruno Rodriguez, a virology Ph.D. candidate at NYU Langone Health.

One particular danger of the media treating this statistic as if it were new, Kissler said, is promoting an idea that science is haphazard and rife with internal contradictions, when in fact past versions of the CDC data show considerable consistency: the agency reported the 6 per cent figure as far back as July 8, and as early as May 12 the number was 7 per cent.

Kissler and Glorioso worry that the reporting also promotes an idea that Covid-19 isn’t deadly just because it kills people with health problems at higher rates. More than 45 per cent of Americans have a risk factor for Covid-19. “That’s a lot of us,” said Kissler, “and I think there’s something dangerous about this disregard for anyone but the absolutely fit and the totally healthy.”

Misinformation often benefits from messaging that is simple, easy to process, and plays into confirmation bias, all the more so when it is based on a kernel of truth. In this way, ostensibly straightforward statistics like “only 6% of death certificates list Covid-19 as the only cause” and “94% of Covid-19 deaths involve underlying conditions,” when presented without the proper context, provide quick relief to those inclined to believe that the coronavirus is not deadly.

For more information, read our series on the psychology of misinformation. 

“The thing that concerns me most about this sort of thing is that it erodes trust in the practical interventions we’re trying to take to keep people safe,” Kissler said. “We do ourselves a disservice in either lessening or overblowing the severity of the disease.” 

Tips for reporting on Covid-19 statistics 

  • Check provenance and primary sources: tools like the Wayback Machine let you compare past versions of pages like the CDC’s and check whether a “new” statistic really is new (see the sketch after this list).
  • For sources, reach out to medical professionals who are working in infectious disease epidemiology.
  • See if those medical professionals have recent publications listed with their institution. This will help you gauge whether they are still active in the field and have the latest information.
  • Exercise caution with data that seems to contradict previously available information about the pandemic.
  • Beware of taking a single nugget of information and elevating it as a new finding.
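
One way to follow the first tip is the Internet Archive’s Wayback Machine availability API, which returns the archived snapshot of a page closest to a given date. Below is a minimal sketch in Python, assuming the requests library is installed; the page URL and the two dates are purely illustrative:

```python
# Minimal sketch: ask the Wayback Machine for the snapshot of a page
# closest to a given date, so you can compare what it said then vs. now.
# The page URL and dates below are illustrative, not a prescribed workflow.
import requests

def closest_snapshot(url: str, timestamp: str):
    """Return the archived snapshot URL closest to YYYYMMDD, or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None

# Compare the page as archived in early July vs. late August 2020.
page = "https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/index.htm"
for day in ("20200708", "20200826"):
    print(day, closest_snapshot(page, day))
```

Fetching and comparing the two snapshot URLs it prints will show whether the statistic in question actually changed between those dates.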

Keenan Chen contributed research to this article. 

‘An unquestionable truth’: Religious misinformation in the coronavirus pandemic

By Jaime Longoria, Daniel Acosta Ramos and Madelyn Webb

On July 30, a Mexican pastor named Oscar Gutierrez broadcast what would become one of the most-watched videos on Facebook about chlorine dioxide solution, an industrial bleach he promotes as a cure and preventive treatment for Covid-19.

“Chlorine dioxide is dangerous — but for whom? For the pharmaceutical companies and corrupt governments,” said Gutierrez in the broadcast on his Facebook Page “Pastor Oscar Gutierrez,” which has almost 220,000 followers. He is also a participant in Facebook’s Stars program, which allows content producers to receive payment directly from their audience; each “star” received translates to $0.01 in revenue going directly to the creator. Participation means Gutierrez’s videos and live broadcasts have ostensibly been evaluated and approved under Facebook’s Community Standards.

Gutierrez went on to claim the solution, known as CDS or “miracle mineral solution” (MMS), is being suppressed so that microchips could be introduced via a vaccine to control people’s DNA. “At least try it because you won’t die,” Gutierrez said later in the video. It has been viewed more than 2 million times and marked by Facebook as false information.

Harmful misinformation about the coronavirus abounds in Latin American Christian communities, with figures such as Gutierrez pushing unproven and potentially dangerous treatments and capitalizing on fear to promote anti-vaccine sentiment. The trusted position of these religious leaders can legitimize potentially dangerous ideas for a large audience via independent Christian news networks and social media.

Read more: The psychology of misinformation: Why it’s so hard to correct 

“A religious leader has a relationship of power where the truth that they transmit, be it about a political or moral decision, is delivered from a position of ascendancy,” said Nicolás Iglesias Schneider, coordinator of GEMRIP, an organization focused on the public role of faith and religion.

“When a leader has a truth that is immutable, it is a truth that is unquestionable because it is endorsed by a deity or is a word sent by God,” said Iglesias. “For the religious faithful in the context of a pandemic where there is less opportunity to check with neighbors and family and where there is less social interaction, people are more vulnerable and are likely to become more radicalized.”

The post labeled as “False Information” by Facebook has more than 2 million views. Screenshot by author.

Latin American Christian communities aren’t the only religious groups to fall victim to misleading claims or outright misinformation about the pandemic. In June, Spanish cardinal Antonio Cañizares Llovera declared attempts to find a vaccine the “work of the devil” that would involve “aborted fetuses” in a filmed Mass shared around the world. Church leaders in Australia raised similar concerns recently, apparently unaware that the practice of using cell lines grown from a fetus in 1972 has been commonplace in vaccine development for decades. 

In India, Hindu religious and political leaders have promoted cow urine as a cure for Covid-19, inspired by the sacred status of cows in Hinduism, and declared the coronavirus would leave India once a controversial temple was completed. Claims that a polio vaccine contained pork products or toxic ingredients, often circulated by Muslim clerics, have damaged the fight against the disease in Muslim-majority Pakistan.

But it is the diversity of coronavirus misinformation in Latin America, its potential for real-world harm and its connection with the Latinx diaspora throughout the United States that make it such a cause for concern.

Magic beans and consecrated oil

Several prominent religious figures have marketed unproven treatments and cures, a common misinformation trope even before the coronavirus pandemic. So-called “snake oil” remedies are so common in part because they often produce a profit, and those hawked in Latin American Christian communities are no exception. 

Valdemiro Santiago, an evangelical pastor who leads the Universal Church of God’s Power, is being investigated by the Federal Public Prosecutor of Brazil for selling beans, at 1,000 Brazilian reais (about $180) each, that he claimed cured the coronavirus. In a YouTube video, he claimed that a medical report had detailed the recovery of a terminally ill patient thanks to the beans.

Similarly, Sílvio Ribeiro, pastor of Catedral Global do Espírito Santo in Porto Alegre in Brazil, is being investigated by Porto Alegre police under suspicion of “charlatanism,” according to Police Delegate Laura Lopes. On March 1, he held a live event that was advertised and broadcast across social media. The flyer for the ceremony read, “Come because there will be anointing with consecrated oil … to immunize against any epidemic, virus or disease!” 

In Porto Alegre, the church announced a service called “O Poder de Deus contra o Coronavírus” (“The Power of God against the Coronavirus”) in which it promised immunization by means of a “consecrated oil” https://t.co/uU7EGwwO4r

— BBC News Brasil (@bbcbrasil) March 2, 2020

“Faced with illness and the possibility of death, it is common for human beings to feel hopeless and helpless,” Angela Rotunno, coordinator of the Operational Support Center for the Defense of Human Rights, told the newspaper Estadão. “This emotional fragility drives away rationality and, as a consequence, makes it easy to believe in any promise of protection or cure. It is what is happening at the moment. Unscrupulous people try to take advantage of this discouragement.”

The Mexican pastor Gutierrez has in turn popularized the work of Andreas Kalcker, a notorious anti-vaccination advocate who claims to be a German scientist and has been a proponent of CDS as a treatment for Covid-19 in Latin America. The bottles of CDS that the pastor promotes on his Facebook broadcasts are labeled with Kalcker’s name and the Bible verse John 10:10, in which Jesus says he provides his flock life and abundance. 

❌Beware of the video claiming that CDS or chlorine dioxide (derived from MMS) cures the coronavirus: there is no proof whatsoever and it can be dangerous #CoronavirusFacts https://t.co/9dFWIfccsR

— MALDITA CIENCIA (@maldita_ciencia) April 6, 2020

Platform of the Antichrist

Beyond phony and potentially dangerous cures, misinformation narratives and conspiracy theories found in global anti-vaccine communities have been adopted enthusiastically by some religious figures in Latin American communities. 

For GEMRIP coordinator Iglesias, this demonstrates the boundlessness of misinformation and its ability to transcend national borders. “There is almost nothing that is strictly from Uruguay or Argentina or the Southern Cone,” he said, “because in reality these all-encompassing narratives, including conspiracy narratives, have become so widespread.”

Pastor couple Miguel and María Paula Arrázola, directors of Iglesia Ríos De Vida in Cartagena, Colombia, shared a May 6 live broadcast with the 394,000 Instagram followers on Miguel’s account, in which their guest Ruddy Gracia, another evangelical pastor, said that “behind the compulsory vaccine there is a chip called ID2020 made by Bill Gates.” The aim, according to Gracia, would be to create a global registry of all those who have been inoculated, registering them via the microchip. “That is the beginning of the platform of the Antichrist, how he will bring about the mark of 666 and will result in you not being able to get a passport, travel, have a license, buy or sell without that chip.” María Paula agreed, saying that is the reason she would refuse vaccination. Gates has become a conspiracy theorist dog whistle, often invoked to promote anti-vaccine narratives.

The Facebook page for Pastor Vienni claimed “a world leader will rise that will say no one can buy or sell, enter or leave a nation if it doesn’t have the mark (the microchip, 666)”. Screenshot by author.

Another example came later that month when Argentine pastors Fernando and Viviana Vienni shared a post calling for opposition to the vaccine, which they claimed is a vehicle for a secret implant originating from supposed Freemasons such as Gates. “They have created a sickness (coronavirus) and via this virus they will say they found the solution!!!” they wrote. “The coronavirus, the microchip (5G) and the vaccine is all a test of the end times!!!!! We are already in those times!!!”

The ID2020 conspiracy theory cited by some religious figures is a common thread among the more extreme elements of the anti-vaccination community. These conspiracy theories have spread among online communities, adapted to new audiences and combined with other narratives. 

Throughout May and June, the same text combining many of these conspiracy theories was copied and pasted in thousands of public posts across Facebook, including the claim that Catholicism would be replaced by a new Satanic religion called “El Crislam.”

The Facebook Page that received the most interactions for its post on the subject bore the name of Yiye Ávila, an influential Puerto Rican televangelist and author known for preaching about the apocalypse. Ávila died in 2013. It is not clear whether the page is official, but it has almost 500,000 followers.

Ávila’s grandson, Miguel Sánchez-Ávila, appears to have followed in his grandfather’s footsteps. A large Facebook page and smaller YouTube channel featuring Miguel promote similarly apocalyptic posts and videos.

“Any conspiracy theory, be it about the origin or solution to the coronavirus, espoused by a powerful charismatic religious actor becomes a very strong truth for the individual who receives it,” said Iglesias. “They incorporate it into their belief system in a much less critical way.”

People in despair

A misleading Covid-19 narrative unique to religious communities, though certainly not unique to Latin American Christian communities, is the emphasis on worship center closures because of the virus. Church leaders decry the shutting of places of worship, citing various other facilities allowed to remain open, and bemoan their “nonessential” status. Though pushback from religious figures eager to reopen does not usually contain misinformation as such, it contributes to the belief that the virus is less dangerous than we are being told. 

In an interview in São Paulo newspaper Estadão, Pentecostal pastor Silas Malafaia expressed his frustration with the closures, saying: “Are people going to die of the coronavirus? Yes. But if there is social chaos, many more will die. Churches are essential to assist people in despair, anguished, depressed, who will not be attended to in hospitals.”

Churches around the world have been blamed for contributing to increased transmission of the virus, yet religious leaders continue to promote the idea that church gatherings are not dangerous, or that the risk is worth it, as Malafaia implied. Other religious groups have challenged local authorities’ orders by holding clandestine services. 

On July 12, an evangelical pastor was arrested in the Chilean province of Arica for holding religious services in an open space with more than 50 people present. In April, Claudia Pizarro, mayor of La Pintana commune south of Santiago de Chile, closed the “Impacto de Dios” church after its pastor, Ricardo Cid, carried out daily services with 30 to 50 people. 

“These are irresponsible attitudes not only of the pastor but of all those who participated,” Pizarro said after the incident. “It is a lack of culture, of criteria, an irresponsibility that this continues to happen.”

Refusal to take the virus as a serious threat, particularly when that refusal comes from a person with influence, is itself a form of dangerous misinformation. When Cid was asked to take a Covid-19 test, he refused. “I don’t have it, because I know I don’t,” he said. “My God would never allow it. Jesus never became infected, even from leprosy.”

The role of the Christian media

Misinformation spreading in Latin American Christian communities has the benefit of an independent Christian media that can amplify narratives that might not appear in other outlets.

CBN Latino, the Spanish-language branch of the massively popular Christian Broadcasting Network with regional branches in Mexico, Guatemala and Costa Rica, has more than 94,000 followers on Facebook and a WhatsApp call line. “Club 700 Hoy,” the Spanish-language version of “The 700 Club,” CBN’s most popular program, has over 193,000 Facebook followers. 

Though CBN’s eponymous outlet rarely publishes outright false information, “The 700 Club” has a history of promoting conspiracy theories and misinformation. CBN’s other properties often provide space for people to express skepticism about certain scientifically supported topics such as climate change or evolution.

Christian outlets work in tandem with religious leaders, retweeting and sharing one another’s content to maintain a media ecosystem editorially independent from the secular press that might otherwise weed out misinformation narratives.

In late June, the Spanish-language Christian outlet Bibliatodo published a story about a hailstorm in China where the ice was supposedly shaped like the coronavirus. Though there is no evidence the photo of the oddly shaped hail is fake, the article cites Israel Breaking News, which connected the hailstorm to the end times and quoted a rabbi claiming the storm was divine intervention.

The website Bibliatodo Noticias reported that hailstones allegedly in the shape of the coronavirus had fallen and related them to the “seventh plague” in the Bible. Screenshot by author.

The pandemic has also pushed religious groups to experiment with various forms of mass communication. It is no longer exclusively the neo-Pentecostals and fundamentalists who have taken to broadcasting their services live to audiences outside their communities.  

Iglesias notes that an increase in the reliance on social media, including the use of streaming, Zoom and YouTube during the pandemic, has broadened the reach of religious messaging. “Because of this, religious speech and its political impact are no longer limited only to the temple and a direct audience,” he said. And these religious figures have amassed large online followings, some with incredible speed. 

In Colombia, Miguel Arrázola, pastor of the Ríos De Vida church, increased his Facebook page interactions by 233 per cent from the last week of February to the last week of March, according to data from Facebook-owned social monitoring tool CrowdTangle, coinciding with the announcement of lockdown measures.

Gutierrez, the Mexican pastor, created his Facebook page on May 7, in the midst of the pandemic. In just three months, he attracted more than 219,000 followers. His most popular video promoting chlorine dioxide has reached some 2.2 million views and 57,000 shares, drawing so much attention that Facebook labeled the post as false information. In total, Gutierrez’s videos have been viewed around 10 million times. 

Community outreach

Previous research has shown that the proliferation of religious-based misinformation is not limited to Latin American Christian communities. Religious communities of all types are susceptible to misinformation, and public health outreach in faith communities is an important factor in addressing the pandemic. 

For Iglesias, the need to provide science-based information and teach critical thinking is paramount. Still, the challenge comes with confronting misinformation among a distressed population where individuals are more likely to cling to faith, a substance or a religion. 

“I think what is important is to prioritize information that is scientifically validated,” Iglesias said. “And to that end, civil society … can promote the responsible use of information.”

The real-world consequences of this form of false information have recently been highlighted by two poisonings and two reported deaths in Argentina from CDS, despite the insistence of its proponents, such as Pastor Gutierrez, that it is safe.

Juan Andrés Ríos, 51, died August 11 after ingesting a liter and a half of the prepared solution in two days in an attempt to treat symptoms that were reportedly similar to those of Covid-19. 

“My brother didn’t know whether or not he had coronavirus, but he had symptoms,” said Ríos’s sister in an interview with Radio10. “In the desperation to cure himself, he drank chlorine dioxide. No one forced him to do so, but he made that decision after watching a video that said it cured coronavirus.” Ríos bought the CDS via Facebook, according to his sister.

Another death occurred August 15, when a 5-year-old was brought to a hospital without vital signs after his parents had administered a “preventative” dose of CDS. The child died of multiple organ failure, resulting in an investigation by the Neuquén prosecutor’s office. 

While religious communities provide a dangerous vector for misinformation, they also present an opportunity to combat it. Addressing misinformation emanating from Christian sources in Latin America could ripple throughout the communities their congregations come from. When that misinformation is potentially so dangerous, working to inoculate populations against it is more necessary than ever.

This article is part of a series tracking the infodemic of coronavirus misinformation.


Coronavirus: How pro-mask posts boost the anti-mask movement

When masks became mandatory in a number of countries in July, it sparked a lively conversation online. Videos of furious individuals hurling abuse at mask-wearing shop assistants went viral and anti-mask protests were covered in nightly TV news bulletins. A cursory look at social media and much of the coverage might make it seem that there was a sizable movement opposed to mask wearing, locked in furious debate with those criticizing or mocking them. 

“#NoMasks ……… Now our high streets and shops will be dehumanising, dystopian hell-holes,” one UK user wrote after a government announcement making face coverings compulsory in shops. “Why is everyone losing their shit about having to cover their faces to protect vulnerable people? Those saying #NoMasks need to stop being selfish,” posted another.

Yet on closer examination, it becomes clear that what looks like a two-sided debate is in many countries a small minority provoking a backlash that ends up amplifying their messaging, raising their profiles and possibly introducing more people to a range of conspiracy theories. 

Read more: First Draft’s guide to responsible reporting and ethics in coronavirus coverage

There is undoubtedly a small and vocal community of people opposed to mask-wearing, often spreading misinformation and conspiracy theories related to face coverings and the pandemic. But the structures of social media, and the media’s focus on these online movements, mean these stances have been given inflated weight. First Draft research has found that while the anti-mask movement is loud, those who broadly support mask-wearing and reject the misinformed narratives appear to outnumber those who challenge mandatory mask orders.

This dynamic became clear in the UK on July 14 when the hashtag #NoMasks, used by accounts opposed to mask wearing, trended on Twitter, creating the impression that there was a groundswell of opposition to mandatory masks.

First Draft analysis of more than 8,000 Twitter accounts posting the hashtag found the majority of people using it on the day were in fact promoting the use of face masks, with pro-mask messages far more common than anti-mask ones.

Surveys of the public from the UK Office for National Statistics suggest that 96 per cent of people in the country are wearing masks in shops, with adherence increasing weekly. Yet a glance at Twitter’s trending topics in mid-July seemed to suggest a surge of activity opposing masks.

“If someone’s on Twitter they could think this means lots of people are against masks,” said Jonathan Bright, a senior research fellow at the Oxford Internet Institute.

In fact, on the day the hashtag was trending, First Draft research shows the majority of posts retweeted and liked on the #NoMasks hashtag were supporting the use of masks. Analyzing a sample of all the tweets using the #NoMasks hashtag that were retweeted at least ten times each on July 14, First Draft found that posts lambasting others for refusing to wear masks attracted 88 per cent of the total retweets and nearly 90 per cent of the total likes. 

These posts were retweeted more than 43,000 times and received more than 166,000 likes in 24 hours alone. More than seven per cent of the tweets featured #wearadamnmask, #wearamasksavealife and #wearamask, the most common of the hashtags signaling a position for or against masks, while “selfish” and “idiots” were among the top accompanying words in all tweets on July 14.

The snapshot from the UK reflects what First Draft has seen in other countries, where a noisy online minority opposed to wearing masks provokes a majority who are angry about the refusal to follow protective rules and guidelines. Wearing a face covering has become a potent political symbol, marking a divide between very different attitudes to the pandemic and measures introduced to control it. 

There are, of course, mainstream political positions that provide reasons for opposing mandatory mask wearing. While some concerns around stringent restrictions and directives shade into conspiratorial thinking, many of those rejecting masks appear to be genuinely concerned that mandates are an assault on their civil liberties.

Uncertainty over health guidance may also have contributed to differences in attitudes, particularly as official advice has evolved. On June 5, the World Health Organization reversed its stance that there was insufficient evidence suggesting masks prevented coronavirus spread, advising that face coverings be worn where social distancing in public is not possible.

Unclear guidelines have also plagued lockdown restrictions and mask adherence in the US and Europe. “[UK] government guidance on masks has been a bit all over the place,” said Bright, adding that even reasonable people have questioned their efficacy.

Yet those most active in opposing mask wearing, and often in spreading anti-mask misinformation online, appear to come from increasingly motivated political fringe communities (unofficial groups holding shared beliefs or interests different from those of the wider population) and from those building face coverings into larger existing conspiracy theories.

Out on the fringe

From claims that masks contain 5G antennas to assertions that mandatory mask orders pave the way for mandatory vaccinations, mask recommendations and rules have prompted a range of conspiracy theories. Many of the communities rejecting mask mandates are uniting over beliefs that measures are unnecessary or too strict, creating a patchwork of communities believing in anti-lockdown, anti-state and anti-vaccine narratives.

Analyzing use of the #NoMasks hashtag during July 2020, First Draft found that posts promoting misinformation about masks gained little traffic and remained mostly within fringe communities. But while their views remain fringe, backlash against them is boosting their profiles. 

The hashtag #kbf, an acronym for anti-lockdown movement Keep Britain Free, was the second most common after #covid19 in the 33,450 tweets First Draft collected in July. It accounted for just three per cent of the total tweets, highlighting the limited impact of some of the most active anti-mask campaigners. Yet its message will have received a boost from its inclusion under the trending hashtag. The group’s founder, Simon Dolan, has raised more than £225,000 for a legal challenge against the UK lockdown and what he claims is increasing government control.

Other commonly associated hashtags, such as #plandemic and #scamdemic, reference the viral “Plandemic” video, which featured several false claims about the virus and vaccines, conspiracy theories and misinformation that the pandemic is a hoax propagated by world leaders. #nonewnormal, a phrase adopted by groups that reject mandatory regulations, appeared in 437 tweets.

“The anti-mask movement is clearly trying to build off an existing audience base which is skeptical about the existence of Covid,” said Chloe Colliver, a disinformation researcher and head of digital policy at the Institute for Strategic Dialogue (ISD).

Anti-mask disinformation in the UK “takes some leaves out of the US playbook,” where face masks have been a contentious political issue, said Colliver. 

#freedom appeared in 419 tweets alongside #nomask, in what Colliver said was a sign that libertarian and so-called “culture war” language has been imported from the US to Europe in recent years, first through the 2016 Brexit vote and now the coronavirus pandemic.

Some of the most active groups in the UK opposing the use of masks started life mobilizing against lockdown measures. Facebook Group Stand Up X, which hosts anti-mask protests across Britain, has attracted more than 22,000 members since its creation on July 5. Other than Stand Up X, most groups sharing anti-mask content in the UK have fewer than 10,000 members each.

Pointing to past research suggesting members of far-right groups are more likely to be climate change skeptics, Bright notes how harmful online communities have historically found common ground. This is more likely during times of crisis, which typically evoke feelings of uncertainty and lack of control.

“[The pandemic] is fertile ground for them to cluster together,” said Bright.

Two posters advertising anti-mask rallies in the UK. Screenshot by author.

The ‘sanitary dictatorship’

France is one of the countries where we see conspiratorial mindsets among anti-mask movements similar to those in the UK. A French-language movement opposing masks is coalescing around claims that stringent measures are part of a wider, long-term move to a totalitarian system.

Civil liberty concerns ostensibly form a core part of online discussions opposed to masks in France. First Draft’s analysis revealed that, among accounts tweeting the French hashtag #StopMasques (“stop masks”), the most common hashtags in their Twitter bios included #jesuispatriote (“I am a patriot”) and #libredechoisir (“free to choose”).

Fifty per cent of tweets containing #StopMasques between July 20 — the day masks became compulsory in France — and August 4 also included the hashtag #stopdictaturesanitaire (“stop the sanitary dictatorship”). Within tweets containing #stopmasques, the top 20 most associated hashtags included #liberté, #statescandal, #stopauxracailles (“stop the thugs or hoodlums”), #fakepandemic and #reveillezvous (“wake up”).

Mandates on masks have also spawned anti-vaccine disinformation in France. A blog post claiming that all events surrounding the pandemic were designed to enable mass vaccination was shared on Facebook more than 9,400 times, according to CrowdTangle, Facebook’s content tracking tool. The narrative has also taken off within memes claiming masks are a “test” for stricter measures, such as mandatory vaccines — a link that has also advanced through UK anti-lockdown groups.

Despite some instances of anti-mask misinformation flourishing, most Facebook Groups dedicated to the subject in France, as in the UK, have relatively small followings. “Regroupement contre le port du masque obligatoire” (“Consolidation against wearing a compulsory mask”) has more than 13,400 members, while three similar groups have between 1,000 and 5,500 members.

On trend

Twitter’s trending hashtags section presents the most obviously problematic way in which these fringe groups and their often-false claims are being thrust in front of a wider audience. Twitter states in its FAQs that trends are determined by an algorithm and tailored to users based on their activity. They are shaped by the consistency and reach of conversations around a given topic in a particular location, a Twitter spokesperson told First Draft via email. But this varies between trends, so it’s often unclear how many tweets have been posted for a theme to qualify.

“You don’t need that many tweets for it to be a trending topic,” said Oxford Internet Institute’s Bright. “The trending algorithm doesn’t take this into account at all.”

Twitter added that the company wants trends to promote “healthy discussions,” so it periodically prevents certain content from trending. This includes graphic or profane posts, hate speech and other content that violates Twitter’s rules.

The ISD’s Colliver suggested a proactive way for platforms to address misleading use of hashtags is consistent human review. “That wouldn’t necessarily stop a hashtag from being included if it has a more nuanced conversation behind it, but it would at least enable hashtags directly related to disinformation and conspiracy theories to be labeled or demoted from the trending section.”

The anti-mask conversation is a “perfect example” of the need for newsrooms reporting on social media trends to inspect more closely, she said. “Not digging into the nuance of the debate can be extremely misleading in terms of the scale or nature of these online conversations.”

When harmful narratives circulate primarily in niche communities and attract little interest, reporting on them can amplify rumors that would otherwise remain on the fringes. A precise analysis of trends could help journalists and researchers discover whether a given narrative has passed a certain threshold of shares and crossed over into the mainstream.

Instead, the amplification of anti-mask chatter by both social media platforms and the media is giving the impression that the trend is more ubiquitous than it actually is. In reality, it appears that most people support the wearing of masks — so much so that they’re happy to tweet about it and ridicule those who don’t.

Data visualizations by Ali Abbas Ahmadi and Carlotta Dotto.

This article is part of a series tracking the infodemic of coronavirus misinformation.


Note on methodology

We gathered 33,450 Twitter posts containing the #NoMasks hashtag in July 2020 and 1,660 posts with the French hashtag #StopMasques between July 20 and August 4. Both scrapes contained only original tweets, excluding retweets and comments. Some tweets might have been taken down since July, making it difficult to ascertain how comprehensive the sample is.

We analyzed the trend over time and selected the most common hashtags in the tweets. Here is the repository with data and code to reproduce the analysis.

The next step of analysis focused on the tweets posted on July 14 — the day the British government announced that masks would be compulsory in shops — to track when, how and through which accounts #NoMasks went viral.

After filtering out the false positive tweets that contained US- or Australian-focused messages, we manually analyzed the 222 resulting posts that gained at least a little traffic — more than 10 retweets — on July 14. 

We then tagged them based on whether they were “Pro-Masks” or “Anti-Masks” and analyzed both groups by top associated hashtags and by the total number of retweets and likes per hour.
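
As a rough illustration of this tagging-and-tallying step, here is a minimal sketch in Python using pandas. It assumes a hypothetical CSV export with one row per tweet and columns named text, stance, retweets and likes; the actual repository’s layout and column names may differ:

```python
# Minimal sketch of the stance tally described above.
# Assumes a hypothetical CSV with columns: text, stance, retweets, likes.
# File and column names are illustrative, not First Draft's actual schema.
import re
from collections import Counter

import pandas as pd

df = pd.read_csv("nomasks_july14.csv")

# Keep only posts that gained at least a little traffic (more than 10
# retweets), mirroring the 222-post threshold described above.
df = df[df["retweets"] > 10]

# Share of total retweets and likes attracted by each manually tagged stance.
totals = df.groupby("stance")[["retweets", "likes"]].sum()
print((totals / totals.sum()).round(3))

# Most common co-occurring hashtags for each stance.
for stance, group in df.groupby("stance"):
    tags = Counter(
        tag.lower()
        for text in group["text"]
        for tag in re.findall(r"#\w+", text)
    )
    print(stance, tags.most_common(10))
```

Re-running the same grouping over a per-hour timestamp bucket would reproduce the retweets-and-likes-per-hour comparison.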

Vaccine nationalism: How geopolitics is shaping responses to the pandemic

On August 11, the same day Russian President Vladimir Putin announced that a vaccine for the coronavirus was “ready”, Russian Health Minister Mikhail Murashko hailed the development as “a huge contribution … to the victory of humankind over the novel coronavirus”.

It was a grand statement that appealed to global togetherness in the face of a global challenge. And yet, Russia’s approach to the vaccine and other treatments for the coronavirus exemplifies how some governments and populations are responding to the pandemic with the very opposite of an internationalist outlook.

In June, the United States bought up the entire global supply of remdesivir, which can be used to treat Covid-19, ensuring no other country had access to the drug for three months. India initially planned to release a vaccine on its Independence Day on August 15, before retracting the statement. A herbal remedy in Madagascar fueled a wave of nationalist sentiment opposing the World Health Organization across the continent.

Online discussions have often mirrored these events, particularly on the topic of a future vaccine. In some cases, national narratives have encouraged the rejection of international cooperation and been used to promote domestic agendas. In other examples, geopolitical interests and strategic national ties have fueled enthusiasm for or skepticism about specific vaccines or treatments. 

On August 6, the WHO director-general, Tedros Adhanom Ghebreyesus, warned of “vaccine nationalism” and what could be lost if countries failed to work together to suppress the coronavirus:

“Sharing vaccines or sharing other tools actually helps the world to recover together, and the economic recovery can be faster, and the damage from Covid-19 could be lessened.” 

This view is shared by experts such as Ana Santos Rutschman, an assistant professor at the St. Louis University School of Law.

“The pandemic is not a national problem, it’s not geographically contained,” she told First Draft. 

“Nationalism just refers to behaviors and techniques, and often simple things [such] as contractual agreements, that allocate these public goods according to sovereign, geopolitical and geographical lines.”

Rutschman said this may pose a problem for those hoping to stop the spread of the virus, and could lead to vulnerable populations being excluded from potentially life-saving treatments.

“The first one is the public health angle: You don’t fix a pandemic by focusing on vaccine supplies to a restricted number of countries on a priority basis,” said Rutschman. “The chessboard is the entire world and policy can’t really be as skewed as nationalism would have it.”

Throughout the pandemic, our team around the world has been monitoring how geopolitics and nationalism have become sources of information disorder around cures and treatments for Covid-19. Here’s a breakdown of some of these examples.

Russia: The first vaccine?

A scientist works inside a laboratory of the Gamaleya Research Institute of Epidemiology and Microbiology during the production and laboratory testing of a vaccine against the coronavirus disease (COVID-19), in Moscow, Russia, August 6, 2020. The Russian Direct Investment Fund (RDIF)/Handout via REUTERS

Moscow’s rollout plan is much faster than those of other vaccine candidates. The Russian military claimed that the vaccine was “ready” in July, and that Moscow planned to vaccinate medical staff alongside the large-scale Phase III trials, according to The Moscow Times. On August 11, President Vladimir Putin announced that the country had become the first to register a coronavirus vaccine.

Russia’s long-standing relationships and interactions with other parts of the world appear to have affected the way news about its vaccine has been received. India and parts of Latin America have historical ties with Russia and in these regions the vaccine news has largely been celebrated uncritically online. Meanwhile, in Western democracies that have largely viewed Moscow through the antagonistic lens of the Cold War, there has been a greater degree of skepticism. 

One of the clearest examples comes from the different responses from traditional media. Reports about Moscow’s vaccine development from Indian news organizations such as ABP News and NDTV were generally more celebratory in their tone, while Western counterparts such as The New York Times and the BBC were more cautious, highlighting safety concerns around the speed of the roll-out.

The contrast is even clearer on social media, with both greater excitement about Russian progress on a vaccine and a generally celebratory tone. According to data from social media monitoring tool CrowdTangle, there were more than 1,800 posts by Indian Facebook Pages mentioning “Russia” or “Russian” and “vaccine” in the month since July 10, which were shared more than 155,000 times. Multiple text-on-image posts that have been widely shared call the development “good news.”

Even posts that indicate uncertainty over when the vaccine will be available in India are optimistic and claim that the tests have been successful, with one wishing Moscow “Good luck.” A number of Indian accounts were also tweeting about the vaccine, posting memes celebrating Russia’s announcement.

First Draft’s Laura Garcia, a journalist from Mexico, said that Moscow was actively promoting the vaccine across Central and South America before it was ready, “making sure that people in Latin America knew that they were developing a vaccine.” Russia’s promotion of its own tools to tackle the coronavirus has not been restricted to a vaccine.

On July 10, the Russian embassy in Guatemala presented a drug called Avifavir as a possible treatment for coronavirus and offered it up to Latin American countries. The medication has not yet been fully approved for treating the coronavirus, as it is in the final stages of testing, but some media outlets in Bolivia and Mexico reported that it was a “done deal” and that Latin America will be the first to benefit from Russia’s largesse.

The old Cold War dynamic influences discussions in the region, with the rush to produce and supply a vaccine being portrayed as a “new arms race between the US and Russia,” said Garcia. Given America’s failure to contain the virus, a narrative is coalescing around Russia as the “cool” agent, with Putin sweeping in to provide a solution to the pandemic. 

India: ‘Messianic’ Modi

Members of All India Hindu Mahasabha offer cow urine to a caricature of the coronavirus as they attend a gaumutra (cow urine) party, which according to them helps in warding off coronavirus disease (COVID-19), in New Delhi, India March 14, 2020. REUTERS/Danish Siddiqui

Misinformation about the coronavirus has run rife in India, which has one of the largest and fastest-growing outbreaks, surpassing two million cases in early August. A consistent theme of the falsehoods, rumors, unproven treatments and conspiracy theories has been nationalism, specifically the Hindu nationalism that much of the support for the ruling BJP party is based on. 

Niranjan Sahoo, a senior fellow with the Observer Research Foundation’s Governance and Politics Initiative, said Prime Minister Narendra Modi has distracted the country with nationalistic sentiments “in the middle of a pandemic when the major issue is to save lives.”

“People think that he is really in control of the pandemic management, and that everything is going fine […] even when the reality is different.”

Sahoo adds that Modi’s Hindu nationalist rhetoric, combined with his “mastery” of social media, allows him to “peddle any kind of narrative” and has emboldened individuals to spread misinformation linked to religion and nationalism. 

This is evident in the discussions surrounding unproven drugs and traditional remedies that have been promoted in India during the pandemic. Traditional Indian Ayurvedic treatments and alternative remedies have always been popular, but are being extensively promoted over social media as cures for Covid-19. 

In March and April, Hindu activists, including national and regional lawmakers, promoted the consumption of cow urine as a coronavirus cure, inspired by the sacred status of cows in Hinduism. Since the beginning of March, the claim has been shared widely on Facebook, Twitter and WhatsApp, and has included a video of members of a religious body purportedly consuming cow urine. On Facebook alone, there have been hundreds of thousands of shares on posts mentioning variations on the words “cow urine” and “coronavirus” in public Groups and Pages, written in Hindi or English, most of which were posted in March, according to data from CrowdTangle.  

In late June, a popular yogi, Baba Ramdev, released an Ayurvedic medicine called Coronil, falsely claiming that it had a “100 per cent recovery rate within 3-7 days.” The news was widely shared on both Facebook and Twitter, and drew multiple posts of support for the treatment because it was Indian-made, such as one tweet expressing pride at a medicine made using “Hindu techniques.”

Indian nationalism has also seeped into the promotion of modern medical interventions to halt the coronavirus. In early July, the Indian Council of Medical Research (ICMR), the body responsible for the country’s response to the pandemic, claimed that India’s coronavirus vaccine candidate would be released by August 15 — India’s Independence Day — in a move that Sahoo said was a “mix of populism and vaccine nationalism.” The statement was later retracted by the ICMR after several doctors and professionals questioned the deadline, with The Wire reporting that it gave Modi “an opportunity to win political points.”

Most recently, multiple lawmakers, including former union cabinet ministers such as Jaskaur Meena and Arjun Ram Meghwal, and members of parliament such as Pragya Thakur, have claimed that the coronavirus would disappear once a controversial temple to the Hindu deity Lord Ram was constructed. The temple, built on the site of a mosque that was destroyed by a mob in 1992, is a powerful symbol of the Hindu-nationalist message promoted by Modi. 

Madagascar: Organic remedies

FILE PHOTO: Madagascar’s President Andry Rajoelina and his wife Mialy at Iavoloha Palace in Antananarivo, Madagascar September 7, 2019. REUTERS/Yara Nardi/File Photo

In April this year, Andry Rajoelina, the president of Madagascar, promoted an unproven treatment that he claimed would kill the coronavirus. The herbal concoction, called Covid-Organics, is produced from a plant called artemisia that has antimalarial properties.

Rajoelina’s tweet announcing the launch of the drink on April 20 was widely viewed and shared, spurring a months-long online conversation about the product. His later tweets, which included images of himself alongside the leaders of Senegal and Guinea-Bissau, included statements such as “Long live Africa and long live its natural wealth!” and “It’s a united Africa which fights #Covid-19!” Dozens of accounts expressed their pride at an African-made remedy in the comments, and asked others to believe in a cure that had been developed on the continent. 

A number of countries in Africa expressed their interest in the beverage, while the WHO urged caution. In an interview with France 24, Rajoelina claimed that the WHO doubted the remedy because it was made in Africa. “I think the problem is that [the drink] comes from Africa and they can’t admit … that a country like Madagascar … has come up with this formula to save the world,” he said.

This statement was soon used to create a fabricated quote suggesting that Madagascar was leaving the WHO, and that Rajoelina had encouraged other African countries to follow suit. Hundreds of Facebook posts repeated the false claim, including some verified accounts, generating thousands of interactions. 

“The regime believed that it could surf on this new African pride acquired by Malagasies, by presenting this artemisia-based product, Covid-Organics, and hoping to solve the global problem that is the coronavirus pandemic,” Tsiresena Manjakahery, Agence France Presse’s correspondent in Madagascar, told First Draft.

Sociologist Marcel Razafimahatratra told AFP that Covid-Organics created only “serious illusions” that could prolong the pandemic, and that it divided the medical community and, even worse, the country.

“It only further strengthens the cult of personality [around the president], that will neither bring a solution to the fight against Covid-19 nor ‘solve’ the current economic problems and above all, will have an impact on political life in the near future,” said Razafimahatratra. 

United Kingdom: Brexit divisions

A woman walks past a closed shop covered in social distancing signs following the outbreak of the coronavirus disease (COVID-19) in Chester, Britain, August 10, 2020. REUTERS/Phil Noble

The decision to leave the European Union continues to be one of the key fault lines in the UK, and it permeates many online conversations around the pandemic, including discussion of a vaccine against the coronavirus.

Reports that the UK had opted out of the EU vaccine program in July gained widespread traction in online “Remainer” communities that support staying in the EU, fueled by criticism of Brexit and the government. Actor David Schneider, a prominent pro-EU voice, pointed to other recent decisions made by the British government not to participate in EU schemes, including one for PPE procurement, calling Brexit a “suicide cult.”

Yet the news also led to the expression of nationalist sentiment from pro-Brexit sources, with some claiming the UK would develop its own vaccine first and celebrating the development. Leave.EU, the campaign pushing for the UK to leave the EU during the referendum, claimed on its Facebook Page that “Remainers are trotting out that same tired old propaganda — better to be enslaved than emancipated.”

The early positive results of the Oxford trial released July 20 further fueled nationalist narratives. A tweet from anonymous pro-Conservative Party and pro-Brexit Twitter figure “Mason Mills” claimed: “There’s a reason we are called GREAT Britain” while Facebook Page “Brexit News” used the news to crow: 

“So Brexit Britain have [sic] a working vaccine for Covid. Created in Oxford.

“Possibly the most important research so far this century!

“So come on remainers, explain how leaving the EU is destroying British research, On our insignificant little island?

“Has anyone else worked it out yet, remainers are wrong! YET AGAIN”

Likewise, “The Bruges Group,” a pro-Brexit think tank, pointed to the development to say: “We are so much better off out [of the EU].”

Britain is set to complete its exit from the European Union at the end of 2020 and, with no trade deal currently in sight, there is a question mark over future deals for buying and selling goods and services — including vaccines.

Nationalist responses to a global problem

The symbolism of being the first country to develop an effective medical solution to the pandemic cannot be overstated, and with such a huge boost to reputation on the line, it is perhaps not surprising that each national victory on that path — real or imagined — generates a fevered response.

The many thousands of responses to each new development, filtered through national politics and geopolitical concerns, demonstrate the strength of this interplay among politicians, governments and the public. Nationalistic narratives are meant to elicit an emotional response, to encourage people to rally around the flag regardless of the circumstances. In many cases, it is working. 

Indian Prime Minister Modi’s messaging, for example, has helped drive his “messianic” popularity. His approval rating was at 74 per cent at the end of June — the highest of any democratically elected leader — according to a Morning Consult poll, despite the alarming rise of coronavirus cases in the country at the time. When leaders can juice their popularity by shaping medical responses to the pandemic to fit their own agenda, decision making will often be led by politics more than science.

That focus on national responses to what is a global problem, influenced by shifting goals, alliances and animosities, could hamper our ability to recover from the pandemic. 

As the WHO’s Tedros warned, mitigating the worst effects of a disease being felt across the world requires international cooperation. “For the world to recover faster, it has to recover together.”

This article is part of a series tracking the infodemic of coronavirus misinformation.


‘Fake news’ laws, privacy & free speech on trial: Government overreach in the infodemic?

The surfeit of misinformation online during the pandemic has prompted some governments to implement extraordinary measures in an attempt to establish control amid the chaos. Criminalizing the dissemination of “false news,” expanding existing penalties for spreading misinformation, and increasing surveillance are among the actions some authorities have taken. Human rights and media observers warn that such remedies are worse than the problem they seek to alleviate, and that freedom of expression, privacy and the right to protest are disintegrating under the pretext of safeguarding public health.

But governments’ concerns aren’t without reason. Over the past six months, conspiracy theories, bogus cures and partisan finger-pointing online have spilled over into real-world harm: more than 700 dead from alcohol poisoning in Iran, Muslims attacked in India, telecommunications infrastructure vandalized in the UK, and untold numbers of people sick or dead from a virus they thought wasn’t serious. This on top of more than 700,000 deaths from Covid-19 and the gutting of economies around the globe.  

Hungary, Romania, Algeria, Thailand and the Philippines are among the countries that have instituted new laws or invoked emergency decrees giving authorities the power to block websites, issue fines or imprison people for producing or spreading false information during the pandemic. In Cambodia and Indonesia, social media users have been arrested after allegedly posting false news about the coronavirus. In Egypt, a journalist who had been critical of the government’s response to the pandemic and was detained for “spreading fake news” contracted the virus in custody and died before he could be tried. Even in South Africa, where freedom of expression is a constitutional right, politicians criminalized the publication of any statement made “with the intention to deceive any other person” about Covid-19, government measures to address the disease or — in a sign of the country’s grim experience with HIV/AIDS — a person’s infection status.

In a March 2020 statement, United Nations human rights experts urged governments to “avoid overreach of security measures” in responding to the pandemic, and said that emergency powers should be “proportionate, necessary and non-discriminatory,” and not be used to quash dissent.

It’s an exhortation that many are not heeding.

Julie Posetti, global director of research at the International Center for Journalists, said legislation is being misused to justify crackdowns on legitimate speech in a number of countries. 

“There are circumstances where journalists have been detained and fined, for example, in reference to reportage that has been critical of government and that is deemed to be ‘fake news’ because it doesn’t suit the government,” Posetti told First Draft.

But she said that even well-intentioned laws could “inadvertently catch legitimate communication in the net,” effectively criminalizing journalism and undermining fundamental rights.

Whistleblowers have also come under attack, notably the Chinese ophthalmologist Li Wenliang, who was reprimanded for “spreading rumors” about the outbreak in Wuhan before dying from Covid-19, only to receive a posthumous apology. 

“If you can’t have doctors, nurses and other healthcare workers speaking publicly about failures of the system where it’s in the public interest to do so, because they’re afraid of being jailed on so-called ‘fake news’ laws because the government equates criticism with fakery, you have a really serious problem,” Posetti said.

Nani Jansen Reventlow, a human rights lawyer and founding director of the Digital Freedom Fund, said laws governing misinformation affect private individuals as much as journalists. So does increased surveillance, such as contact-tracing software.

“Using apps to track people’s movement has a chilling effect on people being able to share information because everyone knows where they’ve been,” she said.

She cited South Korea’s app as an example of a coronavirus tracker having a particularly deleterious impact on people’s privacy, with individuals able to monitor one another through technology that was found to have serious security flaws.   

In addition to affecting whistleblowers and source confidentiality for journalists, increased surveillance could also affect people’s willingness to exercise their right to assemble.

“Will you actually go to a protest if you know you’re going to be monitored? That particularly applies to those of us who are in a more vulnerable position when it comes to law enforcement,” Jansen Reventlow said. “You never know how it’s going to backfire.”

Internet shutdowns preceding the pandemic — such as those in the Indian-administered region of Kashmir and the Rohingya refugee camps in Cox’s Bazar, Bangladesh — have also impeded access to essential information about the virus and the response to it.

Jansen Reventlow fears emergency measures that stifle civil society and press freedom, such as those implemented in Hungary, will outlast the pandemic. “There is a danger that those emergency powers will not be turned back anytime soon,” she said.

Then there’s the question of whether laws against misinformation achieve their professed purpose.

Posetti said there was a lack of empirical evidence as to whether such laws impeded the distribution of false and misleading information. “But what we can say from prior research is that these sorts of laws do, in fact, chill a broad range of public communication, and that is where the problem lies.”

It’s a concern Jansen Reventlow shares. “The only thing it’s going to do is make it easier for public authorities to clamp down on things they don’t like.”

Instead, she said, governments should consistently and proactively provide accurate, timely information about the situation and the basis for policy decisions, so that people aren’t left to speculate in a vacuum. 

“It’s about finding the right balance. But a thorough debate about where that balance should be found is pretty absent at the moment,” she said.

This article is part of a series tracking the infodemic of coronavirus misinformation.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

Why we need a Google Trends for Facebook, Instagram, Twitter, TikTok and Reddit

Everyone needs access to credible information during a pandemic. Without it, people die.

We are especially vulnerable when we want to know something — such as how to treat Covid-19 — but no credible information exists. At the beginning of the pandemic, confusion about symptoms, causes and treatments reigned. Viral posts claimed a runny nose was not a sign of the disease, or that garlic, alcohol or sunlight were good preventative measures. A range of medicines have been tried and tested, including chloroquine and hydroxychloroquine, favipiravir, remdesivir, azithromycin and dexamethasone. Some were found to be effective, others less so.

If more speculation or misinformation exists around these terms than credible facts, then search engines often present that to people who, in the midst of a pandemic, may be in a desperate moment. This can lead to confusion, conspiracy theories, self-medication, stockpiling and overdoses. 

These invisible moments of vulnerability are known as data voids: when there are high levels of demand for information on a topic, but low levels of credible supply. Data voids were first defined by Michael Golebiewski and danah boyd in 2019, and describe vulnerabilities that emerge from search engines like Google.

When it comes to data voids, a distinction is usually drawn between search engines and social media platforms. Whereas the primary interface of search engines is the search bar, the primary interface of social media platforms is the feed: algorithmic encounters with posts based on general interest, not a specific question you’re searching to answer.

It’s therefore easy to miss the fact that data voids exist here, too: Even though search isn’t the primary interface, it’s still a major feature. And with billions of users, they may be creating major social vulnerabilities.

Suggested searches for ‘vaccine’ on Facebook. Screenshot by author.

If we are to respond to information needs as they emerge, and understand whether they are causing harm, we need a way to monitor them.

Important work has been undertaken in this direction. The International Fact-Checking Network (IFCN) has visualized its members’ fact checks related to coronavirus to help us understand where one form of credible information is being supplied. Amazon’s web-ranking company Alexa has created a dashboard to monitor English-language articles relating to coronavirus that have been shared on Twitter and Reddit. Others, such as gathers.co, have created feeds of relevant articles. Each of these efforts speaks to a societal need that has yet to be met: tracking the flow of credible information in real time.

But while there have been efforts to track the supply of credible information, usually in the form of fact checks or news articles, what we haven’t seen are attempts to bring supply together with demand: what people want to know right now, and what information they’re getting.

First Draft spent recent months building a dashboard to monitor data voids in partnership with the University of Sheffield, looking to find a way to identify where the demand for credible information far outstrips the supply. The results of that research will be published soon, but more urgent is fully understanding the threat these data voids pose to our recovery from the pandemic. 

Social media platforms are search engines

YouTube has famously described itself as “the world’s second most popular search engine.” Though a clever piece of marketing, the claim is an honest one: People search for information on social media as well as on search engines.

With billions of users among them, social media platforms are a primary source of information for many people. But just how much, we don’t know.

Suggested searches for ‘vaccine’ on Instagram (left) and TikTok (right). Screenshot by author.

YouTube allows the public to investigate search interest on its platform, tucked away within Google Trends. Given that interest on YouTube fluctuates independently of interest on Google, it’s important for us to monitor both.

Interest in ‘coronavirus’ on Google Images, Google web search, Google News, and YouTube, April 29-July 25. Source: Google Trends. Screenshot by author.

But we have no such picture on Facebook, Instagram, Twitter, TikTok, Reddit and so on. Despite search not being the primary interface of these platforms, it’s clear that, with billions of users, a large part of our picture of data voids is missing.

What we need from platforms

1. We need a Google Trends for Facebook, Instagram, Twitter, TikTok and Reddit

We have no idea what people are searching for on social media platforms, or what results those platforms are putting in front of people. Clearly, the platforms believe search-based misinformation vulnerabilities exist, because they intervene in certain search results to promote official information.

Twitter (above), Facebook, Google and other networks are putting links to credible information at the top of search results although they are often not as eye-catching as posts from users. Screenshot by author.

However, they don’t provide the transparency to know what people are searching for, how this changes by location, how trends or spikes are emerging in real time, and what information they’re putting in front of people in the search results.

Facebook and Instagram

Information about trends and posts on Facebook and Instagram is accessible via CrowdTangle, the Facebook-owned analytics tool that shows which URLs and posts are resonating. Interest can, to some extent, be inferred from this information.

But there are a couple of issues. First, CrowdTangle only covers public posts, which amount to a small portion of what’s happening on Facebook. Second, it doesn’t tell us anything about searches on the platform or the results those searches return.

With billions of users, and likely many more billions of searches, we’re missing a big part of the picture that could be provided without compromising user privacy.
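For the public posts CrowdTangle does cover, its API can be queried directly. Below is a minimal sketch in Python using the requests library, based on CrowdTangle’s documented posts endpoint; the API token is a placeholder and the search term is illustrative.

```python
import requests

# Minimal sketch: fetch recent public posts mentioning "vaccine" from
# CrowdTangle. Assumes a valid dashboard API token (placeholder below).
resp = requests.get(
    "https://api.crowdtangle.com/posts",
    params={"token": "YOUR_API_TOKEN", "searchTerm": "vaccine", "count": 10},
)
resp.raise_for_status()

for post in resp.json()["result"]["posts"]:
    # Each post carries the publishing account, timestamp and engagement data,
    # but nothing about what users searched for to find it.
    print(post.get("date"), post.get("postUrl"))
```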

Twitter

Twitter already has a trends feature, but there is no dashboard to explore multiple locations. You can only see trends in your location as an individual user, or access the data via its API as a developer.

However, Twitter’s API does not provide information on search interest. Trends refer only to popular hashtags and keywords within tweets, giving a picture of what people feel inclined — and able — to express publicly. Seeking information via search is a very different kind of data point, and we need to monitor those searches, as well as which tweets feature prominently in the results.
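To make that gap concrete, here is a minimal sketch of what the API does offer: trending topics for a location, fetched with the tweepy library. It assumes tweepy 4.x and placeholder v1.1 credentials.

```python
import tweepy

# Placeholders: supply your own Twitter API v1.1 credentials.
auth = tweepy.OAuth1UserHandler(
    "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth)

# WOEID 1 means "worldwide"; cities and countries have their own WOEIDs.
trends = api.get_place_trends(id=1)[0]["trends"]
for trend in trends[:10]:
    # These are popular hashtags and phrases within tweets. Note what is
    # missing: nothing here reflects what people type into Twitter's search box.
    print(trend["name"], trend.get("tweet_volume"))
```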

TikTok

While there are unofficial API wrappers and analytics tools for TikTok, to our knowledge there is no ability to track search trends or results.

Reddit

Reddit’s API allows users to query trending subreddits, but lacks information on trending searches.

Suggested searches for ‘vaccine’ on Reddit. Screenshot by author.
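To illustrate the asymmetry, trending subreddits can be fetched with a single request to Reddit’s public JSON endpoint (as documented at the time of writing); there is no comparable endpoint for trending searches.

```python
import requests

# Reddit exposes trending subreddits as public JSON, but offers nothing
# comparable for what users are searching.
resp = requests.get(
    "https://www.reddit.com/api/trending_subreddits.json",
    headers={"User-Agent": "data-voids-monitor/0.1"},  # Reddit requires a UA string
)
resp.raise_for_status()
print(resp.json()["subreddit_names"])
```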

Bing, Yahoo and Duck Duck Go

Bing represents 13 per cent of the US desktop search market, which amounts to many millions of users and many more searches. Google has set the standard for search engine analytics, but Bing, Yahoo and Duck Duck Go lack the same transparency.

2. We need Google Trends to be more precise

Google is doing important work on addressing data voids. Not only has it set the standard for search engine analytics with Google Trends, it is also working to address data voids directly with Question Hub, a tool designed to identify “content gaps” and work with fact checkers to fill them.

However, a few small changes would greatly improve its effectiveness.

Google Trends should allow Boolean queries 

It’s a small change, but it would have a big impact.

First, we’ll be able to more accurately search for a population’s interest in a topic by aggregating interest in terms in multiple languages. In our research into data voids in Greece, we wanted to search for “coronavirus OR κορωνοϊός.” We weren’t able to do this, so we had to use the English term in every country.

Second, we’ll be able to track topics rather than words. Instead of just searching for the word “vaccines,” we could search for:

(vaccines OR vaccine OR vaccination OR vaccinations) AND (unsafe OR injury OR rushed OR dangerous OR…)

This would mean we could monitor hesitancy around specific narratives, such as vaccine safety.

Google Search, Google Scholar, Google Alerts and other Google tools already accept Boolean queries. Trends needs to as well.
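Until it does, Boolean aggregation can only be approximated in code. Below is a rough sketch using pytrends, an unofficial Google Trends client; because Trends normalizes values to a 0–100 scale within each request, summing terms fetched in one payload is a crude proxy for a true OR, not a substitute for it.

```python
from pytrends.request import TrendReq

# pytrends is an unofficial Google Trends client; terms, timeframe and
# geography here are illustrative.
pytrends = TrendReq(hl="en-US")

# Fetch both language variants in a single payload so they share one
# normalization, then sum them to approximate "coronavirus OR κορωνοϊός".
pytrends.build_payload(["coronavirus", "κορωνοϊός"], timeframe="today 3-m", geo="GR")
interest = pytrends.interest_over_time()

combined = interest["coronavirus"] + interest["κορωνοϊός"]
print(combined.tail())
```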

We need more alerts

We can’t spend all day staring at Google Trends, no matter how fascinating the insights. So we need email alerts when there are spikes and breakouts. Currently, these arrive at most once a week, which is often too late. We need alerts as and when spikes occur.

Email alerts from Google Trends are only provided on a weekly or monthly basis. Screenshot by author.

3. We need to connect interest with results

Let’s say lots of people are searching for information about vaccine safety. The next question is: What results are they getting? Which news stories are featuring prominently? Which are being clicked on?

We need richer information about results if we want to be able to identify data voids. A table showing top and rising results for search terms would greatly increase our ability to monitor where people are being sent, and to hold platforms to account for the information they put in front of users.

Where we go from here

We need different kinds of information as a pandemic progresses. At first we encounter lots of questions about origin. Then we see more claims about remedies and treatments, many of which can cause harm. Eventually, the world turns its eyes to a vaccine.

This is precisely where we’re turning our attention now. It will be critical to monitor the emergent information needs around vaccines and respond to harmful information about vaccines’ safety, morality, freedom, necessity, effectiveness and so on.

We need to track vaccine-related data voids in order to save lives. With changes to Google Trends, we will be able to better track interest in narratives, and direct our responses toward them.

We also need to see social media platforms take action. Early indicators, such as the suggested searches for “vaccine” on Facebook, TikTok, Instagram and Reddit, hint at what is at stake while we’re unable to analyze search interest, results and voids.

We hope some of these actions can be taken in the coming months as we confront the next chapter in this infodemic: harmful information about vaccination.

With special thanks to Pedro Noel, Carlotta Dotto and Rory Smith, who contributed to projects and discussions that led to these recommendations.

Disclosure: The Google News Initiative and the Facebook Journalism Project provided some of First Draft’s funding for 2020, but third parties have no input on editorial decisions or production. Find out more about First Draft’s funding, mission and values on our About page.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

Misinformation in your backyard: Insights from 5 US states ahead of the 2020 election

This is an edited extract from First Draft’s new report on local misinformation in the United States ahead of the 2020 election. Download the full report (PDF): “Misinformation in your backyard.”

“All politics is local,” so the saying goes. The same could be said for misinformation. The 2016 elections brought stories of Macedonian teens pulling quick profits and Russian agents seeding polarization across the United States. But 2020 is teaching us that whatever the origins of a rumor, misleading meme or photo, it is the local twist and organic amplification that give it power — often leading to impact offline.

In five states that will be key in the upcoming US election — Colorado, Florida, Michigan, Ohio and Wisconsin — First Draft has collected dozens of examples of information disorder playing out via private Facebook Groups, text messages and other platforms.

The case studies, authored by Shana Black, Serena Maria Daniels, Sandra Fish, Howard Hardee and Damon Scott, are:

  • Colorado: Voter fraud claims about mail-in ballots
  • Florida: Viral photos misinterpreted
  • Michigan: Data mining and transparency in advertising
  • Ohio: Doxing and harassment of health officials
  • Wisconsin: Conspiracy theories about government surveillance

In an echo of national trends, local influencers and elected officials — state representatives, sheriffs and political candidates — play a key role in amplifying and spreading misleading or harmful information about the pandemic and other issues. Public confusion, whether about the process of mail-in voting or the efficacy of mask-wearing, provides fertile ground for misinformation and distrust.

While local news organizations enjoy more public trust than national sources, and are well-positioned to provide information that counters information disorder, they are under increasing financial stress. Even before the economic burden of the pandemic, local newsrooms had been contracting and shutting down, driven in part by the migration of advertising dollars to social media platforms and resulting in local news deserts. And even when newsrooms were financially stable, their staff lacked diversity. According to recent research by Gallup and the Knight Foundation, more than two-thirds of Americans think it is important for the media to represent the diversity of the US population, but nearly 40 per cent think the media is doing a poor job with diversity efforts.

All these trends have worsened during the pandemic. The Poynter Institute is keeping a continually updated list of newsrooms that have cut services and staff in recent months. One estimate puts the number of news jobs lost at 36,000, even as audiences have grown, with the public seeking answers to local questions.

First Draft has dedicated its 2020 US program to training local reporters and increasing resources for combating local information disorder. In a tour of 14 states, First Draft extended its training on responsibly tracking and countering local misinformation to more than 1,000 local reporters.

In March, First Draft launched the Local News Fellows project, supported by Democracy Fund, training and investing in five part-time reporters embedded in their communities to serve as central resources in their state. The driving concept: In today’s challenging environment, many local newsrooms lack the resources to devote staff members to tracking local information disorder. But through collaboration, they can share resources and encourage on-the-ground efforts, bringing newsrooms together. The material in this report was all sparked by their daily monitoring of local online conversations.

First Draft has prepared case studies on five examples from Colorado, Florida, Michigan, Ohio and Wisconsin. They are small snapshots of information disorder in these particular states, but they also paint a broad picture of how the same themes and tactics cross state borders and flourish nationally.

Download the full report.

This part of First Draft’s US 2020 project was made possible by the generous donation and vision of the Democracy Fund.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

Tracking the infodemic: Charting six months of coronavirus misinformation

“The 2019-nCoV outbreak and response has been accompanied by a massive ‘infodemic’ — an over-abundance of information — some accurate and some not — that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”

 – WHO Situation Report, February 2, 2020.

The World Health Organization held the world’s first infodemiology conference at the end of June. Rather than the usual mixture of seminars, lanyards and networking that accompanies most industry events, experts dialed in from their homes and offices to discuss the details of the problem over video chat. 

In his keynote speech, Saad Omer, director of the Yale Institute for Global Health, urged viewers to think of the policy and practice of combating the “human catastrophe” of the pandemic “from a consequentialist perspective.”

“What we do is mainly judged by the outcomes that it generates, the outcomes of the activity itself,” he explained.

“I would submit that in a pandemic, where we are dealing… with the fierce urgency of now. That’s the imperative we should follow.”

The mainstream conversation about misinformation developed in the wake of the 2016 US election and has largely focused on politics, where its consequences are often relative, subjective or unknowable. Did Russian “fake news” swing it for Trump? It’s a question many studies have tried and failed to answer.

The consequences of misinformation about the coronavirus have, for some, been death. At least 44 people in Iran died in the early weeks of the pandemic from drinking bootleg alcohol in the mistaken belief it would protect them. Gary Lenius, a man in his sixties from Phoenix, died after drinking fish tank cleaner he mistakenly believed contained hydroxychloroquine, the drug promoted by President Donald Trump, which has since been dismissed by scientists as ineffective against Covid-19.

In Texas, a man in his thirties was reported in early July to have attended a “Covid party” held by someone who had been diagnosed with Covid-19 to test whether others would become infected and to “see if the virus is real,” according to Jane Appleby, chief medical officer of Methodist Healthcare in San Antonio.

“Just before the patient died they looked at their nurse and they said, ‘I think I made a mistake. I thought this was a hoax and it’s not,’” said Appleby.

We don’t know how this man had been led to believe the pandemic was a hoax, but we can assume that he found those arguments more compelling than the news reports, death tolls and stories of tragedy, which have dominated 2020.

We recently published a series of articles delving into the psychology behind this behavior, which is at the core of the infodemic problem. The coronavirus poses a serious threat to life but, seven months since doctors first warned the WHO of a dangerous new type of coronavirus, we still don’t know everything about Sars-CoV-2, to give the virus its scientific name. That combination of uncertainty and fear has meant many people have tried to find their own answers and, in some cases, it’s killing them.

“Humans have an emotional relationship to information,” said Claire Wardle, First Draft’s US director, at the infodemiology conference. 

“It doesn’t matter how educated you are, we are all susceptible to information that reinforces our worldview, and we are all drawn to that kind of information because it makes us feel something.”

In March, First Draft identified the distinct types of misinformation accompanying the pandemic, summarized most simply as the origin, spread, symptoms, treatments and responses. In the subsequent months these mistruths have snowballed, as existing conspiracy theories about vaccinations or new technology or global elites coalesce around the coronavirus and the attempts to mitigate its effects.

It is six months since that first announcement by the WHO declaring a crisis in information that could derail the fight against a highly infectious virus that kills indiscriminately. At First Draft, those months have been spent tracking the infodemic, investigating the sources and evolution of the misinformation that has circulated so perniciously, working with our network of CrossCheck partners to support newsrooms, and publishing courses, resources, webinars and guides to help as many people as possible protect themselves from the harms of coronavirus misinformation.

Throughout August, First Draft will publish a series of articles highlighting some of the key ways in which the infodemic has produced real-world consequences, how we can understand it, and what we can do about it.

“The difference between the facts and the truth” is the first in this series, published on our Medium publication, Footnotes. Tommy Shane, First Draft’s head of impact and policy, addresses the different modes of understanding that underpin how we approach knowledge and what that can tell us about conspiracy theories, misinformation and the worrying growth of those who resist measures to halt the spread of the coronavirus.

Much like the pandemic, we need to understand the infodemic if we are going to address it. We are still a long way from a solution for either. This series will hopefully be another step along the path to fixing the second of those problems, one that could have a significant impact on the first.

This article is part of a series tracking the infodemic of coronavirus misinformation.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

The psychology of misinformation: How to prevent it

The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. It’s what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.

In the third part of this series on the psychology of misinformation, we cover the psychological concepts that are relevant to the prevention of misinformation. As you’ll have seen from the psychology of correcting misinformation, prevention is preferable to cure. 

Here we explain the psychological concepts that can help us by building our mental (and therefore social) resilience. What you’ll find is that many of the resources we need to slow down misinformation are right there in our brains, waiting to be used.

This is the third in our series on the psychology of misinformation. Read the first, “The psychology of misinformation: Why we’re vulnerable”, and the second, “The psychology of misinformation: Why it’s so hard to correct”.

Skepticism

Skepticism is an awareness of the potential for manipulation and a desire to accurately understand the truth. It is different from cynicism, which is a generalized distrust.

Skepticism involves more cognitive resources going into the evaluation of information, and as a result can lower susceptibility to misinformation. It can be contrasted with ‘bullshit receptivity’ and contributes to Gordon Pennycook and David Rand’s thesis that susceptibility to misinformation derives not from motivated reasoning (persuading yourself something is true because you want it to be), but from a lack of analytic thinking.

What to read next: “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication” by Briony Swire and Ullrich K.H. Ecker, published in Misinformation and Mass Audiences in 2018.

Emotional skepticism

Emotional skepticism is an awareness of potential manipulation through your emotions. It might involve taking a moment to calm down before sharing a shocking but false post.

Despite emotion being a strong driver of shares on social media, and therefore a powerful lever in disinformation campaigns, it is often overlooked in media literacy efforts. More research is needed to understand which techniques can cultivate emotional skepticism, and how this can slow down the sharing of misinformation.

What to read next: “Reliance on emotion promotes belief in fake news” by Cameron Martel, Gordon Pennycook, and David G. Rand, (preprint) in 2019.

Alertness

Alertness is a heightened awareness of the effects of misinformation.

In 2010, misinformation researcher Ullrich Ecker and colleagues found that warning people about the effects of misinformation, such as the continued influence effect, can make them more alert, and that this alertness reduces those effects.

What to read next: “Explicit warnings reduce but do not eliminate the continued influence of misinformation” by Ullrich K.H. Ecker, Stephan Lewandowsky, and David T.W. Tang, published in Memory and Cognition, 38, 1087–1100 in 2010.

Analytic thinking

Analytic thinking, also known as deliberation, is a cognitive process that involves thoughtful evaluation rather than quick, intuitive judgements.

Taking more than a few seconds to think can help you spot misinformation. Misinformation researchers found that “analytic thinking helps to accurately discern the truth in the context of news headlines.”

What to read next: “Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines” by Bence Bago, David G. Rand and Gordon Pennycook, (preprint) in 2019.

Friction

Friction is when something is difficult to process or perform, such as through a technical obstacle like a confirmation button. It is the opposite of fluency. 

Introducing friction can reduce belief in misinformation. Lisa Fazio, a researcher based at Vanderbilt University, has found that if you create friction in the act of sharing, such as by asking people to explain why they think a headline is true before they share it, they’re less likely to spread misinformation.

What to read next: “Pausing to consider why a headline is true or false can help reduce the sharing of false news” by Lisa Fazio, Harvard Kennedy School Misinformation Review, in 2020.

Inoculation

Inoculation, also known as ‘prebunking’, refers to techniques that build pre-emptive resistance to misinformation. Like a vaccine, it works by exposing people to examples of misinformation, or misinformation techniques, to help them recognize and reject them in the future.

Inoculation has been found to be effective in reducing belief in conspiracy theories and increasing belief in scientific consensus on climate change.

What to read next: “Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence” by John Cook, Stephan Lewandowsky, and Ullrich K.H. Ecker, published in PLOS ONE 12 (5) in 2017.

Nudges

Nudges are small prompts that subtly suggest behaviors. The concept emerged from behavioral science and in particular the 2008 book “Nudge: Improving Decisions About Health, Wealth, and Happiness.”

When it comes to building resilience to misinformation, nudges generally try to prompt analytic thinking. A recent study found that nudging people to think about accuracy before sharing misinformation significantly improves people’s discernment of whether it is true. 

What to read next: “Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention” by Gordon Pennycook, Jonathan McPhetres, Yunhao Zhang, Jackson G. Lu, and David G. Rand, (preprint) in 2020.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

Reporting on coronavirus: How to use WhatsApp to find stories and communities

With over two billion users worldwide, WhatsApp has become increasingly fertile ground for the spread of misinformation, and therefore an important space for journalists and researchers to monitor. However, as a closed and encrypted platform where users have a higher expectation of privacy than in public online spaces, there are practical and ethical challenges to accessing conversations there.

First Draft director Claire Wardle, research manager Rory Smith, and ethics and standards editor Victoria Kwan were joined by Fabricio Benevenuto, professor of computer science at the Federal University of Minas Gerais in Brazil, to discuss these issues in two recent webinars. Here are some of the main takeaways.

Ethics and transparency

Although administrators of WhatsApp groups may share an invitation link on their public social media, group members may not be aware that their chat is effectively open for anyone to join. Nor will they necessarily notice a message that a new user has joined. It’s important to consider group members’ expectations of privacy, as well as how much information you will provide about yourself and your reasons for joining the group.

“You should be honest and transparent about what you’re doing all the time because it has real impact on your newsroom or institution,” Smith said. “And it can have real repercussions – people don’t want to be spied on.”

If you work in a newsroom or institution such as a university, consider any codes of conduct or ethics approval processes before diving in. Benevenuto’s research projects on the spread of misinformation on WhatsApp underwent an institutional review board process. When his team gathered data in India, their WhatsApp profile stated that they were researchers. They also anonymized the data they collected so content could not be linked back to specific users. 

Regardless of any formal requirements, make sure you consider the safety of group members when recording data for research or stories. 

Safety

Keeping users’ names, phone numbers, and even a group’s name confidential is important, Smith said: “Not every organization will agree on the same approach, but in my mind, protecting the identity of group members is priority number one.” 

There could be exceptions if there is a public interest in the disclosure, however. For example, if a researcher or journalist discovered that a group was sharing harmful content, they could raise it with the platform as part of the reporting process. A misinformation campaign led by a politician or a corporation could also meet the criteria for public interest journalism, informing decisions about who or what to name and how.

“It’s a talk you’d have to have with your newsroom and investigative team, but I think there [would be] a strong public good argument for using that non-anonymized data,” Smith said. 

The stakes might differ depending on the group and subject, Kwan noted. If joining a chat whose members are political activists in Hong Kong, “the level of transparency and disclosure I should engage in is much higher because of what is at stake for them,” she said. Activists could reasonably hold concerns for their safety in an environment where there were ramifications for criticizing authorities, for example. “Whereas if I were monitoring white supremacists who were planning on spreading misinformation or conspiracy theories, I might not want to reveal to the administrator who I was.” 

If a journalist thought disclosing their identity would put them at risk but there was a strong public interest case for reporting on the group, there could be an argument for privileging the journalist’s privacy over transparency – but this is a difficult ethical issue that should be discussed with editors and colleagues. 

And it’s important to think about your own security. “Don’t make the mistake I did and use your own phone unless you want to get thousands of toxic messages a day,” Smith said. He recommended designating a burner phone exclusively for WhatsApp monitoring. “If you encounter problematic people who start threatening you, at least they don’t have your personal details.”

WhatsApp for research

Some researchers have developed tools for the programmatic collection and analysis of messages, such as the WhatsApp Monitor built by Benevenuto’s team.

The monitor has allowed academics and reporters to track the spread of misinformation in Brazil, India and Indonesia on a far larger scale than could be done through manual monitoring. 

WhatsApp for journalists

You can find links to WhatsApp groups using Twitter’s “advanced search” option or the search tools on CrowdTangle, if your newsroom has access to the platform. Whichever you use, start by launching WhatsApp Web/Desktop so you can see the WhatsApp interface on your computer screen.

How to find WhatsApp groups on Twitter or CrowdTangle

  1. Use the advanced search feature on Twitter or either of the search functions on CrowdTangle to build your search using keywords. In the search box “all of these words” (available on Twitter and through the dashboard search on CrowdTangle), type “chat.whatsapp.com”.
  2. Fine-tune your search with additional parameters such as language or time frame, or by tweaking the keywords. 
  3. Results will show content that includes links to WhatsApp groups. Click the link of a group you want to join, and it will open in WhatsApp Web. For those comfortable with code, the same search can be scripted, as sketched below.
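Here is a minimal sketch of that scripted search, using the official Twitter API v2 via the tweepy library. The bearer token is a placeholder, and the query mirrors the manual search above.

```python
import tweepy

# Placeholder: supply your own Twitter API v2 bearer token.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Find recent tweets containing WhatsApp group invite links.
response = client.search_recent_tweets(
    query='"chat.whatsapp.com" -is:retweet',
    tweet_fields=["created_at", "entities"],
    max_results=50,
)

for tweet in response.data or []:
    # Invite links are often shortened; the expanded URL sits in the entities.
    for url in (tweet.entities or {}).get("urls", []):
        link = url.get("expanded_url", "")
        if "chat.whatsapp.com" in link:
            print(tweet.created_at, link)
```

Remember that the ethical considerations above apply from this point on: joining and monitoring any group found this way should follow your newsroom’s or institution’s standards.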

Check out First Draft’s webinar page and YouTube channel to see all the recent webinars concerning coronavirus reporting.

Stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

First Draft welcomes new US board

First Draft is excited to announce the 2020-21 US Board of Directors. We are incredibly grateful that Sam Gregory, Jesse Ma, Mike Miller, Ifeoma Ozoma, Chris Perry, Vivian Schiller and Carla Zanoni have all agreed to serve on our board.  

Each member comes to the board with a wide range of expertise that we know will be invaluable as we make plans for our future.

Sam Gregory is the program director at WITNESS and an award-winning technologist, media-maker and advocate. For 20 years, Sam has enabled people to use the power of the moving image and participatory technologies to create human rights change. Currently co-chair of the Partnership on AI’s Expert Group on AI and the Media, he focuses on emerging threats linked to AI and mis/disinformation including deepfakes and synthetic media.

Jesse Ma is an adjunct professor of law at Fordham University School of Law. With more than a decade of digital media and startup experience, Jesse specializes in strategic planning and helping clients navigate a constantly evolving legal and business environment. Previously, he was head of partnerships at the South China Morning Post, general counsel at Upworthy, and counsel at Gawker Media.

Michael Miller is a program co-director for the Social Science Research Council’s Just Tech program and senior program officer for its Media & Democracy program. He received a PhD in political science from the City University of New York, where his research explored how mechanisms of censorship, surveillance and propaganda vary across media types in China. He is a 2018-2020 Mellon/ACLS Public Fellow. Prior to joining the SSRC he was an adjunct professor of political science at Hunter College and Hostos Community College of the City University of New York.

Ifeoma Ozoma was most recently Pinterest’s public policy and social impact manager. She led the company’s authoritative vaccine-related search experience, which was lauded by the World Health Organization and The Washington Post’s editorial board. She’s also a member of the Brookings Institution’s Transatlantic Working Group on Disinformation, and a member of The Washington Post’s Technology 202 Network. Before joining Pinterest, she was on the Public Policy teams at Google and Facebook. She received a BA in political science from Yale University.

Chris Perry is the chief innovation officer at Weber Shandwick, a leading global communications and marketing solutions firm. With over 20 years of digital and media experience, Chris specializes in helping clients decode the rapidly changing media environment. His writing and work have been featured in Forbes, Fortune, The New York Times and The Washington Post. Chris authors the Media/Genius newsletter, which focuses on media at the intersection of content and intelligence.

Vivian Schiller is the executive director of Aspen Digital, a program of the Aspen Institute focusing on technology, media, innovation and cybersecurity. Previously, she was global chair of news at Twitter, senior vice president and chief digital officer at NBC News, president and chief executive of NPR, and senior vice president and general manager of NYTimes.com, and has held other executive roles in media.

Carla Zanoni is an award-winning journalist, writer and media strategist. Born in Argentina and a longtime New Yorker, she is TED’s first head of audience development, where she focuses on content and programming strategy, analytics, social media and community development. Carla was the first global audience and analytics editor to be named on the masthead of The Wall Street Journal and is a graduate of Columbia University’s School of General Studies and School of Journalism. 

This Board of Directors supports First Draft News Inc., an independent 501(c)(3) organization, responsible for work completed in the United States. First Draft’s global work is managed by a separate limited liability company, based in London, and its work is guided by an Advisory Board. We are honored to have this group steer our global strategy:

  • Daniel Bramatti, editor of Estadão Data and Estadão Verifica
  • Liz Carolan, executive director of Digital Action
  • Phil Chetwynd, global news director of Agence France-Presse
  • Sam Dubberley, manager of the Digital Verification Corps at Amnesty International
  • Sameer Padania, director of Macroscope
  • Adam Rendle, a partner at Taylor Wessing in the IP/IT group
  • Dan Shefet, founder of Cabinet Shefet

We are grateful to have the global Advisory Board and the US Board of Directors as part of our team during this critical time for First Draft.

The psychology of misinformation: Why it’s so hard to correct

The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. It’s what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.

In the second part of this series on the psychology of misinformation, we cover the psychological concepts that are relevant to corrections, such as fact checks and debunks. One key theme that will resurface is the central problem of correction: Once we’re exposed to misinformation, it’s very hard to get it out of our heads.

If you want a primer on the psychology of correction, we particularly recommend Briony Swire-Thompson’s “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication”.

This is the second in our series on the psychology of misinformation. Read the first, “The psychology of misinformation: Why we’re vulnerable”, and the third, “The psychology of misinformation: How to prevent it“.

The continued influence effect

The continued influence effect is when misinformation continues to influence people even after it has been corrected. In short, it is the failure of corrections. 

Sometimes called “belief echoes”, this is the most important psychological concept to understand when it comes to corrections. There is consensus that once you’ve been exposed to misinformation it is very, very difficult to dislodge from your brain.

Corrections often fail because the misinformation, even when explained in the context of a debunk, can later be recalled as a fact. If we think back to dual process theory, quicker, automatic thinking can mean we recall information, but forget that it was corrected. For example, if you read a debunk about a politician falsely shown to be drunk in a manipulated video, you may later simply recall the idea of that politician being drunk, forgetting the negation.

Even effective corrections, such as ones with lots of detail that affirm the facts rather than repeat the misinformation, can wear off after just one week. In the words of Ullrich Ecker, a cognitive scientist at the University of Western Australia, “the continued influence effect seems to defy most attempts to eliminate it.”

Most crucially, it means that when it comes to misinformation, prevention is preferable to cure.

What to read next: “Misinformation and Its Correction: Continued Influence and Successful Debiasing” by Stephan Lewandowsky, Ullrich K.H. Ecker, Colleen M. Seifert, Norbert Schwarz and John Cook, published in Psychological Science in the Public Interest, 13 (3), 106–131 in 2012.

Mental models

A mental model is a framework for understanding something that has happened. If your house is on fire, and you see a broken Molotov cocktail, you might reasonably build a mental model that the fire was caused by an attack. If a fireman corrects you, saying that it wasn’t caused by the Molotov cocktail in front of you, you’re left with a gap in your mental model — specifically, what caused the fire.

This means that corrections need to also fill the gap that they create, such as with an alternative causal explanation. This is tricky, though: Replacing a mental model is not always possible with the available information.

What to read next: “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication” by Briony Swire and Ullrich K.H. Ecker, published in Misinformation and Mass Audiences in 2018.

The implied truth effect

The implied truth effect is when something seems true because it hasn’t been corrected. 

This is a major problem for platforms. When corrections, such as fact checks, are applied to some posts but not all of them, it implies that the unlabeled posts are true.

Gordon Pennycook and colleagues recently presented evidence that the implied truth effect exists when misinformation is labeled on some social media posts but not others.

What to read next: “The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warning” by Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand, published in Management Science in 2020.

Tainted truth effect

The tainted truth effect is where corrections make people start to doubt other, true information. The risk is that corrections and warnings create generalized distrust of what people read from sources such as the media.

As with the implied truth effect, the tainted truth effect (also known as the “spillover effect”) is a potential problem with labeling misinformation on social media: It can make people start to doubt everything they see online.

What to read next: “Warning against warnings: Alerted subjects may perform worse. Misinformation, involvement and warning as determinants of witness testimony” by Malwina Szpitalak and Romuald Polczyk, published in the Polish Psychological Bulletin, 41(3), 105-112 in 2010.

Repetition

Repetition causes misinformation to embed in people’s minds and makes it much harder to correct.

There are a couple of reasons for this. First, if you hear a statement more than once, you’re more likely to believe it’s true. Repetition can also make a belief seem more widespread than it is, which can increase its plausibility, leading people to the false conclusion that if that many people think it’s true, there’s a good chance it is.

What to read next: “Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus” by Kimberlee Weaver, Stephen M. Garcia, Norbert Schwarz, and Dale T. Miller, published in Journal of Personality and Social Psychology, 92, 821–833 in 2007.

Illusory truth effect

The illusory truth effect occurs when familiarity makes something seem true when it isn’t. 

This can occur with false news headlines even with a single exposure. Exposure can even increase the plausibility of headlines that contradict people’s world views.

What to read next: “Prior exposure increases perceived accuracy of fake news” by Gordon Pennycook, Tyrone D. Cannon, and David G. Rand, published in Journal of Experimental Psychology: General, 147(12), 1865–1880 in 2018.

The backfire effect

The backfire effect is the theory that a correction can strengthen belief in misinformation. It has been broken down into the overkill backfire effect, worldview backfire effect, and familiarity backfire effect, each of which we explain here.

The backfire effect is by far the most contested psychological concept in misinformation research and, while famous, has not been found to be the norm; some doubt it exists at all. Reviewing the relevant literature, Full Fact found it to be the exception rather than the rule. More recently, researchers have concluded that “fact-checkers can rest assured that it is extremely unlikely that their fact-checks will lead to increased belief at the group level.”

However, it still permeates the public consciousness. Somewhat ironically, it has been a difficult myth to correct.

What to read next: “The backfire effect: Does it exist? And does it matter for factcheckers?” by Amy Sippitt, published by Full Fact in 2019.

Overkill backfire effect

The overkill backfire effect is when misinformation is more believable than an overly complicated correction, leading the correction to backfire and increase belief in the misinformation. A correction can be too complicated because it’s difficult to understand, too elaborate, or simply because there are too many counterarguments.

A recent study found no evidence of a backfire from too many counterarguments.

What to read next: “Refutations of Equivocal Claims: No Evidence for an Ironic Effect of Counterargument Number” by Ullrich K.H. Ecker, Stephan Lewandowsky, Kalpana Jayawardana, and Alexander Mladenovic, published in Journal of Applied Research in Memory and Cognition in 2018.

Worldview backfire effect

The worldview backfire effect is when a person rejects a correction because it is incompatible with their worldview, and in doing so strengthens their original belief.

Although, like all backfire effects, there is a lack of robust evidence for its existence, the advice given to mitigate it is still relevant and worth noting. For example, one study advises affirming people’s worldviews when making a correction. Self-affirmation can help, too: One study found that people are more likely to accept views that challenge their worldviews after being asked to write about something they were proud of.

What to read next: “Searching for the backfire effect: Measurement and design considerations” by Briony Swire-Thompson, Joseph DeGutis, and David Lazer, (preprint) in 2020.

Familiarity backfire effect

The familiarity backfire effect describes the fact that corrections, by repeating falsehoods, make them more familiar and therefore more believable.

Briony Swire-Thompson, associate research scientist at Northeastern University, and colleagues found no evidence of a familiarity backfire effect: “corrections repeating the myth were simply less effective (compared to fact affirmations) rather than backfiring.”

What to read next: “The role of familiarity in correcting inaccurate information” by Briony Swire, Ullrich K.H. Ecker and Stephan Lewandowsky, published in Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(12), 1948-1961 in 2017.

Look out for part three and stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.

Introducing an SMS course to prepare for US election misinformation

One of the things we’ve learned from running so many face-to-face and online training sessions in the last five years is that people find it hard to carve out the time in their busy working lives. As November draws near, we’re trying a new format to help people be prepared for election misinformation in a way that fits into their daily schedules.

“Protection from deception,” our free two-week text message course, delivers daily nuggets of training via SMS.

It’s designed to be quick and easy enough to appeal to everyone. Every day, at a time of your choosing, you will receive a little nugget of learning by text message, with some extra video and article links if you want to dive deeper.

The course will give you the knowledge and understanding you need to protect yourself and your community from online misinformation. You’ll learn why people create and share false and misleading content, commonly used tactics for spreading it, what you can do to outsmart it, and how to talk to family and friends about it.

We’re hoping to translate the course into multiple languages, but want to give it a thorough test in English first.

The course is focused on preparation, as a growing body of research shows the importance of inoculating audiences against the tactics and techniques used by those creating and disseminating disinformation. Coronavirus has shown us how damaging misinformation can be, and in the US, with the election around the corner, it’s time to prepare everyone for what they might face online. Understanding the psychology of misinformation is important in fighting it, so please share this far and wide.

Versión en español

English version


For a sneak preview of the course, check out this video on how emotional skepticism can help protect vulnerable communities.

How to use network analysis to explore social media and disinformation

Network analysis has become an important tool for disinformation experts. An increasing number of journalists and researchers are using the practice to analyze the social web and gain insight into the hidden networks and communities that drive information — and disinformation — online.

Take, for instance, open-source intelligence expert Benjamin Strick’s Bolivian Info Op Case Study for Bellingcat, or his uncovering of a pro-Chinese government information operation on Twitter and Facebook. By analyzing these networks, Strick was able to reveal coordination and manipulation, and to shed light on some of the most common tactics behind a disinformation campaign.

Online connections can influence how political opinions take shape, so analyzing these networks has become fundamental.

But there are challenges. Not all datasets identify relationships and it’s up to the journalist or researcher to define these connections. Sometimes you might end up with a visualization that, despite its beauty or complex nature, reveals nothing of interest about your data. 

Although they might seem to be impressive visual content to share with your audience, network visualizations are first and foremost great tools for exploration. They are by no means conclusive charts.

And even though it’s an increasingly common tool in the field, network analysis is a detailed process that challenges even the world’s top academics, and its application requires caution.

A little graph theory

But first of all, how do you define a network? In graph theory, a network is a complex system of actors (nodes) interconnected by some sort of relationship (edges). 

A relationship could mean different things — especially on social media, which are fundamentally made of connections. Network analysis means focusing on understanding the connections rather than the actors.

One of the first examples of online networks was the ‘Bacon number.’ The idea emerged in 1994, when the actor Kevin Bacon said in an interview that he had worked with “everyone in Hollywood.” Today, Google can calculate the Bacon number of any actor in the world and show the chain of connections back to the first node. Actors who have worked directly with Kevin Bacon have a Bacon number of 1, and so on.

On Facebook, networks take shape based on friendships, Pages, and groups in common.

On Twitter, you can also investigate things like hashtags, retweets, mentions, and quotes, as well as whether users follow each other. 

For example, two accounts on Twitter are the nodes of a network, and a retweet from one to the other is the edge between them. If one account retweets the other multiple times, the weight of their relationship will be higher.

In this example, one account (first node) retweets (edge) another account (second node). Screenshot by author.
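
If you want to experiment with this beyond the screenshot, here is a hedged sketch of the same idea in NetworkX; add_retweet is an illustrative helper written for this example, not part of any library:

    import networkx as nx

    # Edges point from the retweeter to the account being retweeted.
    g = nx.DiGraph()

    def add_retweet(graph, retweeter, retweeted):
        # A repeated retweet increases the weight of the existing edge.
        if graph.has_edge(retweeter, retweeted):
            graph[retweeter][retweeted]["weight"] += 1
        else:
            graph.add_edge(retweeter, retweeted, weight=1)

    add_retweet(g, "@account_a", "@account_b")
    add_retweet(g, "@account_a", "@account_b")

    print(g["@account_a"]["@account_b"]["weight"])  # prints 2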

The weight of a relationship is only one of the attributes of nodes and edges.

Another important attribute is the degree (or connectivity) of a node, which describes the number of connections. 

A third is the direction, which helps us understand the nature of a relationship. When an account retweets another, it creates what is called a directed connection, pointing from the retweeter to the account being retweeted. In directed networks, a node has an in-degree value, counting the connections pointing into it (the times an account is retweeted or mentioned by others), and an out-degree value, counting the connections pointing out of it (the times an account retweets or mentions others). Other connections have no direction at all, such as a friendship between two Facebook users.

The in-degree shows the inwards connections, where B, C, D and E are the sources, and A is the target (A was tagged by B). Screenshot by author.

In the out-degree, A is the source and B, C, D and E are the targets (A tagged B, etc). Screenshot by author.
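
Recreating the tagging example from the two screenshots above in NetworkX makes the two measures concrete (the account names are the same invented A to E):

    import networkx as nx

    # B, C, D and E each tag A: four directed edges pointing into A.
    g = nx.DiGraph([("B", "A"), ("C", "A"), ("D", "A"), ("E", "A")])

    print(g.in_degree("A"))   # 4: A is the target of four connections
    print(g.out_degree("A"))  # 0: A tags no one in this sample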

The density shows how well-connected the graph is: the number of connections that actually exist divided by the total number of connections that could possibly exist. The closeness describes the position of a node within the network (how close it is to all other nodes in the network); a high value might indicate an actor holds authority over several clusters in the network. 
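
Both measures are one-line computations in NetworkX, should you want to check your reading of a graph; this sketch reuses the toy tagging network from above:

    import networkx as nx

    g = nx.DiGraph([("B", "A"), ("C", "A"), ("D", "A"), ("E", "A")])

    # Density: edges present divided by edges possible
    # (a directed graph of 5 nodes can have 5 * 4 = 20 edges).
    print(nx.density(g))  # 4 / 20 = 0.2

    # Closeness: how close each node is to all the other nodes.
    print(nx.closeness_centrality(g))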

The position of the nodes is calculated by complex graph layout algorithms that allow you to immediately see the structure within the network. 

The benefit of network visualizations is that they let you look at a system as a whole, not only at a part of it. The challenge is to identify what kinds of relationships you want to investigate, and to interpret what they mean.

Tools for analyzing and visualizing networks

Several tools are available for sketching and analyzing the networks that you produce in the course of your investigations, and some allow you to gather the necessary data directly. 

Here are some of the best free tools:

Neo4j is a powerful graph database management technology, well-known for being used by the International Consortium of Investigative Journalists (ICIJ) for investigations such as the Panama Papers.

Gephi is an open-source program that allows you to visualize and explore graphs. It doesn’t require any programming knowledge. It is stronger for visualization than for analysis, but it can handle relatively large datasets (the practical limit will always depend on your infrastructure).

If you are familiar with coding, you might want to explore Python’s network analysis library NetworkX, or R’s igraph package. Sigma is a great library dedicated to graph drawing if you are comfortable with JavaScript.

The trick with visualization in general is to find the clearest way to communicate the data. 

Prepare your data

First you will need data to analyze, of course, and you can obtain network data in several ways. 

The easiest way is to install the Twitter Streaming Importer plug-in directly in Gephi. It connects to the Twitter API and streams live data in a Gephi-friendly format based on words, users, or location, allowing you to navigate and visualize the network in real time.

But if you want to use historical Twitter data, you will need a scraper (read our tutorial on how to collect Twitter data using Python’s tweepy) and then convert the scraped data into a file format suitable for network visualization using the tool Table 2 Net.
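
As a rough sketch of that collection step, assuming tweepy’s v3-style API (the search method was renamed in later versions) and placeholder credentials:

    import tweepy

    # Placeholder credentials: replace with your own Twitter developer keys.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # Collect recent tweets mentioning Bill Gates and keep who-mentioned-whom pairs.
    rows = []
    for tweet in tweepy.Cursor(api.search, q="@BillGates", tweet_mode="extended").items(500):
        for mention in tweet.entities["user_mentions"]:
            rows.append((tweet.user.screen_name, mention["screen_name"]))

The (mentioner, mentioned) pairs collected here are exactly the kind of two-column data a network tool expects.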

The tricky part can be identifying which columns in the data hold the nodes and which hold the edges. Not all datasets include connections, and you need to make sure the selected columns contain the mentions, hashtags, or accounts you care about, in a clean, tidy format.

Once you have selected the columns to create your network, you can download the network file as .GEXF, the Gephi file format.
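
If you prefer to stay in Python rather than use Table 2 Net, NetworkX can write the same Gephi-friendly format directly; the mention pairs below are invented stand-ins for scraped data:

    import networkx as nx

    # Hypothetical (mentioner, mentioned) pairs, e.g. from the tweepy sketch above.
    rows = [("user_1", "BillGates"), ("user_2", "BillGates"), ("user_1", "BillGates")]

    g = nx.DiGraph()
    for source, target in rows:
        if g.has_edge(source, target):
            g[source][target]["weight"] += 1
        else:
            g.add_edge(source, target, weight=1)

    nx.write_gexf(g, "mentions.gexf")  # open this file directly in Gephi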

Alternatively, you can use the programming language Groovy to generate a GEXF network graph from a JSON file in your terminal, choosing from among mentions, retweets, and replies. Type the following command in your terminal: groovy export.groovy [options] <input files> <output file>

Explore a bit of Gephi

In this example, let’s use a sample that collects mentions of Bill Gates on Twitter, as he has been a constant subject of misinformation and conspiracy theories throughout the pandemic. 

Once you load the GEXF file into Gephi, this is how the network will look:

Network data loaded into Gephi at first glance. Screenshot by author.

You have to play around with filters, parameters, and layouts to visually explore your network and turn this amorphous mass of dots into a meaningful shape.

The program will also give some initial information about the network. In the @BillGates file, for example, there are 19,036 nodes (individual accounts) and 207,780 edges (connections between accounts) — a fairly high number of edges.

Let’s have a quick overview of the menu options in the software at the top:

  • Overview is where you explore the network and apply layouts, filters, and statistics;
  • Data Laboratory displays your dataset in table format;
  • Preview is where you customize your final network visualization.

Here are a few quick actions to start investigating the network:

What is the average degree of connections?

The average degree will show the average number of connections between the nodes. Run ‘Average degree’ under Statistics on the right-hand side to receive a ‘Degree Report’ of the network.

Run ‘Average degree’ under Statistics to receive a ‘Degree Report’ of the network. Screenshot by author.

The average number of connections between the accounts that mentioned @BillGates is 1.092. In other words, an account has, on average, mentioned one other account in this sample. 
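
You can verify Gephi’s degree report outside the program: for a directed graph, the average degree is simply the number of edges divided by the number of nodes. A sketch, assuming the hypothetical mentions.gexf file from the export step:

    import networkx as nx

    g = nx.read_gexf("mentions.gexf")  # hypothetical file from the export step

    print(g.number_of_nodes(), g.number_of_edges())

    # Average degree of a directed graph: edges divided by nodes.
    print(g.number_of_edges() / g.number_of_nodes())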

Which are the main nodes in the network? 

You now want to discover which nodes have the highest number of connections. These could be either directed or undirected connections. For example, it could be interesting to see which Twitter accounts a divisive handle retweets the most, or which accounts mention a divisive account most often.

Change the size of the nodes in Gephi by clicking on ‘Ranking’ on the left and by choosing ‘Degree’ as an attribute. Set the minimum and maximum sizes, and the nodes’ dimensions will change according to their number of connections.

In the Bill Gates example, we are interested in knowing which accounts tag him the most. To do that, choose the ‘out-degree’ attribute, which will enlarge the nodes of the accounts that mentioned Gates most often (their mention connections all point toward his account).

To improve the network’s visibility, choose the layout Force Atlas 2, which uses an algorithm to expand the network and better visualize the communities that are taking shape, as shown below.

Change the size of the nodes in Gephi by clicking on ‘Ranking’ on the left. Screenshot by author.

Are the actors connected to different communities?

The ‘modularity’ algorithm is often used to detect the number of clusters, or communities, within a network, by grouping the nodes that are more densely connected. 

Hit the Run button next to Modularity under the Statistics panel. Doing so will yield a modularity report, which often has quite interesting details for analysis.

At this point, use the partition coloring panel to color the nodes based on modularity class and apply a palette of colors to them.

Use the partition coloring panel to color the nodes based on modularity class and apply a palette of colors to them. Screenshot by author.
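
Gephi’s modularity statistic is based on the Louvain method. If you want a comparable step in code, NetworkX ships its own modularity-based community detector; this is a sketch, not a reproduction of Gephi’s exact algorithm, and it reuses the hypothetical mentions.gexf file:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    g = nx.read_gexf("mentions.gexf").to_undirected()

    # Group the nodes that are more densely connected to each other.
    communities = greedy_modularity_communities(g)
    for i, community in enumerate(communities):
        print(f"Community {i}: {len(community)} accounts")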

The next step is to learn something from the graph you have created. By right-clicking on a node, you can see its position in the Data Laboratory, along with the newly created columns displaying its degree and modularity values. While the visualization helps you see the global picture, here you can explore the data manually.

You can also add labels to the graph to display the names of the nodes.

In the preview tab, you can choose different options to visualize the network. You can add labels, change the background color, or play with sizes of nodes and edges.

If the graph is too crowded, use filters to show fewer nodes. For example, you can filter by in-degree or out-degree, depending on what you are interested in highlighting (a code sketch of the same filter follows the screenshot below).

In the preview tab, you can choose different options to visualize the network. Screenshot by author.
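
In code, the same kind of degree filter is a one-line comprehension; the threshold of 10 below is arbitrary:

    import networkx as nx

    g = nx.read_gexf("mentions.gexf")

    # Keep only the accounts mentioned at least 10 times (an arbitrary threshold).
    busy = [node for node, degree in g.in_degree() if degree >= 10]
    print(g.subgraph(busy))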

Remember that data visualizations are a great way to make complicated topics more accessible and engaging, but they can be misleading if they are not well-designed and easy to understand.

Make sure your network graph visualizes exactly what you are trying to express, and add clear annotation layers to aid in reading it. And don’t force a network graph at all costs: sometimes a simpler chart that conveys its meaning more effectively is worth thousands of nodes.

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.

The US protests have shown just how easily our information ecosystem can be manipulated

Those who study misinformation are often asked to attribute misleading content to particular actors — whether foreign governments, domestic hoaxers, or hyper-partisan media outlets. Actors deliberately promoting disinformation should not be ignored; however, the recent US protests have demonstrated that focusing on specific examples of disinformation can fail to capture the complexity of what is occurring. 

Following the killing of George Floyd, who died after three Minneapolis police officers kneeled on his handcuffed body, hundreds of thousands of demonstrators took to the streets around the world. At the center of their demands were justice for Black people who have died in police custody and alternatives to current criminal justice systems.

In the US, several accompanying narratives were heavily discussed online. Piles of bricks near protest sites had social media users across the ideological spectrum speculating as to their origin; false rumors spread that protesters in DC were targeted by an internet blackout. Elsewhere, images of police officers who joined marches or kneeled with protesters were distributed to and published by several news outlets uncritically and without examination of law enforcement’s motivation. These examples — along with the deeper investigations into the involvement of the white supremacist “Boogaloo” movement, or questions around what exactly is happening in the Seattle Autonomous Zone — all distracted from the reason for the protests. 

Demonstrators relax inside of an area being called the “City Hall Autonomous Zone” that has been established to protest the New York Police Department and in support of “Black Lives Matter” near City Hall in lower Manhattan, in New York City, U.S., June 30, 2020. REUTERS/Lucas Jackson

When demonstrators began to organize in early June, one of the most debated narratives online emphasized “outside agitators” infiltrating the protests. Social media was full of posts claiming that undercover police officers, white nationalist militia members, or organized “antifa” members were responsible for instigating property damage or violence at the protests. Users posted photos of graffiti, claiming the wording was suspicious and questioning whether it had been intentionally placed to sow division. 

But focusing on protest attendees who do not care about addressing police brutality distracted from the demands of organizers who do. The “outside agitators” narrative served a double purpose here in attacking and undermining the protests. In the first instance, it defamed the protesters by attributing some of the violent behavior and destruction to their cause. And in the second, it distracted from that cause by turning attention away from the reasons behind it.

Misinformation researchers have coined the term “source hacking” to describe the process by which “media manipulators target journalists and other influential public figures to pick up falsehoods and unknowingly amplify them to the public.” The nature of the news cycle and the way news is reported mean many outlets could not avoid covering these narratives. Internal and external pressures, both financial and professional, would not allow it. And some of the accounts spreading these narratives, but by no means all, would have had malicious intent. 

What these narratives demonstrated is that the story of the swirling misinformation surrounding the protests is not one with a central villain or organized network of insidious actors. Instead it is a story of how the modern information landscape, made up of news media, social media, and the people who consume media, is vulnerable to manipulation that influences the ways in which events are shaped and discussed. The “source hacking” that occurred in many of these instances was an organic side effect of the complex information landscape, rather than an intentional ploy. This is perhaps even more difficult for decision makers to navigate, and requires careful consideration on the part of news outlets and journalists to determine how to most effectively center the audience’s needs in what is reported. 

After all of our social media monitoring during the protests, it is not possible to blame the “outside agitator” narrative on one bad actor. Our analysis is still ongoing, but as with any moment of shared online attention, bots and sock puppet accounts were very likely to have been pushing out content related to those narratives of protest infiltration. And journalistic mistakes were made: There are examples of outlets poorly framing or mis-contextualizing rumors, giving “outside actors” more legitimacy than the evidence indicated. But identifying insidious networks and media missteps is futile without a simultaneous examination of how our current information landscape is so easily influenced by these disturbances. 

Social media platforms and their algorithms, editorial decision making, and determinations about what to post and share on an individual level all contribute to the visibility of certain narratives, and they work unintentionally in synchrony — often to undesirable results. For example, news outlets used valuable resources investigating a 75-year-old police brutality victim’s ties — or lack thereof — to “antifa,” thanks in large part to the promotion of this false rumor by President Donald Trump. This is just one example from the protests when news outlets spent many hours having to investigate and debunk claims from politicians, police authorities and video evidence from the streets. When newsrooms, particularly local newsrooms where staff are being laid off and furloughed, are focused on this type of work, they are less able to focus on stories that reflect the experiences and needs of their communities. And yet it is difficult to argue that topics exploding on social media are not newsworthy. The feedback loop between social media and traditional media is broken, and the protests exhibit how damaging that has become.  

While the process of misinformation monitoring — locating a piece of false or misleading content and finding its originator — is still useful, and fact checkers play an important role in maintaining a healthy information ecosystem, what is becoming increasingly clear is that we must tackle the problem of misinformation from a macro perspective. It’s no longer enough to tackle each ‘atom’ of misinformation. Misleading narratives sometimes flourish in the modern information ecosystem because of a confluence of circumstances, not because of a well-executed plan. Mitigating misinformation whack-a-mole style will be ineffective if we do not address the infrastructural problems that define the way people receive, process, and share information with their own networks. For journalists, this means carefully examining the urge to report on and debunk specific pieces of misinformation. Any effort to do so should be balanced by robust solutions journalism, emphasizing the social issues affected or manipulated by misinformation. 

Historically, media criticism has focused on gatekeepers. Do legacy news outlets foster sources that understand the communities on which they report? Are they irresponsibly or unintentionally amplifying particular voices? These questions are still relevant, but the internet has dramatically widened the pool of who can disseminate information to the public, making it more difficult to understand the reasons behind poor story framing; media scholars must adjust their lens accordingly. An individual Twitter user sharing information about suspicious protest attendees or promoting the “outside agitators” narrative does not, by itself, obscure the reason for the protests. And yet, as part of a groundswell covered by prominent news outlets, those tweets likely contributed to exactly that. The questions journalists need to ask are not only “who is responsible” or “how do we stop misleading narratives.” Now, perhaps more than ever, newsrooms need to think about how to ensure that audiences and journalists are prepared to navigate a media landscape so susceptible to gaming and manipulation.

Jacquelyn Mason, Diara J. Townes, Shaydanay Urbani, and Claire Wardle contributed to this report.

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.