What the false narrative around ‘CDC 6%’ can teach us about reporting Covid-19 stats
Over the last weekend in August, more than 40,000 Twitter users shared what, at a glance, would have been a bombshell: “this week the CDC quietly updated the Covid number to admit that only 6% of all the 153,504 deaths recorded actually died from Covid. That’s 9,210 deaths.” For those skeptical of the dangers of the coronavirus, this was evidence they’d been right all along – and that their government had been lying to them to support a restrictive lockdown.
Unfortunately, the statistic was misleading. The CDC’s data showed only that most patients had a range of complications (also known as comorbidities), many directly caused by Covid-19, when they died. The statistic’s rapid spread is a prime example of how easy it is to spin data out of context, and the role the media can unwittingly play in amplifying misinformation.
On August 26, in a weekly update on provisional counts from death certificate data, the US Centers for Disease Control and Prevention said that for 6 per cent of deaths involving Covid-19, no other causes of death were mentioned. Even though this statistic had been visible on the CDC website for over a month, social media users and news organizations shared it as if it were a new finding, igniting a wave of claims that 94 per cent of the deaths purportedly from Covid-19 were really from other causes.
“It is absolutely common practice for there to be multiple causes and multiple diagnoses for a person’s death,” said Stephen Kissler, a postdoctoral fellow at the Harvard T.H. Chan School of Public Health who uses mathematical models to study the spread of infectious diseases.
This is particularly true of Covid-19, which kills in many ways. It causes pneumonia, aggravates hypertension and leads to respiratory failure, among other complications. That is why it’s unsurprising that these causes are listed in the CDC’s comorbidities data for 94 per cent of people, explained Christin Glorioso, a physician and research scientist at MIT. “Saying someone died of pneumonia and not Covid is a misconception, because Covid causes the pneumonia,” said Glorioso.
Kissler said that having a detailed record of all the health conditions at time of death is important for researchers who are analyzing the data. “Essentially, doctors are trying to be specific about not only the fact that a person had Covid, but what specifically about Covid led to the person’s death,” Kissler said. That detail means doctors are being thorough, not that Covid-19 is less deadly than previously thought, Kissler said.
How an old statistic took on new life
One of the first examples we found of a reporter citing these weekly figures was in a July 6 op-ed for Pennsylvania’s Pottstown Mercury. Writer Jerry Shenk cited similar figures from late June and concluded that people were dying “with the virus rather than from it”. On August 7, Twitter user @old_guys_rule2 posted the 6% statistic with the comment, “COVID-19: The largest PSYOPS campaign ever conducted on the American People.”
Later in August, the statistics began to travel more widely. On August 24, BlazeTV host Steve Deace made a post to his Facebook Page highlighting it, which was reshared more than 1,200 times. Five days later, the narrative made it to the top of Gab’s trending page, and the Facebook account DrElizabeth Hesse DC claimed that the CDC had “quietly updated the Covid number to admit that only 6 per cent of all the 153,504 deaths recorded actually died from Covid.”
On the same day, Twitter account @littllemel, a member of the QAnon community, tweeted a screenshot of that post, which was later retweeted by President Trump and more than 40,000 other accounts. Facebook and Twitter removed the posts by DrElizabeth Hesse DC and @littllemel from their platforms.
Many users questioned the framing of the statistic, including in one tweet shared at least 2,900 times, but it caught the attention of several local newsrooms, which reported it without providing important context. Several local news stations ran headlines similar to one from Cleveland’s Fox 8: “New CDC report shows 94% of COVID-19 deaths in US had contributing conditions.”
The Atlanta Journal-Constitution published an article with a similar headline. It later removed that article and published a debunk, acknowledging that it had “briefly” hosted a piece mischaracterizing the data.
The cost of getting it wrong
Setting aside the false conclusions drawn from it, the CDC’s update hardly represented a newsworthy change concerning Covid-19’s relationship to comorbidities. “This [data] has been there. We have always said that an underlying condition was going to make you more susceptible to infection,” said Bruno Rodriguez, a virology Ph.D. candidate at NYU Langone Health.
One particular danger of the media treating this statistic as if it were new, Kissler said, is that it promotes the idea that science is haphazard and rife with internal contradictions. In fact, past versions of the CDC data show considerable consistency: the agency reported the 6 per cent figure as far back as July 8, and as early as May 12 the number was 7 per cent.
Kissler and Glorioso worry that the reporting also promotes an idea that Covid-19 isn’t deadly just because it kills people with health problems at higher rates. More than 45 per cent of Americans have a risk factor for Covid-19. “That’s a lot of us,” said Kissler, “and I think there’s something dangerous about this disregard for anyone but the absolutely fit and the totally healthy.”
Misinformation often benefits from messaging that is simple, easy to process and plays into confirmation bias, and it is all the more potent when based on a kernel of truth. Presented without proper context, ostensibly straightforward statistics like “only 6% of death certificates list Covid-19 as the only cause” and “94% of Covid-19 deaths involve underlying conditions” provide quick relief to those inclined to believe that the coronavirus is not deadly.
For more information, read our series on the psychology of misinformation.
“The thing that concerns me most about this sort of thing is that it erodes trust in the practical interventions we’re trying to take to keep people safe,” Kissler said. “We do ourselves a disservice in either lessening or overblowing the severity of the disease.”
Tips for reporting on Covid-19 statistics
- Check provenance and primary sources: tools like the Wayback Machine let you review changes to websites like the CDC’s and check whether a statistic is really new.
- For sources, reach out to medical professionals who are working in infectious disease epidemiology.
- See if those medical professionals have recent publications listed with their institution. This will help you gauge whether they are still active in the field and have the latest information.
- Exercise caution with data that seems to contradict previously available information about the pandemic.
- Beware of taking a single nugget of information and elevating it as a new finding.
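For the provenance tip above, reporters comfortable with a little scripting can query the Internet Archive’s CDX API, which lists every Wayback Machine capture of a page. The sketch below builds such a query and reads the earliest snapshot date out of a CDX-style JSON response; the CDC URL and the canned response are illustrative, and a real check would fetch the query URL over the network.

```python
import json
from urllib.parse import urlencode

def cdx_query_url(page_url: str, limit: int = 5) -> str:
    """Build a CDX API query returning the earliest snapshots of a page."""
    params = {
        "url": page_url,
        "output": "json",             # JSON rows instead of plain text
        "fl": "timestamp,original",   # only the fields we need
        "limit": limit,               # earliest captures first
    }
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

def earliest_snapshot(cdx_json: str) -> str:
    """Return the timestamp (YYYYMMDDhhmmss) of the earliest capture
    in a CDX JSON response. The first row is a header; data rows follow."""
    rows = json.loads(cdx_json)
    if len(rows) < 2:
        return ""  # page was never archived
    return rows[1][0]  # first data row, timestamp field

# A canned response of the shape the CDX API returns, so this example
# runs offline. The URL and date here are illustrative only.
sample = (
    '[["timestamp","original"],'
    '["20200708120000","https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/"]]'
)
print(cdx_query_url("cdc.gov/nchs/nvss/vsrr/covid_weekly/"))
print(earliest_snapshot(sample))  # earliest capture: July 8, 2020
```

If the earliest capture of a “new” statistic predates the current news cycle by weeks, that alone is a strong signal the finding is not new.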
Keenan Chen contributed research to this article.
‘An unquestionable truth’: Religious misinformation in the coronavirus pandemic
By Jaime Longoria, Daniel Acosta Ramos and Madelyn Webb
On July 30, a Mexican pastor named Oscar Gutierrez broadcast what would become one of the most-watched videos on Facebook about chlorine dioxide solution, an industrial bleach he promotes as a cure and preventive treatment for Covid-19.
“Chlorine dioxide is dangerous — but for whom? For the pharmaceutical companies and corrupt governments,” said Gutierrez in the broadcast on his Facebook Page “Pastor Oscar Gutierrez,” which has almost 220,000 followers. He also participates in Facebook’s Stars program, which allows content producers to receive payment directly from their audience; each “star” received translates to $0.01 in revenue going directly to the creator. That participation means Gutierrez’s videos and live broadcasts have ostensibly been evaluated and approved under Facebook’s Community Standards.
Gutierrez went on to claim the solution, known as CDS or “miracle mineral solution” (MMS), is being suppressed so that microchips could be introduced via a vaccine to control people’s DNA. “At least try it because you won’t die,” Gutierrez said later in the video. It has been viewed more than 2 million times and marked by Facebook as false information.
Harmful misinformation about the coronavirus abounds in Latin American Christian communities, with figures such as Gutierrez pushing unproven and potentially dangerous treatments and capitalizing on fear to promote anti-vaccine sentiment. The trusted position of these religious leaders can legitimize potentially dangerous ideas for a large audience via independent Christian news networks and social media.
“A religious leader has a relationship of power where the truth that they transmit is one that, be it about a political or moral decision, is delivered from a position of ascendency,” said Nicolás Iglesias Schneider, coordinator of GEMRIP, an organization focused on the public role of faith and religion.
“When a leader has a truth that is immutable, it is a truth that is unquestionable because it is endorsed by a deity or is a word sent by God,” said Iglesias. “For the religious faithful in the context of a pandemic where there is less opportunity to check with neighbors and family and where there is less social interaction, people are more vulnerable and are likely to become more radicalized.”
Latin American Christian communities aren’t the only religious groups to fall victim to misleading claims or outright misinformation about the pandemic. In June, Spanish cardinal Antonio Cañizares Llovera declared attempts to find a vaccine the “work of the devil” that would involve “aborted fetuses” in a filmed Mass shared around the world. Church leaders in Australia raised similar concerns recently, apparently unaware that the practice of using cell lines grown from a fetus in 1972 has been commonplace in vaccine development for decades.
In India, Hindu religious and political leaders have promoted cow urine as a cure for Covid-19, inspired by the sacred status of cows in Hinduism, and declared the coronavirus would leave India once a controversial temple was completed. Claims that a polio vaccine contained pork products or toxic ingredients, often circulated by Muslim clerics, have damaged the fight against the disease in Muslim-majority Pakistan.
But it is the diversity of coronavirus misinformation in Latin America, its potential for real-world harm and its connection with the Latinx diaspora throughout the United States that make it such a cause for concern.
Magic beans and consecrated oil
Several prominent religious figures have marketed unproven treatments and cures, a common misinformation trope even before the coronavirus pandemic. So-called “snake oil” remedies are so common in part because they often produce a profit, and those hawked in Latin American Christian communities are no exception.
Valdemiro Santiago, an evangelical pastor who leads the Universal Church of God’s Power, is being investigated by the Federal Public Prosecutor of Brazil for selling beans he claimed cured the coronavirus, at 1,000 Brazilian reais (about $180) each. In a YouTube video, he claimed that a medical report had detailed the recovery of a terminally ill patient thanks to the beans.
Similarly, Sílvio Ribeiro, pastor of Catedral Global do Espírito Santo in Porto Alegre in Brazil, is being investigated by Porto Alegre police under suspicion of “charlatanism,” according to Police Delegate Laura Lopes. On March 1, he held a live event that was advertised and broadcast across social media. The flyer for the ceremony read, “Come because there will be anointing with consecrated oil … to immunize against any epidemic, virus or disease!”
In Porto Alegre, the church announced a service called “O Poder de Deus contra o Coronavírus” (“The Power of God against the Coronavirus”) in which it promised immunization through a “consecrated oil” https://t.co/uU7EGwwO4r
— BBC News Brasil (@bbcbrasil) March 2, 2020
“Faced with illness and the possibility of death, it is common for human beings to feel hopeless and helpless,” Angela Rotunno, coordinator of the Operational Support Center for the Defense of Human Rights, told the newspaper Estadão. “This emotional fragility drives away rationality and, as a consequence, makes it easy to believe in any promise of protection or cure. It is what is happening at the moment. Unscrupulous people try to take advantage of this discouragement.”
The Mexican pastor Gutierrez has in turn popularized the work of Andreas Kalcker, a notorious anti-vaccination advocate who claims to be a German scientist and has been a proponent of CDS as a treatment for Covid-19 in Latin America. The bottles of CDS that the pastor promotes on his Facebook broadcasts are labeled with Kalcker’s name and the Bible verse John 10:10, in which Jesus says he provides his flock life and abundance.
Platform of the Antichrist
Beyond phony and potentially dangerous cures, misinformation narratives and conspiracy theories found in global anti-vaccine communities have been adopted enthusiastically by some religious figures in Latin American communities.
For GEMRIP coordinator Iglesias, this demonstrates the boundlessness of misinformation and its ability to transcend national borders. “There is almost nothing that is strictly from Uruguay or Argentina or the Southern Cone,” he said, “because in reality these all-encompassing narratives, including conspiracy narratives, have become so widespread.”
Pastor couple Miguel and María Paula Arrázola, directors of Iglesia Ríos De Vida in Cartagena, Colombia, shared a May 6 live broadcast on Miguel’s account with their 394,000 Instagram followers, in which their guest Ruddy Gracia, another evangelical pastor, said that “behind the compulsory vaccine there is a chip called ID2020 made by Bill Gates.” The aim, according to Gracia, would be to create a global registry of all those who have been inoculated, registering them via the microchip. “That is the beginning of the platform of the Antichrist, how he will bring about the mark of 666 and will result in you not being able to get a passport, travel, have a license, buy or sell without that chip.” María Paula agreed, saying that is why she would refuse vaccination. Gates has become a conspiracy theorist dog whistle, often invoked to promote anti-vaccine narratives.
Another example came later that month when Argentine pastors Fernando and Viviana Vienni shared a post calling for opposition to the vaccine, which they claimed is a vehicle for a secret implant originating from supposed Freemasons such as Gates. “They have created a sickness (coronavirus) and via this virus they will say they found the solution!!!” they wrote. “The coronavirus, the microchip (5G) and the vaccine is all a test of the end times!!!!! We are already in those times!!!”
The ID2020 conspiracy theory cited by some religious figures is a common thread among the more extreme elements of the anti-vaccination community. These conspiracy theories have spread among online communities, adapted to new audiences and combined with other narratives.
Throughout May and June, the same text combining many of these conspiracy theories was copied and pasted in thousands of public posts across Facebook, including the claim that Catholicism would be replaced by a new Satanic religion called “El Crislam.”
The Facebook Page that received the most interactions for its post on the subject bore the name of Yiye Ávila, an influential Puerto Rican televangelist and author known for preaching about the apocalypse. Ávila died in 2013. It is not clear whether the page is official, but it has almost 500,000 followers.
Ávila’s grandson, Miguel Sánchez-Ávila, appears to have followed in his grandfather’s footsteps. A large Facebook page and smaller YouTube channel featuring Miguel promote similarly apocalyptic posts and videos.
“Any conspiracy theory, be it about the origin or solution to the coronavirus, espoused by a powerful charismatic religious actor becomes a very strong truth for the individual who receives it,” said Iglesias. “They incorporate it into their belief system in a much less critical way.”
People in despair
A misleading Covid-19 narrative unique to religious communities, though certainly not unique to Latin American Christian communities, is the emphasis on worship center closures because of the virus. Church leaders decry the shutting of places of worship, citing various other facilities allowed to remain open, and bemoan their “nonessential” status. Though pushback from religious figures eager to reopen does not usually contain misinformation as such, it contributes to the belief that the virus is less dangerous than we are being told.
In an interview in São Paulo newspaper Estadão, Pentecostal pastor Silas Malafaia expressed his frustration with the closures, saying: “Are people going to die of the coronavirus? Yes. But if there is social chaos, many more will die. Churches are essential to assist people in despair, anguished, depressed, who will not be attended to in hospitals.”
Churches around the world have been blamed for contributing to increased transmission of the virus, yet religious leaders continue to promote the idea that church gatherings are not dangerous, or that the risk is worth it, as Malafaia implied. Other religious groups have challenged local authorities’ orders by holding clandestine services.
On July 12, an evangelical pastor was arrested in the Chilean province of Arica for holding religious services in an open space with more than 50 people present. In April, Claudia Pizarro, mayor of La Pintana commune south of Santiago de Chile, closed the “Impacto de Dios” church after its pastor, Ricardo Cid, carried out daily services with 30 to 50 people.
“These are irresponsible attitudes not only of the pastor but of all those who participated,” Pizarro said after the incident. “It is a lack of culture, of criteria, an irresponsibility that this continues to happen.”
Refusal to take the virus as a serious threat, particularly when that refusal comes from a person with influence, is itself a form of dangerous misinformation. When Cid was asked to take a Covid-19 test, he refused. “I don’t have it, because I know I don’t,” he said. “My God would never allow it. Jesus never became infected, even from leprosy.”
The role of the Christian media
Misinformation spreading in Latin American Christian communities benefits from an independent Christian media that can amplify narratives the rest of the press may not publish.
CBN Latino, the Spanish-language branch of the massively popular Christian Broadcasting Network with regional branches in Mexico, Guatemala and Costa Rica, has more than 94,000 followers on Facebook and a WhatsApp call line. “Club 700 Hoy,” the Spanish-language version of “The 700 Club,” CBN’s most popular program, has over 193,000 Facebook followers.
Though CBN’s eponymous outlet rarely publishes outright false information, “The 700 Club” has a history of promoting conspiracy theories and misinformation. CBN’s other properties often provide space for people to express skepticism about certain scientifically supported topics such as climate change or evolution.
Christian outlets work in tandem with religious leaders, retweeting and sharing one another’s content to maintain a media ecosystem editorially independent from the secular press that might otherwise weed out misinformation narratives.
In late June, the Spanish-language Christian outlet Bibliatodo published a story about a hailstorm in China where the ice was supposedly shaped like the coronavirus. Though there is no evidence the photo of the oddly shaped hail is fake, the article cites Israel Breaking News, which connected the hailstorm to the end times and quoted a rabbi claiming the storm was divine intervention.
The pandemic has also pushed religious groups to experiment with various forms of mass communication. It is no longer exclusively the neo-Pentecostals and fundamentalists who have taken to broadcasting their services live to audiences outside their communities.
Iglesias notes that an increase in the reliance on social media, including the use of streaming, Zoom and YouTube during the pandemic, has broadened the reach of religious messaging. “Because of this, religious speech and its political impact are no longer limited only to the temple and a direct audience,” he said. And these religious figures have amassed large online followings, some with incredible speed.
In Colombia, Miguel Arrázola, pastor of the Ríos De Vida church, increased his Facebook page interactions by 233 per cent from the last week of February to the last week of March, according to data from Facebook-owned social monitoring tool CrowdTangle, coinciding with the announcement of lockdown measures.
Gutierrez, the Mexican pastor, created his Facebook page on May 7, in the midst of the pandemic. In just three months, he attracted more than 219,000 followers. His most popular video promoting chlorine dioxide has reached some 2.2 million views and 57,000 shares, drawing so much attention that Facebook labeled the post as false information. In total, Gutierrez’s videos have been viewed around 10 million times.
Previous research has shown that the proliferation of religious-based misinformation is not limited to Latin American Christian communities. Religious communities of all types are susceptible to misinformation, and public health outreach in faith communities is an important factor in addressing the pandemic.
For Iglesias, the need to provide science-based information and teach critical thinking is paramount. Still, the challenge comes with confronting misinformation among a distressed population where individuals are more likely to cling to faith, a substance or a religion.
“I think what is important is to prioritize information that is scientifically validated,” Iglesias said. “And to that end, civil society … can promote the responsible use of information.”
The real-world consequences of this form of false information were recently highlighted by two poisonings and two reported deaths in Argentina from CDS, despite proponents such as Pastor Gutierrez insisting it is safe.
Juan Andrés Ríos, 51, died August 11 after ingesting a liter and a half of the prepared solution in two days in an attempt to treat symptoms that were reportedly similar to those of Covid-19.
“My brother didn’t know whether or not he had coronavirus, but he had symptoms,” said Ríos’s sister in an interview with Radio10. “In the desperation to cure himself, he drank chlorine dioxide. No one forced him to do so, but he made that decision after watching a video that said it cured coronavirus.” Ríos bought the CDS via Facebook, according to his sister.
Another death occurred August 15, when a 5-year-old was brought to a hospital without vital signs after his parents had administered a “preventative” dose of CDS. The child died of multiple organ failure, resulting in an investigation by the Neuquén prosecutor’s office.
While religious communities provide a dangerous vector for misinformation, they also present an opportunity to combat it. Addressing misinformation emanating from Christian sources in Latin America could ripple throughout the communities their congregations come from. When that misinformation is potentially so dangerous, working to inoculate populations against it is more necessary than ever.
This article is part of a series tracking the infodemic of coronavirus misinformation.
Vaccine nationalism: How geopolitics is shaping responses to the pandemic
On August 11, the same day Russian President Vladimir Putin announced that a vaccine for the coronavirus was “ready”, Russian Health Minister Mikhail Murashko hailed the development as “a huge contribution … to the victory of humankind over the novel coronavirus”.
It was a grand statement that appealed to global togetherness in the face of a global challenge. And yet, Russia’s approach to the vaccine and other treatments for the coronavirus exemplifies how some governments and populations are responding to the pandemic with the very opposite of an internationalist outlook.
In June, the United States bought up the entire global supply of remdesivir, which can be used to treat Covid-19, ensuring no other country had access to the drug for three months. India initially planned to release a vaccine on its Independence Day on August 15, before retracting the statement. A herbal remedy in Madagascar fuelled a wave of nationalist sentiment opposing the World Health Organization across the continent.
Online discussions have often mirrored these events, particularly on the topic of a future vaccine. In some cases, national narratives have encouraged the rejection of international cooperation and been used to promote domestic agendas. In other examples, geopolitical interests and strategic national ties have fueled enthusiasm for or skepticism about specific vaccines or treatments.
On August 6, the WHO director-general, Tedros Adhanom Ghebreyesus, warned of “vaccine nationalism” and what could be lost if countries failed to work together to suppress the coronavirus:
“Sharing vaccines or sharing other tools actually helps the world to recover together, and the economic recovery can be faster, and the damage from Covid-19 could be lessened.”
WHO Dir. General: 'Vaccine nationalism is not good…for the world to recover faster, it has to recover together.’ pic.twitter.com/beAmpeRxbU
— NowThis (@nowthisnews) August 9, 2020
This view is shared by experts such as Ana Santos Rutschman, an assistant professor at the St. Louis University School of Law.
“The pandemic is not a national problem, it’s not geographically contained,” she told First Draft.
“Nationalism just refers to behaviors and techniques, and often simple things [such] as contractual agreements, that allocate these public goods according to sovereign, geopolitical and geographical lines.”
Rutschman said this may pose a problem for those hoping to stop the spread of the virus, and could lead to vulnerable populations being excluded from potentially life-saving treatments.
“The first one is the public health angle: You don’t fix a pandemic by focusing on vaccine supplies to a restricted number of countries on a priority basis,” said Rutschman. “The chessboard is the entire world and policy can’t really be as skewed as nationalism would have it.”
Throughout the pandemic, our team around the world has been monitoring how geopolitics and nationalism have become sources of information disorder around cures and treatments for Covid-19. Here’s a breakdown of some of these examples.
Russia: The first vaccine?
Moscow’s rollout plan is much faster than those for other vaccine candidates. The Russian military claimed that the vaccine was “ready” in July, and that Moscow planned to vaccinate medical staff alongside the large-scale Phase III trials, according to The Moscow Times. On August 11, President Vladimir Putin announced that the country had become the first to register a coronavirus vaccine.
Russia’s long-standing relationships and interactions with other parts of the world appear to have affected the way news about its vaccine has been received. India and parts of Latin America have historical ties with Russia and in these regions the vaccine news has largely been celebrated uncritically online. Meanwhile, in Western democracies that have largely viewed Moscow through the antagonistic lens of the Cold War, there has been a greater degree of skepticism.
One of the clearest examples comes from the different responses from traditional media. Reports about Moscow’s vaccine development from Indian news organizations such as ABP News and NDTV were generally more celebratory in their tone, while US counterparts such as The New York Times and the BBC were more cautious — highlighting safety concerns around the speed of the roll-out.
The contrast is even clearer on social media, where Indian accounts showed greater excitement about Russian progress on a vaccine and a generally celebratory tone. According to data from social media monitoring tool CrowdTangle, there were more than 1,800 posts by Indian Facebook Pages mentioning “Russia” or “Russian” and “vaccine” in the month since July 10, which were shared more than 155,000 times. Multiple widely shared text-on-image posts call the development “good news.”
Even posts that indicate uncertainty over when the vaccine will be available in India are optimistic and claim that the tests have been successful, with one wishing Moscow “Good luck.” A number of Indian accounts were also tweeting about the vaccine, posting memes celebrating Russia’s announcement.
First Draft’s Laura Garcia, a journalist from Mexico, said that Moscow was actively promoting the vaccine across Central and South America before it was ready, “making sure that people in Latin America knew that they were developing a vaccine.” Russia’s promotion of its own tools to tackle the coronavirus has not been restricted to a vaccine.
On July 10, the Russian embassy in Guatemala presented a drug called Avifavir as a possible treatment for coronavirus and offered it up to Latin American countries. The medication has not yet been fully approved for treating the coronavirus, as it is in the final stages of testing, but some media outlets in Bolivia and Mexico reported that it was a “done deal” and that Latin America will be the first to benefit from Russia’s largesse.
The old Cold War dynamic influences discussions in the region, with the rush to produce and supply a vaccine being portrayed as a “new arms race between the US and Russia,” said Garcia. Given America’s failure to contain the virus, a narrative is coalescing around Russia as the “cool” agent, with Putin sweeping in to provide a solution to the pandemic.
India: ‘Messianic’ Modi
Misinformation about the coronavirus has run rife in India, which has one of the largest and fastest-growing outbreaks, surpassing two million cases in early August. A consistent theme of the falsehoods, rumors, unproven treatments and conspiracy theories has been nationalism, specifically the Hindu nationalism that much of the support for the ruling BJP party is based on.
Niranjan Sahoo, a senior fellow with the Observer Research Foundation’s Governance and Politics Initiative, said Prime Minister Narendra Modi has distracted the country with nationalistic sentiments “in the middle of a pandemic when the major issue is to save lives.”
“People think that he is really in control of the pandemic management, and that everything is going fine […] even when the reality is different.”
Sahoo adds that Modi’s Hindu nationalist rhetoric, combined with his “mastery” of social media, allows him to “peddle any kind of narrative” and has emboldened individuals to spread misinformation linked to religion and nationalism.
This is evident in the discussions surrounding unproven drugs and traditional remedies that have been promoted in India during the pandemic. Traditional Indian Ayurvedic treatments and alternative remedies have always been popular, but are being extensively promoted over social media as cures for Covid-19.
In March and April, Hindu activists, including national and regional lawmakers, promoted the consumption of cow urine as a coronavirus cure, inspired by the sacred status of cows in Hinduism. Since the beginning of March, the claim has been shared widely on Facebook, Twitter and WhatsApp, and has included a video of members of a religious body purportedly consuming cow urine. On Facebook alone, there have been hundreds of thousands of shares on posts mentioning variations on the words “cow urine” and “coronavirus” in public Groups and Pages, written in Hindi or English, most of which were posted in March, according to data from CrowdTangle.
In late June, a popular yogi, Baba Ramdev, released an Ayurvedic medicine called Coronil, falsely claiming that it had a “100 per cent recovery rate within 3-7 days.” The news was widely shared on both Facebook and Twitter, and drew multiple posts of support for the treatment because it was Indian-made, such as a tweet expressing pride at a medicine made using “Hindu techniques.”
Indian nationalism has also seeped into the promotion of modern medical interventions to halt the coronavirus. In early July, the Indian Council of Medical Research (ICMR), the body responsible for the country’s response to the pandemic, claimed that India’s coronavirus vaccine candidate would be released by August 15 — India’s Independence Day — in a move that Sahoo said was a “mix of populism and vaccine nationalism.” The statement was later retracted by the ICMR after several doctors and professionals questioned the deadline, with The Wire reporting that it gave Modi “an opportunity to win political points.”
Most recently, multiple lawmakers, including former union cabinet ministers such as Jaskaur Meena and Arjun Ram Meghwal, and members of parliament such as Pragya Thakur, have claimed that the coronavirus would disappear once a controversial temple to the Hindu deity Lord Ram was constructed. The temple, built on the site of a mosque that was destroyed by a mob in 1992, is a powerful symbol of the Hindu-nationalist message promoted by Modi.
Madagascar: Organic remedies
In April this year, Andry Rajoelina, the president of Madagascar, promoted an unproven treatment that he claimed would kill the coronavirus. The herbal concoction, called Covid-Organics, is produced from a plant called artemisia that has antimalarial properties.
Rajoelina’s tweet announcing the launch of the drink on April 20 was widely viewed and shared, spurring a months-long online conversation about the product. His later tweets, which included images of himself alongside the leaders of Senegal and Guinea-Bissau, included statements such as “Long live Africa and long live its natural wealth!” and “It’s a united Africa which fights #Covid-19!” Dozens of accounts expressed their pride at an African-made remedy in the comments, and asked others to believe in a cure that had been developed on the continent.
A number of countries in Africa expressed their interest in the beverage, while the WHO urged caution. In an interview with France 24, Rajoelina claimed that the WHO doubted the remedy because it was made in Africa. “I think the problem is that [the drink] comes from Africa and they can’t admit … that a country like Madagascar … has come up with this formula to save the world,” he said.
This statement was soon used to create a fabricated quote suggesting that Madagascar was leaving the WHO, and that Rajoelina had encouraged other African countries to follow suit. Hundreds of Facebook posts repeated the false claim, including some verified accounts, generating thousands of interactions.
“The regime believed that it could surf on this new African pride acquired by Malagasies, by presenting this artemisia-based product, Covid-Organics, and hoping to solve the global problem that is the coronavirus pandemic,” Tsiresena Manjakahery, Agence France Presse’s correspondent in Madagascar, told First Draft.
Sociologist Marcel Razafimahatratra has told AFP that Covid-Organics created only “serious illusions” that could prolong the pandemic, and divided the medical community, and even worse, the country.
“It only further strengthens the cult of personality [around the president], that will neither bring a solution to the fight against Covid-19 nor ‘solve’ the current economic problems and above all, will have an impact on political life in the near future,” said Razafimahatratra.
United Kingdom: Brexit divisions
The decision to leave the European Union continues to be one of the key fault lines in the UK, and it permeates many online conversations around the pandemic, the topic of a vaccine against the coronavirus included.
Reports that the UK had opted out of the EU vaccine program in July gained widespread traction in online “Remainer” communities that support staying in the EU, fueled by criticism of Brexit and the government. Actor David Schneider, a prominent pro-EU voice, pointed to other recent decisions made by the British government not to participate in EU schemes, including one for PPE procurement, calling Brexit a “suicide cult.”
Yet the news also led to the expression of nationalist sentiment from pro-Brexit sources, with some claiming the UK would develop its own vaccine first and celebrating the development. Leave.EU, the campaign pushing for the UK to leave the EU during the referendum, claimed on its Facebook Page that “Remainers are trotting out that same tired old propaganda — better to be enslaved than emancipated.”
The early positive results of the Oxford trial released July 20 further fueled nationalist narratives. A tweet from anonymous pro-Conservative Party and pro-Brexit Twitter figure “Mason Mills” claimed: “There’s a reason we are called GREAT Britain” while Facebook Page “Brexit News” used the news to crow:
“So Brexit Britain have [sic] a working vaccine for Covid. Created in Oxford.
“Possibly the most important research so far this century!
“So come on remainers, explain how leaving the EU is destroying British research, On our insignificant little island?
“Has anyone else worked it out yet, remainers are wrong! YET AGAIN”
Likewise, “The Bruges Group,” a pro-Brexit think tank, pointed to the development to say: “We are so much better off out [of the EU].”
Britain’s post-Brexit transition period with the European Union ends at the close of 2020 and, with no trade deal currently in sight, there is a question mark over future arrangements for buying and selling goods and services — including vaccines.
Nationalist responses to a global problem
The symbolism of being the first country to develop an effective medical solution to the pandemic cannot be overstated, and with such a huge boost to reputation on the line, it is perhaps not surprising that each national victory on that path — real or imagined — generates a fevered response.
The many thousands of responses to each new development, filtered through national politics and geopolitical concerns, demonstrate the strength of this interplay among politicians, governments and the public. Nationalistic narratives are meant to elicit an emotional response, to encourage people to rally around the flag regardless of the circumstances. In many cases, it is working.
Indian Prime Minister Modi’s messaging, for example, has helped drive his “messianic” popularity. His approval rating was at 74 per cent at the end of June — the highest of any democratically elected leader — according to a Morning Consult poll, despite the alarming rise of coronavirus cases in the country at the time. When leaders can juice their popularity by shaping medical responses to the pandemic to fit their own agenda, decision making will often be led by politics more than science.
That focus on national responses to what is a global problem, influenced by shifting goals, alliances and animosities, could hamper our ability to recover from the pandemic.
As the WHO’s Tedros warned, mitigating the worst effects of a disease being felt across the world requires international cooperation. “For the world to recover faster, it has to recover together.”
This article is part of a series tracking the infodemic of coronavirus misinformation.
‘Fake news’ laws, privacy & free speech on trial: Government overreach in the infodemic?
The surfeit of misinformation online during the pandemic has prompted some governments to implement extraordinary measures in an attempt to establish control amid the chaos. Criminalizing the dissemination of “false news,” expanding existing penalties for spreading misinformation, and increasing surveillance are among the actions some authorities have taken. Human rights and media observers warn that such remedies are worse than the problem they seek to alleviate, and that freedom of expression, privacy and the right to protest are disintegrating under the pretext of safeguarding public health.
But governments’ concerns aren’t without reason. Over the past six months, conspiracy theories, bogus cures and partisan finger-pointing online have spilled over into real-world harm: more than 700 dead from alcohol poisoning in Iran, Muslims attacked in India, telecommunications infrastructure vandalized in the UK, and untold numbers of people sick or dead from a virus they thought wasn’t serious. This on top of more than 700,000 deaths from Covid-19 and the gutting of economies around the globe.
Hungary, Romania, Algeria, Thailand and the Philippines are among the countries that have instituted new laws or invoked emergency decrees giving authorities the power to block websites, issue fines or imprison people for producing or spreading false information during the pandemic. In Cambodia and Indonesia, social media users have been arrested after allegedly posting false news about the coronavirus. In Egypt, a journalist who had been critical of the government’s response to the pandemic and was detained for “spreading fake news” contracted the virus in custody and died before he could be tried. Even in South Africa, where freedom of expression is a constitutional right, politicians criminalized the publication of any statement made “with the intention to deceive any other person” about Covid-19, government measures to address the disease or — in a sign of the country’s grim experience with HIV/AIDS — a person’s infection status.
In a March 2020 statement, United Nations human rights experts urged governments to “avoid overreach of security measures” in responding to the pandemic, and said that emergency powers should be “proportionate, necessary and non-discriminatory,” and not be used to quash dissent.
It’s an exhortation that many are not heeding.
“Almost immediately, the COVID-19 pandemic became a justification for governments to use more intrusive surveillance technologies on refugees and migrants” — @draganakaurin on the @df_fund blog https://t.co/gNzxdRaeAQ #WorldRefugeeDay #COVID19 #surveillance #refugees #migrants
— Nani Jansen Reventlow (@InterwebzNani) June 20, 2020
Julie Posetti, global director of research at the International Center for Journalists, said legislation is being misused to justify crackdowns on legitimate speech in a number of countries.
“There are circumstances where journalists have been detained and fined, for example, in reference to reportage that has been critical of government and that is deemed to be ‘fake news’ because it doesn’t suit the government,” Posetti told First Draft.
But she said that even well-intentioned laws could “inadvertently catch legitimate communication in the net,” effectively criminalizing journalism and undermining fundamental rights.
Whistleblowers have also come under attack, notably the Chinese ophthalmologist Li Wenliang, who was reprimanded for “spreading rumors” about the outbreak in Wuhan before dying from Covid-19, only to receive a posthumous apology.
“If you can’t have doctors, nurses and other healthcare workers speaking publicly about failures of the system where it’s in the public interest to do so, because they’re afraid of being jailed on so-called ‘fake news’ laws because the government equates criticism with fakery, you have a really serious problem,” Posetti said.
Nani Jansen Reventlow, a human rights lawyer and founding director of the Digital Freedom Fund, said laws governing misinformation affect private individuals as much as journalists. So does increased surveillance, such as contact-tracing software.
“Using apps to track people’s movement has a chilling effect on people being able to share information because everyone knows where they’ve been,” she said.
She cited South Korea’s app as an example of a coronavirus tracker having a particularly deleterious impact on people’s privacy, with individuals able to monitor one another through technology that was found to have serious security flaws.
In addition to affecting whistleblowers and source confidentiality for journalists, increased surveillance could also affect people’s willingness to exercise their right to assemble.
“Will you actually go to a protest if you know you’re going to be monitored? That particularly applies to those of us who are in a more vulnerable position when it comes to law enforcement,” Jansen Reventlow said. “You never know how it’s going to backfire.”
4⃣ Around the 🌍 the law is being weaponised against journalism, & governments are cracking down on whistleblowers under the cover of #COVID19. With @pressfreedom, we want to learn about the extent & costs of these incursions via media workers on the ground https://t.co/LK0nf2f1iB
— Dr. Julie Posetti (@julieposetti) May 14, 2020
Internet shutdowns preceding the pandemic — such as those in the Indian-administered region of Kashmir and the Rohingya refugee camps in Cox’s Bazar, Bangladesh — have also impeded access to essential information about the virus and the response to it.
Jansen Reventlow fears emergency measures that stifle civil society and press freedom, such as those implemented in Hungary, will outlast the pandemic. “There is a danger that those emergency powers will not be turned back anytime soon,” she said.
Then there’s the question of whether laws against misinformation achieve their professed purpose.
Posetti said there was a lack of empirical evidence as to whether such laws impeded the distribution of false and misleading information. “But what we can say from prior research is that these sorts of laws do, in fact, chill a broad range of public communication, and that is where the problem lies.”
It’s a concern Jansen Reventlow shares. “The only thing it’s going to do is make it easier for public authorities to clamp down on things they don’t like.”
Instead, she said, governments should consistently and proactively provide accurate, timely information about the situation and the basis for policy decisions, so that people aren’t left to speculate in a vacuum.
“It’s about finding the right balance. But a thorough debate about where that balance should be found is pretty absent at the moment,” she said.
This article is part of a series tracking the infodemic of coronavirus misinformation.
Misinformation in your backyard: Insights from 5 US states ahead of the 2020 election
This is an edited extract from First Draft’s new report on local misinformation in the United States ahead of the 2020 election. Download the full report (PDF): “Misinformation in your backyard.”
“All politics is local,” so the saying goes. The same could be said for misinformation. The 2016 elections brought stories of Macedonian teens pulling quick profits and Russian agents seeding polarization across the United States. But 2020 is teaching us that whatever the origins of a rumor, misleading meme or photo, it is the local twist and organic amplification that give it power — often leading to impact offline.
In five states that will be key in the upcoming US election — Colorado, Florida, Michigan, Ohio and Wisconsin — First Draft has collected dozens of examples of information disorder playing out via private Facebook Groups, text messages and other platforms.
The case studies, authored by Shana Black, Serena Maria Daniels, Sandra Fish, Howard Hardee and Damon Scott, are:
- Colorado: Voter fraud claims about mail-in ballots
- Florida: Viral photos misinterpreted
- Michigan: Data mining and transparency in advertising
- Ohio: Doxing and harassment of health officials
- Wisconsin: Conspiracy theories about government surveillance
In an echo of national trends, local influencers and elected officials — state representatives, sheriffs and political candidates — play a key role in amplifying and spreading misleading or harmful information about the pandemic and other issues. Confusion among the public, whether about the process of mail-in voting or the efficacy of mask-wearing, proves fertile ground for sowing doubt and encouraging distrust.
While local news organizations enjoy more public trust than national sources, and are well-positioned to provide information to counter information disorder, they are under increasing financial stress. Even before the economic burden of the pandemic, local newsrooms had already been contracting and shutting down, driven in part by the migration of advertising dollars to social media platforms, resulting in local news deserts. And even in their previous financially stable state, newsroom staff lacked diversity. According to recent research by Gallup and the Knight Foundation, more than two-thirds of Americans think it is important for the media to represent the diversity of the US population, but nearly 40 per cent think the media is doing a poor job with diversity efforts.
All these trends have worsened during the pandemic. The Poynter Institute is keeping a continually updated list of newsrooms that have cut services and staff in recent months. One estimate puts the number of news jobs lost at 36,000, even as audiences have grown, with the public seeking answers to local questions.
First Draft has dedicated its 2020 US program to training local reporters and increasing resources for combating local information disorder. In a tour of 14 states, First Draft extended its training on responsibly tracking and countering local misinformation to more than 1,000 local reporters.
In March, First Draft launched the Local News Fellows project, supported by Democracy Fund, training and investing in five part-time reporters embedded in their communities to serve as central resources in their state. The driving concept: In today’s challenging environment, many local newsrooms lack the resources to devote staff members to tracking local information disorder. But through collaboration, they can share resources and encourage on-the-ground efforts, bringing newsrooms together. The material in this report was all sparked by their daily monitoring of local online conversations.
First Draft has prepared case studies on five examples from Colorado, Florida, Michigan, Ohio and Wisconsin. They are small snapshots of information disorder in these particular states, but they also paint a broad picture of how the same themes and tactics cross state borders and flourish nationally.
This part of First Draft’s US 2020 project was made possible by the generous donation from and vision of the Democracy Fund.
Tracking the infodemic: Charting six months of coronavirus misinformation
“The 2019-nCoV outbreak and response has been accompanied by a massive ‘infodemic’ — an over-abundance of information — some accurate and some not — that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”
– WHO Situation Report, February 2, 2020.
The World Health Organization held the world’s first infodemiology conference at the end of June. Rather than the usual mixture of seminars, lanyards and networking that accompanies most industry events, experts dialed in from their homes and offices to discuss the details of the problem over video chat.
In his keynote speech, Saad Omer, director of the Yale Institute for Global Health, urged viewers to think of the policy and practice of combating the “human catastrophe” of the pandemic “from a consequentialist perspective.”
“What we do is mainly judged by the outcomes that it generates, the outcomes of the activity itself,” he explained.
“I would submit that in a pandemic, where we are dealing… with the fierce urgency of now. That’s the imperative we should follow.”
The mainstream conversation about misinformation developed in the wake of the 2016 US election and has largely focused on politics, where its consequences are often relative, subjective or unknowable. Did Russian “fake news” swing it for Trump? It’s a question many studies have tried and failed to answer.
The consequences of misinformation about the coronavirus have, for some, been death. At least 44 people in Iran died in the early weeks of the pandemic from drinking bootleg alcohol in the mistaken belief it would protect them. Gary Lenius, a man in his sixties from Phoenix, died after ingesting fish tank cleaner containing chloroquine phosphate, which he mistook for hydroxychloroquine, the drug promoted by President Donald Trump that has since been dismissed by scientists as ineffective against Covid-19.
In Texas, a man in his thirties was reported in early July to have attended a “Covid party” held by someone who had been diagnosed with Covid-19 to test whether others would become infected and to “see if the virus is real,” according to Jane Appleby, chief medical officer of Methodist Healthcare in San Antonio.
“Just before the patient died they looked at their nurse and they said, ‘I think I made a mistake. I thought this was a hoax and it’s not,’” said Appleby.
We don’t know how this man had been led to believe the pandemic was a hoax, but we can assume that he found those arguments more compelling than the news reports, death tolls and stories of tragedy, which have dominated 2020.
We recently published a series of articles delving into the psychology behind this behavior, and it is at the core of the infodemic problem. The coronavirus poses a serious threat to life but, seven months since doctors first warned the WHO of a dangerous new type of coronavirus, we still don’t know everything about Sars-CoV-2, to give the virus its scientific name. That combination of uncertainty and fear has meant many people have tried to find their own answers and, in some cases, it’s killing them.
“Humans have an emotional relationship to information,” said Claire Wardle, First Draft’s US director, at the infodemiology conference.
“It doesn’t matter how educated you are, we are all susceptible to information that reinforces our worldview, and we are all drawn to that kind of information because it makes us feel something.”
In March, First Draft identified the distinct types of misinformation accompanying the pandemic, summarized most simply as the origin, spread, symptoms, treatments and responses. In the subsequent months these mistruths have snowballed, as existing conspiracy theories about vaccinations or new technology or global elites coalesce around the coronavirus and the attempts to mitigate its effects.
It is six months since that first announcement by the WHO declaring a crisis in information that could derail the fight against a highly infectious virus that kills indiscriminately. At First Draft, those months have been spent tracking the infodemic, investigating the sources and evolution of the misinformation that has circulated so perniciously, working with our network of CrossCheck partners to support newsrooms, and publishing courses, resources, webinars and guides to help as many people as possible protect themselves from the harms of coronavirus misinformation.
Throughout August, First Draft will publish a series of articles highlighting some of the key ways in which the infodemic has produced real-world consequences, how we can understand it, and what we can do about it.
“The difference between the facts and the truth” is the first in this series, published on our Medium publication, Footnotes. Tommy Shane, First Draft’s head of impact and policy, addresses the different modes of understanding that underpin how we approach knowledge and what that can tell us about conspiracy theories, misinformation and the worrying growth of those who resist measures to halt the spread of the coronavirus.
Much like the pandemic, we need to understand the infodemic if we are going to address it. We are still a long way from a solution for either. This series will hopefully be another step along the path to fixing the second of those problems, one that could have a significant impact on the first.
This article is part of a series tracking the infodemic of coronavirus misinformation.
The psychology of misinformation: How to prevent it
The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. It’s what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.
In the third part of this series on the psychology of misinformation, we cover the psychological concepts that are relevant to the prevention of misinformation. As you’ll have seen from the psychology of correcting misinformation, prevention is preferable to cure.
Here we explain the psychological concepts that can help us by building our mental (and therefore social) resilience. What you’ll find is that many of the resources we need to slow down misinformation are right there in our brains, waiting to be used.
This is the third in our series on the psychology of misinformation. Read the first, “The psychology of misinformation: Why we’re vulnerable”, and the second, “The psychology of misinformation: Why it’s so hard to correct”.
Skepticism is an awareness of the potential for manipulation and a desire to accurately understand the truth. It is different from cynicism, which is a generalized distrust.
Skepticism involves more cognitive resources going into the evaluation of information, and as a result can lower susceptibility to misinformation. It can be contrasted with ‘bullshit receptivity’ and contributes to Gordon Pennycook and David Rand’s thesis that susceptibility to misinformation derives not from motivated reasoning (persuading yourself something is true because you want it to be), but from a lack of analytic thinking.
What to read next: “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication” by Briony Swire and Ullrich K.H. Ecker, published in Misinformation and Mass Audiences in 2018.
Emotional skepticism is an awareness of potential manipulation through your emotions. It might involve taking a moment to calm down before sharing a shocking but false post.
Despite emotion being a strong driver of shares on social media, and therefore a powerful driver in disinformation campaigns, it is often overlooked in media literacy campaigns. More research is needed to understand what techniques can cultivate emotional skepticism, and how this can slow down the sharing of misinformation.
What to read next: “Reliance on emotion promotes belief in fake news” by Cameron Martel, Gordon Pennycook, and David G. Rand, (preprint) in 2019.
Alertness is a heightened awareness of the effects of misinformation.
In 2010, misinformation researcher Ullrich Ecker and colleagues found that warning people about the effects of misinformation, such as the continued influence effect, can make them more alert. Being alert to these effects reduces their influence.
What to read next: “Explicit warnings reduce but do not eliminate the continued influence of misinformation” by Ullrich K.H. Ecker, Stephan Lewandowsky, and David T.W. Tang, published in Memory and Cognition 38, 1087–1100 in 2010.
Analytic thinking, also known as deliberation, is a cognitive process that involves thoughtful evaluation rather than quick, intuitive judgements.
Taking a few more seconds to think can help you spot misinformation. Misinformation researchers found that “analytic thinking helps to accurately discern the truth in the context of news headlines.”
What to read next: “Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines” by Bence Bago, David G. Rand and Gordon Pennycook, (preprint) in 2019.
Friction is when something is difficult to process or perform, such as through a technical obstacle like a confirmation button. It is the opposite of fluency.
Introducing friction can reduce belief in misinformation. Lisa Fazio, a researcher based at Vanderbilt University, has found that if you create friction in the act of sharing, such as by asking people to explain why they think a headline is true before they share it, they’re less likely to spread misinformation.
What to read next: “Pausing to consider why a headline is true or false can help reduce the sharing of false news” by Lisa Fazio, Harvard Kennedy School Misinformation Review, in 2020.
Inoculation, also known as ‘prebunking’, refers to techniques that build pre-emptive resistance to misinformation. Like a vaccine, it works by exposing people to examples of misinformation, or misinformation techniques, to help them recognize and reject them in the future.
Inoculation has been found to be effective in reducing belief in conspiracy theories and increasing belief in scientific consensus on climate change.
What to read next: “Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence” by John Cook, Stephan Lewandowsky, and Ullrich K.H. Ecker, published in PLOS ONE 12 (5) in 2017.
Nudges are small prompts that subtly suggest behaviors. The concept emerged from behavioral science and in particular the 2008 book “Nudge: Improving Decisions About Health, Wealth, and Happiness.”
When it comes to building resilience to misinformation, nudges generally try to prompt analytic thinking. A recent study found that nudging people to think about accuracy before sharing misinformation significantly improves people’s discernment of whether it is true.
What to read next: “Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention” by Gordon Pennycook, Jonathan McPhetres, Yunhao Zhang, Jackson G. Lu, and David G. Rand, (preprint) in 2020.
First Draft welcomes new US board
First Draft is excited to announce the 2020-21 US Board of Directors. We are incredibly grateful that Sam Gregory, Jesse Ma, Mike Miller, Ifeoma Ozoma, Chris Perry, Vivian Schiller and Carla Zanoni have all agreed to serve on our board.
Each member comes to the board with a wide range of expertise that we know will be invaluable as we make plans for our future.
Sam Gregory is the program director at WITNESS and an award-winning technologist, media-maker and advocate. For 20 years, Sam has enabled people to use the power of the moving image and participatory technologies to create human rights change. Currently co-chair of the Partnership on AI’s Expert Group on AI and the Media, he focuses on emerging threats linked to AI and mis/disinformation including deepfakes and synthetic media.
Jesse Ma is an adjunct professor of law at Fordham University School of Law. With more than a decade of digital media and startup experience, Jesse specializes in strategic planning and helping clients navigate a constantly evolving legal and business environment. Previously, he was head of partnerships at the South China Morning Post, general counsel at Upworthy, and counsel at Gawker Media.
Michael Miller is a program co-director for the Social Science Research Council’s Just Tech program and senior program officer for its Media & Democracy program. He received a PhD in political science from the City University of New York, where his research explored how mechanisms of censorship, surveillance and propaganda vary across media types in China. He is a 2018-2020 Mellon/ACLS Public Fellow. Prior to joining the SSRC he was an adjunct professor of political science at Hunter College and Hostos Community College of the City University of New York.
Ifeoma Ozoma was most recently Pinterest’s public policy and social impact manager. She led the company’s authoritative vaccine-related search experience, which was lauded by the World Health Organization and The Washington Post’s editorial board. She’s also a member of the Brookings Institution’s Transatlantic Working Group on Disinformation, and a member of The Washington Post’s Technology 202 Network. Before joining Pinterest, she was on the Public Policy teams at Google and Facebook. She received a BA in political science from Yale University.
Chris Perry is the chief innovation officer at Weber Shandwick, a leading global communications and marketing solutions firm. With over 20 years of digital and media experience, Chris specializes in helping clients decode the rapidly changing media environment. His writing and work have been featured in Forbes, Fortune, The New York Times and The Washington Post. Chris authors the Media/Genius newsletter, which focuses on media at the intersection of content and intelligence.
Vivian Schiller is the executive director for Aspen Digital, a program of the Aspen Institute focusing on technology, media, innovation and cybersecurity. Previously, she was global chair of news at Twitter, senior vice president and chief digital officer at NBC News, president and chief executive of NPR, and senior vice president and general manager of NYTimes.com, and has held other executive roles in media.
Carla Zanoni is an award-winning journalist, writer and media strategist. Born in Argentina and a longtime New Yorker, she is TED’s first head of audience development, where she focuses on content and programming strategy, analytics, social media and community development. Carla was the first global audience and analytics editor to be named on the masthead of The Wall Street Journal and is a graduate of Columbia University’s School of General Studies and School of Journalism.
This Board of Directors supports First Draft News Inc., an independent 501(c)(3) organization, responsible for work completed in the United States. First Draft’s global work is managed by a separate limited liability company, based in London, and its work is guided by an Advisory Board. We are honored to have this group steer our global strategy:
- Daniel Bramatti, editor of Estadão Data and Estadão Verifica
- Liz Carolan, executive director of Digital Action
- Phil Chetwynd, global news director of Agence France-Presse
- Sam Dubberley, manager of the Digital Verification Corps at Amnesty International
- Sameer Padania, director of Macroscope
- Adam Rendle, a partner at Taylor Wessing in the IP/IT group
- Dan Shefet, founder of Cabinet Shefet
We are grateful to have the global Advisory Board and the US Board of Directors as part of our team during this critical time for First Draft.
The psychology of misinformation: Why it’s so hard to correct
The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. It’s what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.
In the second part of this series on the psychology of misinformation, we cover the psychological concepts that are relevant to corrections, such as fact checks and debunks. One key theme that will resurface is the central problem of correction: Once we’re exposed to misinformation, it’s very hard to get it out of our heads.
If you want a primer on the psychology of correction, we particularly recommend Briony Swire-Thompson’s “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication”.
This is the second in our series on the psychology of misinformation. Read the first, “The psychology of misinformation: Why we’re vulnerable”, and the third, “The psychology of misinformation: How to prevent it“.
The continued influence effect
The continued influence effect is when misinformation continues to influence people even after it has been corrected. In short, it is the failure of corrections.
Sometimes called “belief echoes”, this is the most important psychological concept to understand when it comes to corrections. There is consensus that once you’ve been exposed to misinformation it is very, very difficult to dislodge from your brain.
Corrections often fail because the misinformation, even when explained in the context of a debunk, can later be recalled as a fact. If we think back to dual process theory, quicker, automatic thinking can mean we recall information, but forget that it was corrected. For example, if you read a debunk about a politician falsely shown to be drunk in a manipulated video, you may later simply recall the idea of that politician being drunk, forgetting the negation.
Even effective corrections, such as ones with lots of detail that affirm the facts rather than repeat the misinformation, can wear off after just one week. In the words of Ullrich Ecker, a cognitive scientist at the University of Western Australia, “the continued influence effect seems to defy most attempts to eliminate it.”
Most crucially, it means that when it comes to misinformation, prevention is preferable to cure.
What to read next: “Misinformation and Its Correction: Continued Influence and Successful Debiasing” by Stephan Lewandowsky, Ullrich K.H. Ecker, Colleen M. Seifert, Norbert Schwarz and John Cook, published in Psychological Science in the Public Interest, 13 (3), 106–131 in 2012.
Mental models
A mental model is a framework for understanding something that has happened. If your house is on fire, and you see a broken Molotov cocktail, you might reasonably build a mental model that the fire was caused by an attack. If a firefighter corrects you, saying that it wasn’t caused by the Molotov cocktail in front of you, you’re left with a gap in your mental model — specifically, what caused the fire.
This means that corrections need to also fill the gap that they create, such as with an alternative causal explanation. This is tricky, though: Replacing a mental model is not always possible with the available information.
What to read next: “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication” by Briony Swire and Ullrich K.H. Ecker, published in Misinformation and Mass Audiences in 2018.
The implied truth effect
The implied truth effect is when something seems true because it hasn’t been corrected.
This is a major problem for platforms. When corrections, such as fact checks, are applied to some posts but not all of them, it implies that the unlabeled posts are true.
Gordon Pennycook and colleagues recently presented evidence that the implied truth effect exists when misinformation is labeled on some social media posts but not others.
What to read next: “The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warning” by Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand, published in Management Science in 2020.
Tainted truth effect
The tainted truth effect is where corrections make people start to doubt other, true information. The risk is that corrections and warnings create generalized distrust of what people read from sources such as the media.
As with the implied truth effect, the tainted truth effect (also known as the “spillover effect”) is a potential problem with labeling misinformation on social media: It can make people start to doubt everything they see online.
What to read next: “Warning against warnings: Alerted subjects may perform worse. Misinformation, involvement and warning as determinants of witness testimony” by Malwina Szpitalak and Romuald Polczyk, published in the Polish Psychological Bulletin, 41(3), 105-112 in 2010.
Repetition
Repetition causes misinformation to embed in people’s minds and makes it much harder to correct.
There are a couple of reasons for this. First, if you hear a statement more than once, you’re more likely to believe it’s true. Repetition can also make a belief seem more widespread than it is, which can increase its plausibility, leading people to the false conclusion that if that many people think it’s true, there’s a good chance it is.
What to read next: “Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus” by Kimberlee Weaver, Stephen M. Garcia, Norbert Schwarz, and Dale T. Miller, published in Journal of Personality and Social Psychology, 92, 821–833 in 2007.
Illusory truth effect
The illusory truth effect occurs when familiarity makes something seem true when it isn’t.
This can occur with false news headlines even with a single exposure. Exposure can even increase the plausibility of headlines that contradict people’s world views.
What to read next: “Prior exposure increases perceived accuracy of fake news” by Gordon Pennycook, Tyrone D. Cannon, and David G. Rand, published in Journal of Experimental Psychology: General 147(12):1865‐1880 in 2018
The backfire effect
The backfire effect is the theory that a correction can strengthen belief in misinformation. It has been broken down into the overkill backfire effect, worldview backfire effect, and familiarity backfire effect, each of which we explain here.
The backfire effect is by far the most contested psychological concept in misinformation research: while famous, it has not been found to occur as a norm, and some doubt it exists at all. Reviewing the relevant literature, Full Fact found it to be the exception rather than the norm. More recently, researchers have concluded that “fact-checkers can rest assured that it is extremely unlikely that their fact-checks will lead to increased belief at the group level.”
However, it still permeates the public consciousness. Somewhat ironically, it has been a difficult myth to correct.
What to read next: “The backfire effect: Does it exist? And does it matter for factcheckers?” by Amy Sippitt, published by Full Fact in 2019.
Overkill backfire effect
The overkill backfire effect is when misinformation is more believable than an overly complicated correction, leading the correction to backfire and increase belief in the misinformation. A correction can be too complicated because it’s difficult to understand, too elaborate, or because there are simply too many counterarguments.
A recent study found no evidence of a backfire from too many counterarguments.
What to read next: “Refutations of Equivocal Claims: No Evidence for an Ironic Effect of Counterargument Number” by Ullrich K.H. Ecker, Stephan Lewandowsky, Kalpana Jayawardana, and Alexander Mladenovic, published in Journal of Applied Research in Memory and Cognition in 2018.
Worldview backfire effect
The worldview backfire effect is when a person rejects a correction because it is incompatible with their worldview, and in doing so strengthens their original belief.
Although, like all backfire effects, there is a lack of robust evidence for its existence, the advice given to mitigate it is still relevant and worth noting. For example, one study advises to affirm people’s worldviews when making a correction. Self-affirmation can help, too: One study found that people are more likely to accept views that challenge their worldviews after being asked to write about something about themselves they were proud of.
What to read next: “Searching for the backfire effect: Measurement and design considerations” by Briony Swire-Thompson, Joseph DeGutis, and David Lazer, (preprint) in 2020.
Familiarity backfire effect
The familiarity backfire effect is the idea that corrections, by repeating falsehoods, make them more familiar and therefore more believable.
Briony Swire-Thompson, associate research scientist at Northeastern University, and colleagues found no evidence of a familiarity backfire effect: “corrections repeating the myth were simply less effective (compared to fact affirmations) rather than backfiring.”
What to read next: “The role of familiarity in correcting inaccurate information” by Briony Swire, Ullrich K.H. Ecker and Stephan Lewandowsky, published in Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(12), 1948-1961 in 2017.
Introducing an SMS course to prepare for US election misinformation
One of the things we’ve learned from running so many face-to-face and online training sessions in the last five years is that people find it hard to carve out the time in their busy working lives. As November draws near, we’re trying a new format to help people be prepared for election misinformation in a way that fits into their daily schedules.
“Protection from deception,” our free two-week text message course, delivers daily nuggets of training via SMS.
It’s designed to be quick and easy enough to appeal to everyone. Every day, at a time of your choosing, you will receive a bite-sized lesson by text message, with extra video and article links if you want to dive deeper.
The course will give you the knowledge and understanding you need to protect yourself and your community from online misinformation. You’ll learn why people create and share false and misleading content, commonly used tactics for spreading it, what you can do to outsmart it, and how to talk to family and friends about it.
We’re hoping to translate the course into multiple languages, but want to give it a thorough test in English first.
The course is focused on preparation, as there is a growing body of research that shows the importance of inoculating audiences against the tactics and techniques used by those creating and disseminating disinformation. Coronavirus has shown us how damaging misinformation can be, and in the US, with the election around the corner, it’s time to prepare everyone for what they might face online. Understanding the psychology of misinformation is important in fighting it, so please share this far and wide.
For a sneak preview of the course, check out this video on how emotional skepticism can help protect vulnerable communities.
How to use network analysis to explore social media and disinformation
Network analysis has become an important tool for disinformation experts. An increasing number of journalists and researchers are using the practice to analyze the social web and gain insight into the hidden networks and communities that drive information — and disinformation — online.
Take, for instance, open-source intelligence expert Benjamin Strick’s Bolivian Info Op Case Study for Bellingcat, or his uncovering of a pro-Chinese government information operation on Twitter and Facebook. By analyzing these networks, Strick was able to expose coordination and manipulation, and to shed light on some of the most common tactics behind a disinformation campaign.
Online connections can influence how political opinions take shape, so analyzing these networks has become fundamental.
But there are challenges. Not all datasets identify relationships and it’s up to the journalist or researcher to define these connections. Sometimes you might end up with a visualization that, despite its beauty or complex nature, reveals nothing of interest about your data.
Although they might seem to be impressive visual content to share with your audience, network visualizations are first and foremost great tools for exploration. They are by no means conclusive charts.
And even though it’s an increasingly common tool in the field, network analysis is a complex discipline that even the world’s top academics are still working to fully understand, and its application requires caution.
A little graph theory
But first of all, how do you define a network? In graph theory, a network is a complex system of actors (nodes) interconnected by some sort of relationship (edges).
A relationship could mean different things — especially on social media, which is fundamentally made of connections. Network analysis means focusing on understanding the connections rather than the actors.
One of the earliest popular examples of network analysis online was the ‘Bacon number.’ The idea emerged in 1994, when the actor Kevin Bacon said in an interview that he had worked with “everyone in Hollywood.” Today, Google can calculate the Bacon number for any actor and show the chain of connections back to the first node: actors who have worked directly with Kevin Bacon have a Bacon number of 1, and so on.
On Facebook, networks take shape based on friendships, Pages, and groups in common.
On Twitter, however, you can also investigate things like hashtags, retweets, mentions, or quotes as well as whether users follow each other.
For example, two accounts on Twitter are the nodes of a network; if one retweets the other, that retweet is an edge. If they retweet each other multiple times, the weight of their relationship will be higher.
The weight of a relationship is only one of the attributes of nodes and edges.
Another important attribute is the degree (or connectivity) of a node, which describes its number of connections.
A third is the direction, which helps us understand the nature of a relationship. When an account retweets another, it creates what is called a directed connection. In directed networks, a node’s out-degree counts the connections it initiates (when an account retweets or mentions another), while its in-degree counts the connections it receives (when the account is retweeted or mentioned by others). Other connections have no direction: two friends on Facebook, for example.
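To make these attributes concrete, here is a minimal sketch in Python, using only the standard library, that computes edge weights, in-degrees, and out-degrees for a tiny retweet network. The handles and retweets are invented for illustration:

```python
# A minimal sketch of a directed retweet network.
# The handles and retweets below are hypothetical.
from collections import Counter

# Each (source, target) pair is one retweet: source retweeted target.
retweets = [
    ("@alice", "@bob"),
    ("@alice", "@bob"),   # repeated edge -> higher weight
    ("@carol", "@bob"),
    ("@bob",   "@alice"),
]

# Edge weight: how many times each directed connection occurred.
weights = Counter(retweets)

# Out-degree: edges a node initiates (it retweeted someone).
# In-degree: edges a node receives (it was retweeted).
out_degree = Counter(src for src, _ in retweets)
in_degree = Counter(tgt for _, tgt in retweets)

print(weights[("@alice", "@bob")])  # 2: @alice retweeted @bob twice
print(in_degree["@bob"])            # 3: @bob was retweeted three times
print(out_degree["@alice"])         # 2: @alice sent two retweets
```

In a real investigation these pairs would come from scraped or streamed Twitter data rather than a hand-written list.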
The density shows how well-connected the graph is: the number of connections present divided by the total number of possible connections. The closeness describes the position of a node within the network (or how close it is to all other nodes in the network); a high value might indicate an actor holds authority over several clusters in the network.
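As a quick illustration of the density formula, here is a sketch assuming a directed graph stored as a set of (source, target) pairs; the edges are hypothetical:

```python
# Density of a small directed graph (toy, hypothetical edges).
edges = {("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")}
nodes = {n for edge in edges for n in edge}

# In a directed graph each ordered pair of distinct nodes is a possible
# edge, so the maximum number of edges is n * (n - 1).
n = len(nodes)
density = len(edges) / (n * (n - 1))
print(density)  # 4 of 6 possible edges exist, so density is about 0.67
```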
The position of the nodes is calculated by complex graph layout algorithms that allow you to immediately see the structure within the network.
The benefit of network visualizations is to look at a system as whole, not only at a part of it. The challenge is to identify what kinds of relationships you want to investigate, and to interpret what they mean.
Tools for analyzing and visualizing networks
Several tools are available for sketching and analyzing the networks that you produce in the course of your investigations, and some allow you to gather the necessary data directly.
Here are some of the best free tools:
Neo4j is a powerful graph database management technology, well-known for being used by the International Consortium of Investigative Journalists (ICIJ) for investigations such as the Panama Papers.
Gephi is an open-source program that allows you to visualize and explore graphs. It doesn’t require any programming knowledge. It is stronger for visualizations than for analysis, but it can handle relatively large datasets (the actual size will always depend on your infrastructure).
The trick about visualization in general is to find the clearest way to communicate the data.
Prepare your data
First you will need data to analyze, of course, and you can obtain network data in several ways.
The easiest way is to install the Twitter Streaming Importer plug-in directly in Gephi. It connects to the Twitter API and streams live data in a Gephi-friendly format based on words, users, or location, allowing you to navigate and visualize the network in real time.
But if you want to use historical Twitter data, you need a scraper — read our tutorial on how to collect Twitter data using Python’s tweepy — and then convert the scraped data into a network-friendly file format using the tool Table 2 Net.
The tricky part can be identifying which column in the data is for nodes and which is for edges. Not all datasets include connections, and you need to make sure the columns selected contain mentions, hashtags, and accounts, in a clean, tidy format.
Once you have selected the columns to create your network, you can download the network file as .GEXF, the Gephi file format.
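If you prefer to script this step, the conversion amounts to extracting an edge list from the raw tweets. Here is a rough sketch with invented tweet data and a simplified mention pattern (real tweet objects include dedicated mention fields that are more reliable than a regex):

```python
# Sketch: turn scraped tweets into a source,target edge list for network
# tools. The tweets and column layout below are hypothetical.
import csv
import io
import re

tweets = [
    {"user": "alice", "text": "Agree with @bob and @carol on this"},
    {"user": "bob",   "text": "Replying to @carol"},
]

MENTION = re.compile(r"@(\w+)")

rows = []
for tweet in tweets:
    # One directed edge per mention: author -> mentioned account.
    for handle in MENTION.findall(tweet["text"]):
        rows.append((tweet["user"], handle))

# Write a simple two-column CSV (source, target) that Gephi and similar
# tools can import as an edge list.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["source", "target"])
writer.writerows(rows)
print(out.getvalue())
```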
Alternatively, you can use the programming language Groovy to generate a GEXF network graph from a JSON file in your terminal. You can choose from among mentions, retweets, and replies. Type the following command in your terminal: groovy export.groovy [options] <input files> <output file>
Explore a bit of Gephi
In this example, let’s use a sample that collects mentions of Bill Gates on Twitter, as he has been a constant subject of misinformation and conspiracy theories throughout the pandemic.
Once you open the GEXF file in Gephi, this is how the network will look:
You have to play around with filters, parameters, and layouts to visually explore your network and turn this amorphous mass of dots into a meaningful shape.
The program will also give some initial information about the network. In the @BillGates file, for example, there are 19,036 nodes (individual accounts) and 207,780 edges (connections between accounts) — a fairly high number of edges.
Let’s have a quick overview of the menu options in the software at the top:
- Overview is where you work on the network’s functionalities;
- Data Laboratory is a display of your dataset in a table format;
- Preview is where you can customize your final network visualization.
Here are a few quick actions to start investigating the network:
What is the average degree of connections?
The average degree will show the average number of connections between the nodes. Run ‘Average degree’ under Statistics on the right-hand side to receive a ‘Degree Report’ of the network.
The average number of connections between the accounts that mentioned @BillGates is 1.092. In other words, an account has, on average, mentioned one other account in this sample.
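The arithmetic behind that statistic is simply the number of edges divided by the number of nodes. A sketch with illustrative figures (not the real sample):

```python
# Average degree: total edges divided by total nodes.
# The counts below are illustrative, not the real @BillGates sample.
nodes = 1000
edges = 1092
average_degree = edges / nodes
print(average_degree)  # 1.092: roughly one mention per account on average
```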
Which are the main nodes in the network?
You now want to discover which nodes have the highest number of connections. These could be either directed or undirected connections. For example, it could be interesting to see which Twitter accounts a divisive handle retweets the most, or the top accounts that mention a divisive account.
Change the size of the nodes in Gephi by clicking on ‘Ranking’ on the left and by choosing ‘Degree’ as an attribute. Set the minimum and maximum sizes, and the nodes’ dimensions will change according to their number of connections.
In the Bill Gates example, we are interested in knowing which accounts tag him the most. To do that, choose the ‘out-degree’ attribute, which will enlarge the nodes of the accounts that mentioned Gates the most.
To improve the network’s visibility, choose the layout Force Atlas 2, which uses an algorithm to expand the network and better visualize the communities that are taking shape, as shown below.
Are the actors connected to different communities?
The ‘modularity’ algorithm is often used to detect the number of clusters, or communities, within a network, by grouping the nodes that are more densely connected.
Hit the Run button next to Modularity under the Statistics panel. Doing so will yield a modularity report, which often has quite interesting details for analysis.
At this point, use the partition coloring panel to color the nodes based on modularity class and apply a palette of colors to them.
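For intuition about what the modularity score rewards, here is a rough sketch that computes it by hand for a toy undirected graph: two triangles joined by a single bridge edge, partitioned into the two obvious communities. All node names and labels are invented:

```python
# Modularity Q of a toy undirected graph: two triangles (0,1,2) and
# (3,4,5) joined by one bridge edge (2,3). Names are hypothetical.
from collections import Counter

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

m = len(edges)
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Count edges inside each community, and each community's total degree.
intra = Counter()
total_degree = Counter()
for u, v in edges:
    if community[u] == community[v]:
        intra[community[u]] += 1
for node, c in community.items():
    total_degree[c] += degree[node]

# Q = sum over communities of (intra_edges/m - (community_degree/2m)^2):
# high Q means edges concentrate inside communities more than chance.
Q = sum(intra[c] / m - (total_degree[c] / (2 * m)) ** 2 for c in total_degree)
print(round(Q, 4))  # 0.3571: the triangles form well-separated communities
```

Gephi’s modularity statistic searches for the partition that maximizes this score; the sketch only evaluates it for a fixed partition.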
The next step is to learn something from the graph you have created. By right-clicking on a node, you can see its position in the Data Laboratory, along with the newly created columns displaying degree and modularity values. While the visualization helps you see the global picture, here you can explore the data manually.
You can also apply text to the graph and visualize the names of the nodes.
In the preview tab, you can choose different options to visualize the network. You can add labels, change the background color, or play with sizes of nodes and edges.
If the graph is too crowded, use filters to show fewer nodes. For example, you can filter by number of in-degree or out-degree connections, depending on what you are interested in highlighting.
Remember that data visualizations are a great way to make complicated topics more accessible and engaging, but they can be misleading if they are not well-designed and easy to understand.
Make sure your network graph visualizes exactly what you are trying to express. Add clear annotation layers to aid in reading it. And don’t use a network graph at all costs: sometimes a simpler chart that conveys its meaning more effectively is worth thousands of nodes.
The US protests have shown just how easily our information ecosystem can be manipulated
Those who study misinformation are often asked to attribute misleading content to particular actors — whether foreign governments, domestic hoaxers, or hyper-partisan media outlets. Actors deliberately promoting disinformation should not be ignored; however, the recent US protests have demonstrated that focusing on specific examples of disinformation can fail to capture the complexity of what is occurring.
Following the killing of George Floyd, who died after three Minneapolis police officers kneeled on his handcuffed body, hundreds of thousands of demonstrators took to the streets around the world. At the center of their demands was justice for Black people who have died in police custody, and alternative criminal justice systems.
In the US, several accompanying narratives were heavily discussed online. Piles of bricks near protest sites had social media users across the ideological spectrum speculating as to their origin; false rumors spread that protesters in DC were targeted by an internet blackout. Elsewhere, images of police officers who joined marches or kneeled with protesters were distributed to and published by several news outlets uncritically and without examination of law enforcement’s motivation. These examples — along with the deeper investigations into the involvement of the white supremacist “Boogaloo” movement, or questions around what exactly is happening in the Seattle Autonomous Zone — all distracted from the reason for the protests.
When demonstrators began to organize in early June, one of the narratives debated online was the emphasis on “outside agitators” infiltrating the protests. Social media was full of posts that claimed undercover police officers, white nationalist militia members, or organized “antifa” members were responsible for instigating property damage or violence at the protests. Users posted photos of graffiti, claiming the wording was suspicious, and questioned whether it was intentionally placed to sow division.
But focusing on protest attendees who do not care about addressing police brutality distracted from the demands of organizers who do. The “outside agitators” narrative served a double purpose in attacking and undermining the protests. First, it defamed the protesters by attributing some of the violent behavior and destruction to their cause. Second, it distracted from that cause by turning attention away from the reasons behind it.
Misinformation researchers have coined the term “source hacking” to describe the process by which “media manipulators target journalists and other influential public figures to pick up falsehoods and unknowingly amplify them to the public.” The nature of the news cycle and the way news is reported mean many outlets could not avoid covering these narratives. Internal and external pressures, both financial and professional, would not allow it. And some of the accounts spreading these narratives, but by no means all, would have had malicious intent.
What these narratives demonstrated is that the story of the swirling misinformation surrounding the protests is not one with a central villain or organized network of insidious actors. Instead it is a story of how the modern information landscape, made up of news media, social media, and the people who consume media, is vulnerable to manipulation that influences the ways in which events are shaped and discussed. The “source hacking” that occurred in many of these instances was an organic side effect of the complex information landscape, rather than an intentional ploy. This is perhaps even more difficult for decision makers to navigate, and requires careful consideration on the part of news outlets and journalists to determine how to most effectively center the audience’s needs in what is reported.
In today’s lesson of, “The cops probably did this.”
Can you guys tell me what is wrong with this image and who could have done this? pic.twitter.com/BinKwnZYjm
— Mightykeef (@MightyKeef) June 1, 2020
After all of our social media monitoring during the protests, it is not possible to blame the “outside agitator” narrative on one bad actor. Our analysis is still ongoing, but as with any moment of shared online attention, bots and sock puppet accounts were very likely to have been pushing out content related to those narratives of protest infiltration. And journalistic mistakes were made: There are examples of outlets poorly framing or mis-contextualizing rumors, giving “outside actors” more legitimacy than the evidence indicated. But identifying insidious networks and media missteps is futile without a simultaneous examination of how our current information landscape is so easily influenced by these disturbances.
Social media platforms and their algorithms, editorial decision making, and determinations about what to post and share on an individual level all contribute to the visibility of certain narratives, and they work unintentionally in synchrony — often to undesirable results. For example, news outlets used valuable resources investigating a 75-year-old police brutality victim’s ties — or lack thereof — to “antifa,” thanks in large part to the promotion of this false rumor by President Donald Trump. This is just one example from the protests when news outlets spent many hours having to investigate and debunk claims from politicians, police authorities and video evidence from the streets. When newsrooms, particularly local newsrooms where staff are being laid off and furloughed, are focused on this type of work, they are less able to focus on stories that reflect the experiences and needs of their communities. And yet it is difficult to argue that topics exploding on social media are not newsworthy. The feedback loop between social media and traditional media is broken, and the protests exhibit how damaging that has become.
While the process of misinformation monitoring — locating a piece of false or misleading content and finding the originator of it — is still useful, and fact checkers play an important role in maintaining a healthy information ecosystem, what is becoming increasingly clear is that we must tackle the problem with misinformation from a macro perspective. It’s no longer enough to tackle each ‘atom’ of misinformation. Misleading narratives sometimes flourish in the modern information ecosystem because of a confluence of circumstances, not because of a well-executed plan. Mitigating misinformation whack-a-mole style will be ineffective if we do not address the infrastructural problems that define the way people receive, process, and share information with their own networks. For journalists, this means carefully examining the urge to report on and debunk specific pieces of misinformation. Any effort to do so should be balanced by robust solutions journalism, emphasizing social issues affected or manipulated by misinformation.
Historically, media criticism has focused on gatekeepers. Do legacy news outlets foster sources that understand the communities on which they report? Are they irresponsibly or unintentionally amplifying particular voices? These questions are still relevant, but the internet has dramatically widened the pool of who can disseminate information to the public, and media scholars must adjust their lens. As a result, it’s more difficult to understand the reasons behind poor story framing. An individual Twitter user sharing information about suspicious protest attendees or promoting the “outside agitators” narrative does not obfuscate the reason for the protest by itself. And yet, as part of a groundswell covered by prominent news outlets, their tweets likely contributed to that happening. The questions journalists need to ask are not only “who is responsible” or “how do we stop misleading narratives.” Now, perhaps more than ever, newsrooms need to think about how we ensure our audiences and journalists are prepared to navigate a media landscape so susceptible to gaming and manipulation.
Jacquelyn Mason, Diara J. Townes, Shaydanay Urbani, and Claire Wardle contributed to this report.