
George Floyd protests: Resources for online newsgathering, verification, protecting sources and colleagues

Social unrest is always hard to cover. These protests have been particularly difficult for Black journalists whose personal experiences with systemic racism are colliding with the degradation of the profession. First Draft strongly believes that Black lives matter, and that journalism is not a crime.

Social media has leveled the playing field of mass communication and, in some ways, helps make the protests sparked by the death of George Floyd possible. Mobile technology has shone a light on the treatment of Black people by police, allowed crimes to be documented, helped protesters to organize and shown the violent response from law enforcement to the world. Continued coverage of the protests is public-interest journalism at its most raw and many of those directly affected are broadcasting events in real time.

Finding newsworthy material online is relatively straightforward. Verifying it and then using it ethically is more difficult. This article will address all three elements, and offer tips for supporting yourself and your colleagues when covering a story.

Online newsgathering

Before we discuss the details of how to find newsworthy material online, it is important to recognize the impact and responsibilities of journalists and news organizations in using it.

Using eyewitness media from protests can identify both the individuals in the footage and specific social media accounts, potentially putting both in harm’s way. Embedding posts in a news article online is essentially opening a window for readers onto that person’s life. News organizations should apply the same guidelines around consent and protection of sources in the online world as they do offline, or risk endangering sources, damaging their own reputation, or attracting legal action.


Twitter

Twitter is still the social network most people think of when it comes to news, and Tweetdeck is still the best tool for finding information on Twitter quickly.

The free-to-use dashboard lets users set up columns of searches, like many individual newsfeeds, then filter the results by criteria such as level of engagement or media type.

To get the best results, fine-tune the search with a Boolean search string that looks for specific keywords. Brainstorm keywords witnesses or sources might use when tweeting about the event and combine them with the location. Add “me OR my OR I” if you are looking for people talking about their personal experience. Remember to use quote marks for phrases.

Then use AND and OR to separate out the different sections of the search. Use brackets to more clearly mark the different sections if necessary.

Here is an example you can copy and paste, filling in the relevant location keywords:

[location] AND (protest OR protesters OR protestors OR #blacklivesmatter OR police OR officer OR arrest* OR attack* OR charge OR shoot OR “pepper spray” OR mace OR curfew OR riot OR loot OR “rubber bullet*” OR “pepper ball” OR “tear gas” OR “far right” OR “far left” OR supremacist OR boogaloo OR antifa) AND (me OR my OR I OR we OR us OR our)
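If you build strings like this often, a few lines of code can assemble them for you. This is a minimal sketch, not a First Draft tool; the grouping logic simply mirrors the AND/OR structure described above.

```python
# Sketch: assemble a Tweetdeck-style Boolean search string from keyword
# lists, quoting multi-word phrases and joining groups with AND.
def build_query(location, keywords,
                personal_terms=("me", "my", "I", "we", "us", "our")):
    """Join keywords with OR, quote phrases, and AND the groups together."""
    def term(t):
        return f'"{t}"' if " " in t else t
    keyword_group = " OR ".join(term(k) for k in keywords)
    personal_group = " OR ".join(personal_terms)
    return f"{location} AND ({keyword_group}) AND ({personal_group})"

print(build_query("Minneapolis", ["protest", "tear gas", "curfew"]))
# Minneapolis AND (protest OR "tear gas" OR curfew) AND (me OR my OR I OR we OR us OR our)
```

Paste the output into a Tweetdeck search column, then tweak the keyword list until the results are useful.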

Read more: First Draft’s Essential Guide to Newsgathering and Monitoring on the Social Web

If the search doesn’t return results, it could be because there is a missing part of the search string, such as a solitary quote mark, or because it is too long. Tweak it until you get results and use filters to cut down on noise and irrelevant posts.

Twitter may be the easiest social media platform to search, but that doesn’t mean it has all the relevant material. The vast majority of social media users are not on Twitter, so searches here will only be a small window into the true level of online activity about a particular issue.

Facebook and Instagram

In July 2019, Facebook effectively took a hammer to many of the newsgathering tools and techniques journalists and researchers had built up around the network. Almost a year later, the search options are limited.

People looking for information on the world’s biggest social network can use its standard search functions, on the website or in the app, then filter the results. Open-source intelligence (OSINT) communities discuss editing the URLs of Facebook searches for more refined results, but the social network is constantly changing, making it difficult to keep up.

Instagram is a bit more versatile. Searches on the app or website on specific hashtags will return results sorted algorithmically, but the online tool Wopita, formerly Picbabun, will show most recent results first. It will also show users associated with certain keywords or phrases, and any related hashtags.

Alternatively, there’s CrowdTangle, a social media analytics tool owned by Facebook, which is free for journalists and researchers. Users can set up lists of public groups and pages to monitor and save searches of relevant keywords across Facebook, Instagram, and Reddit. 


TikTok

TikTok now has more than 800 million users worldwide, was the second-most downloaded app in 2019 and has been used widely by people involved in the protests.

You can search for hashtags, keywords, or accounts on the mobile app, and it will show you the results and other relevant hashtags, which can help in the search.

On desktop there is no such search function, so tools like OsintCombine’s TikTok search can help with finding users and hashtags. Remember that TikTok hashtags can contain emojis.

Also remember that with all kinds of online newsgathering, verifying newsworthy information is a crucial next step, and always treat contacts and sources with respect.


Verification

Whether it’s coronavirus or protests, when covering emotionally charged stories we need to be emotional skeptics. Disinformation spreads rapidly, particularly when it appeals to our emotions, and those same emotions can make us subconsciously more likely to believe certain claims or footage.

Whether you’re looking at a video, a tweet, or an image from a protest, First Draft recommends thinking in terms of the five basic checks anyone can perform. For more details, see First Draft’s “Essential Guide to Verifying Online Information.”

Is it the original?

The first step in verification is finding where a piece of content originated. A reverse image search is often the fastest way to ascertain whether an image or video has been used before in a different context.

The RevEye browser extension allows users to search multiple image databases for a match. InVID is an invaluable tool for analyzing and verifying videos.

If no other versions appear, see if the user will send you the original image or video by means other than social media. Exif data in the image file can corroborate what the source says about the circumstances in which it was taken — but most platforms strip that metadata on upload.
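Inspecting that metadata doesn’t require special software. Here is a rough sketch using the Pillow imaging library — an assumption on my part, not a tool First Draft recommends; install it with `pip install Pillow`. A file that has passed through a social platform will usually return nothing.

```python
# Sketch: read surviving Exif metadata from an image file with Pillow.
# Tag names (DateTime, Model, GPSInfo, etc.) come from Pillow's ExifTags
# table; an image stripped by a social platform yields an empty dict.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path_or_file):
    """Return Exif tags as a {tag name: value} dict, empty if none survive."""
    exif = Image.open(path_or_file).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
```

Fields such as the capture time, camera model and GPS coordinates, when present, can then be checked against the source’s account of when and where the content was taken.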

Who is the source?

Look at the social media profile of the source for examples that would support the person’s claims. Are there other pictures from the scene on the person’s account? Does it make sense that the person would post the type of content you wish to verify? Contact the person directly before making any claims about their identity. In the case of protests, someone may wish to remain anonymous for safety reasons. (See the section below on ethical reporting.)

When was the picture or video captured?

Even before speaking to the source, look to the weather and time of day to establish whether the content passes basic common-sense checks. Remember that the main social media platforms have different ways of determining timestamps. Read more about this in First Draft’s Essential Guide.

Where was it captured?

Now is the time to flex your observation skills. Look for recognizable landmarks, street names, or shop names to cross-reference the location with a mapping tool.

If details are difficult to make out, and with the knowledge that geotags can easily be manipulated, digging into the account that posted the content for clues can be the way forward. Did the person make note of their location in other posts? Does it make sense that the poster would have been present, or is there anything unusual about the account?

Why was it taken?

This can be the hardest question to answer, but it should still be top of mind throughout your verification. A reliable way to get a sense of why the content was captured is to speak directly with the source, but trying to glean more about their motivations from their account is possible. Perhaps they were simply a bystander, but you can still look at their account and other posts for potential motivations or affiliations that might influence their perspective on the scene.

Verification goes beyond pictures and videos. Avoid amplifying claims that protests are driven by “outside agitators” (antifa, George Soros, white supremacists, Russian trolls) without hard evidence. Verify claims by law enforcement as you would any other statements.

For more on online verification, the recently published “Verification Handbook For Disinformation And Media Manipulation,” edited by BuzzFeed News’ Craig Silverman with contributions from experts, is filled with techniques for investigating social media accounts, platforms and content.

First Draft has collected a number of newsgathering and verification tools together in a dashboard for easy access.

Use it ethically

Gathering and distributing eyewitness media raises a range of ethical concerns.

For example, amplifying accounts or individuals in reporting can raise that person’s profile and draw unwanted attention to them.

Before quoting from, linking to or embedding a post about the protests, consider the impact on the user and the people in a picture or video. Seek informed consent for the use of their content, directly from the person who posted it. Always ask whether and how a source would like to be credited, make sure they know the possible repercussions, and give the source the option not to be credited at all.

Being transparent about how you will use any material is also vital in building trust. 

Protests and surrounding events can be particularly volatile, so consider the emotional state and physical safety of contributors who are live streaming or posting. “When communicating with members of the public in dangerous situations, journalists should urge them to stay safe,” reads the Online News Association (ONA) Build Your Own Ethics Code project. “Sometimes it’s best to wait until after the danger has passed.” If a source is potentially in immediate danger, consider waiting to contact them.

Particular sensitivity is required when the source or subjects featured in footage are from vulnerable communities. WITNESS Media Lab asks journalists to protect the anonymity of sources at risk for exposing abuse. This goes beyond not publishing the source’s name — according to WITNESS, the journalist should also consider whether the footage contains metadata that pinpoints the source or their location, and whether the platform hosting the footage could reveal identifying details.

Finally, as with all reporting, providing context behind the events is crucial. Look beyond “riot porn”: social movements and unrest build over time and manifest themselves in waves of visibility created by political opportunity. Marches don’t organize themselves, and protests don’t appear out of nowhere. The networks of support underlying mass mobilization are communities we need to report on and engage with before and after they take to the streets.

Look after yourself and your colleagues

Monitoring around-the-clock events that involve scenes of violence and suffering can take a mental toll. Advice from First Draft’s 2017 guide to journalism and vicarious trauma is still relevant today: Pace yourself so that distressing work is alternated with more positive experiences and take regular breaks when you are required to work with distressing material.

To limit your exposure to such material to what is strictly necessary, watch and listen only to what you need to complete the task at hand. Muting violent videos or switching your monitor from color to grayscale are a few ways of limiting the possibility of vicarious trauma from reporting.

Checking in with colleagues regularly and exchanging updates about how your work is affecting you is one way of maintaining a sense of togetherness during stressful times, especially when many people are working from home.

“Social isolation is one of the risk factors for physiological distress, and social connection and collegial support is one of the most important resilience factors,” Bruce Shapiro, executive director of the Dart Center for Journalism and Trauma, previously told First Draft, citing research into journalists covering traumatic events.

For managers and newsroom leaders, actively looking after your employees’ mental well-being during this time is essential. Have regular check-ins with staff and stay alert for signs of vicarious trauma. Consider the volume and content of protest-related materials to which your team is exposed and ensure there is variety in their working week, with time periods where their duties do not involve interacting with distressing content. 

Crucially, this also means letting go of “hyperproductivity” and encouraging switching off outside work. Be realistic about what output you expect your reporters to produce, and focus their efforts on what is most achievable and valuable, Amnesty International’s Crisis Evidence Lab head Sam Dubberley says. Let your employees know they don’t need to be online around the clock, and don’t undermine that message with your own working practices and habits. In fact, being a role model and sharing your own self-care tactics can help staff look after their own mental well-being.

Social unrest is always hard to cover. These protests have been particularly difficult for Black journalists whose personal experiences with systemic racism are colliding with the degradation of the profession. As a newsroom, we recognize that Black lives matter, and that journalism is not a crime.

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.

The headline, sub-head and first paragraph of this article have been updated to clarify the immediate and long-standing reasons behind the protests.

How to talk to family and friends about that misleading WhatsApp message

Combatting misinformation about coronavirus can be thought of in a similar way to the test-and-trace plan to stop the spread of the virus itself. Test a claim for accuracy, trace it back to the source, and isolate it to stop it spreading further.

On messaging apps such as WhatsApp, however, tracing is almost impossible. Memes, posts, videos and audio clips are forwarded to contacts and chat groups with a tap or swipe and no easy way to see how the content might have travelled between communities.

This is why it is important to talk to our contacts, especially those closest to us, about misinformation they might have shared.

Read more: The 5 closed messaging apps every journalist should know about — and how to use them

India has a history of violent incidents sparked by rumors or accusations on WhatsApp, used by 400 million people across the country. Some 52 percent of Indian respondents surveyed for the 2019 Digital News Report said they get their news from WhatsApp, compared to just four percent in the United States.

First Draft spoke to Pratik Sinha, editor and co-founder of Indian fact-checking organization Alt News, on how best to talk to friends and family about something false they might have shared on the messaging app.


Misleading messages peddling false preventives and cures, like hot weather and ‘gaumutra’ (cow urine), are viral in India. Screenshots by author.

Don’t shame your loved ones

The last thing you want to do is turn this into an ugly confrontation. As Sinha says, “Very often we end up getting into a conflict situation and then we see that one side doesn’t want to listen to the other side.”

Replying to the message and calling it out in public could shame the person who shared the claim, potentially making them double down on their views. A private message focusing on their motivations rather than the content is more likely to work. Ask them who they received the message from, if they know where it originated, and why they decided to pass it on to you.

Show empathy

The pandemic has created levels of uncertainty and anxiety that are not easy to handle. It is in this climate that your loved ones are sharing things, not only because they want to spread a message, but also because they are afraid. We’re all afraid. Recognize this in your response and put yourself in their shoes.

In an interview with Canada’s CBC News, Claire Wardle, co-founder of First Draft and a recognized expert in misinformation, said that reacting emotionally and taking a tone of “you’re wrong and I’m right” does not work. If anything, it strengthens the other person’s views, and pushes the two of you further apart. Approaching this with a “we’re all in this together” attitude is advisable.

Be responsible 

As tempting as it is to ignore the message, this sends the wrong signal — that you accept false or misleading content in your inbox. We all have a responsibility to call out our contacts, especially those closest to us, for spreading a false message. In the age of the coronavirus, discouraging your friends from sending these untruths any further could be the difference between life and death, as untested “cures” and “remedies” are flooding social media.

Do your research, and check established fact-checking organizations like Alt News or WebQoof in India, or one of their fellow signatories of the Poynter Institute’s IFCN code of principles, to verify a message’s authenticity.

Platforms have made it extremely easy to forward messages — it only takes two taps on WhatsApp — which is why this behavior has become part of our digital culture. The platforms are introducing measures to prevent the spread of false information; but we can all play a role in calling it out.

As Wardle told First Draft, there was once a time when you would simply hope for the best when your drunk friend decided to drive home. Now, you take away his car keys and make sure he gets back safely with someone else.

Don’t expect immediate change

No one forms or changes their views overnight, so don’t expect your loved ones to do so. The process takes courteousness and patience. The more you politely challenge them, the more likely they are to think about the things they share and to question the source.

Speaking up clearly, using concise language and providing sources works, according to a 2016 study of misinformation during the 2015-16 Zika virus outbreak.


Misleading messages with an audio clip falsely attributed to Dr. Devi Shetty and a ‘coronavirus vaccine’ were popular on WhatsApp. Screenshots by author.

Misinformation is often spread with an agenda in mind. India is a strong example of this; misinformation has thrived in the country’s WhatsApp groups and messages for several years and could hold lessons on how it can be countered in other parts of the world.

Sinha points out that the situation in India is a product of its polarized politics, and the problem is that misinformation often comes from the top — government officials and political parties. This makes it all the more difficult to challenge, and thus requires more patience. “It is a long process [to change one’s mind],” he says.

Misinformation has a very real impact. False cures, unscientific preventative measures, and conspiracy theories abound on WhatsApp, and all have a tangible influence. From Indians drinking cow urine to Americans consuming cleaning liquid in hope of curing the virus, the misinformation has proven as dangerous and disruptive as the virus itself. Breaking the chain of misleading messages on WhatsApp and elsewhere is a small but crucial step everyone can take.

Find out more about closed messaging groups, as well as online ads and Facebook groups in our recent Essential Guide.

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.

How to use Facebook’s Ad Library to find coronavirus stories

Facebook’s Ad Library might not seem like an obvious place to look for stories about misinformation and influence during the pandemic. The social media giant banned misleading coronavirus ads early in the crisis, but its archive for promoted posts has been a consistent source for journalists since its launch in 2018.

Donald Trump’s re-election campaign spent more than $32 million on Facebook ads between January 2019 and June 2020, according to The Washington Post. And despite the restrictions, BuzzFeed News found Facebook was still accepting money for fraudulent ads selling face masks to unsuspecting users in May, although the platform later banned the advertiser.

Using the Facebook Ad Library to uncover these types of stories was the basis of a webinar led by First Draft investigative researcher Madelyn Webb, an extensive user of the library, and training and support manager Laura Garcia.

The basics

The library contains data on all ads run on Facebook, Instagram, Messenger and the Audience Network, which extends advertising campaigns from Facebook to separate platforms using the same targeting data. 

You can access the library through a Facebook Page of interest by selecting “Page Transparency” on the right-hand side, clicking “See More,” navigating down to “Ads From This Page” and clicking “Go to Ad Library.”

You can also search for ads on specific subjects, either by searching for Pages which use specific keywords in their name or changing the search type to “Issue, Electoral or Political” and looking for individual ads directly.

For example, when the coronavirus pandemic was emerging, a search for “coronavirus” in the library uncovered reams of ads from both reputable organizations and more dubious sources, Webb said. She recommended using one specific keyword for the most relevant results. 

In the US, Facebook has added a specific section for housing after journalists found the platform allowed advertisers to restrict housing-related ads based on the viewer’s race, color, national origin, and religion.

Users can also filter ads based on a number of different details, including geographic region, whether ads are active or inactive, or by how many unique users might have seen an ad (described as “Potential Reach”).

The name of the person or entity that paid for a political ad is displayed in a disclaimer at the bottom of each promoted post. One thing to look out for is when the name of the Page and the one on the disclaimer don’t match, Webb said. An example might be a local political candidate running ads across several Pages.

Users can also search for ads using certain keywords by going to a Page of interest, clicking “Issue, Electoral or Political” and entering words of interest in the search box which appears near the filters.

This function, however, doesn’t always return results, Webb said. “I suspect they’ve changed when it pops up so be warned this might be out of date in another year … Facebook changes it a lot.”

Behind the data

To find more information on who might be behind a particular ad, click “See Ad Details” from the search page. 

“Information from the advertiser” reveals the business address, website, phone number and other details.

Further down the page lies demographic data on who the advertiser is trying to reach based on age, gender, and location.

You can also see the amount spent, an ad’s potential reach (how many people are in the targeted audience for a certain ad) and impressions (the total number of times the ad was shown) in numerical ranges. Bear in mind that these numbers are estimates, Webb said.

The “About the Page” section contains additional details on total spending and other ads from the Page — useful for getting a broader look at a whole campaign or other activity.

Advanced searching

Facebook releases a daily report on political ads, detailing what it describes as top searches, spending totals and ad numbers over the past two years, as well as a spending tracker, which reveals the highest daily spenders and spikes in ad spending.

Only ads that have been labelled political, electoral or related to specific issues are included in the report. “It’s very hard to figure out how exactly Facebook goes about delineating that,” explained Webb, who said she assumed it is based on keywords or information from the advertiser.

Pages spending the most or posting the highest volume of ads could be the source for a story, particularly during or ahead of an election.

The report also allows users to search for specific advertisers and find a breakdown of spending by advertiser or region.

For journalists hoping to go deeper into the Facebook Ad data, you can download the daily report as a CSV file by hitting “Download Report” at the bottom of the page. 
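Once downloaded, a report can be combed through with a few lines of code. The sketch below uses Python’s standard csv module; the column names (“Page Name”, “Amount Spent (USD)”) are illustrative assumptions, so check the header row of the file you actually download before relying on them.

```python
# Sketch: rank advertisers in a downloaded Ad Library report by spend.
# Column names below are assumptions, not Facebook's guaranteed schema;
# small spends may be reported as a bound such as "≤100", which this
# strips before converting to a number.
import csv

def top_spenders(report_file, n=10):
    """Return the n (page name, spend) pairs with the highest spend."""
    totals = []
    for row in csv.DictReader(report_file):
        amount = row["Amount Spent (USD)"].replace(",", "").lstrip("<≤")
        totals.append((row["Page Name"], float(amount or 0)))
    return sorted(totals, key=lambda pair: pair[1], reverse=True)[:n]
```

Pages that dominate such a ranking, or show a sudden spike, are exactly the leads described above.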

For more comprehensive data analysis, you can use the Facebook Ad Library API.
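A query against that API can be composed with nothing more than the standard library. In this sketch the `ads_archive` endpoint and parameter names follow Facebook’s published Graph API documentation, but the API version and the field list are assumptions that may have changed, and you will need your own access token:

```python
# Sketch: build an Ad Library API query URL. The Graph API version
# below is an assumption; check Facebook's current documentation.
from urllib.parse import urlencode

AD_ARCHIVE = "https://graph.facebook.com/v19.0/ads_archive"

def ads_archive_url(access_token, search_terms, countries=("US",)):
    """Compose a GET URL for the ads_archive endpoint."""
    params = {
        "access_token": access_token,
        "search_terms": search_terms,
        "ad_reached_countries": "[" + ",".join(f"'{c}'" for c in countries) + "]",
        "fields": "page_name,spend,impressions,ad_delivery_start_time",
    }
    return AD_ARCHIVE + "?" + urlencode(params)
```

Fetch the URL with any HTTP client and follow the `paging.next` link in the JSON response to page through results.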

The archive can be limited

There are challenges to using the Ad Library, however. 

Advertisers often specify email addresses that do not contain identifiable features or reveal little about their origin, such as that of a shared workspace. Digging deeper on any other information contained in a disclaimer can prove vital to finding the true source, Webb said.

The platform also doesn’t always provide details or consistency on why some ads are removed, Webb discovered during her research.

In one case, three identical ads were promoted by three different Pages, each with the same disclaimer. Facebook removed two for breaking advertising policies but the third remained active.

Researchers and journalists have also found the Ad Library, Ad Library Report and the Ad Library API to be unreliable. In one example, some 74,000 political advertisements disappeared from the archive two days before the UK’s 2019 election. There have also been issues with delays in delivering data, design limitations which prevent users from retrieving a sufficient amount of data, and other bugs which can affect access to the library.

While there are caveats to what information is available through the Facebook Ad Library and related transparency tools, Webb said, they can prove to be an invaluable starting point for many investigations.

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.

Covering coronavirus: Privacy and security tips for journalists

As the coronavirus outbreak brings with it a surge in online misinformation and conspiracy theories, and with a number of elections just around the corner, there has never been a better time to brush up on digital investigative skills. But whether you’re looking into certain closed groups or contacting sensitive sources, it’s critical to implement privacy and security measures to stay safe online.

First Draft’s co-founder and US Director, Claire Wardle, spoke to Christopher Dufour on May 7 for a webinar about privacy and security tips when reporting on coronavirus. Dufour is a noted national security consultant, disinformation researcher, and speaker. 

As the digital landscape is constantly changing, the aim is to provide best practices and good habits that can inform decisions about your privacy and security online. Read on for the key takeaways from the session, or watch the full webinar on our YouTube channel.

‘Hardening’ your browser

Your browser is a primary window onto the online world, so it’s crucial to think about how it is set up. As Dufour highlighted, “most identity security measures are invalidated by bad browser habits.” While Dufour’s session did not focus on specific tools or software, he did emphasize that Internet Explorer was an “absolute no-go.” (It’s also no longer recommended as a default browser by Microsoft.)

Taking the time to go through the privacy and security settings on your browser is the first step to increasing online security. For a guide to “hardening” a browser, Dufour went through the setup of his own installation of Firefox during the webinar, which you can follow here. Similar measures also apply to other browsers, including Chrome and Safari.

Also central to any journalistic investigation is a search engine. On this, Dufour explained: 

“Not all search engines are created equal — almost all of them exist for free because they log your search habits. 

“They want to understand the keywords you’re using so that they can serve relevant ads or sell that behavior someplace.”

As such, you could opt for search engines like DuckDuckGo, which bills itself as “privacy focused.” But even then, Dufour said he would not recommend a particular search engine. 

“Everyone should use what they think is great, and have that baked into your browser.”

Your data is the target 

The main way journalists become victims of online attacks is through data leaks, so it’s important to be aware of your digital trail.

“We’re living in an unprecedented time where the sharing of personal information and data has moved so quickly that it is really difficult for us to wrap our minds around all the places where we have leaked information,” Dufour said.

As such, the best place to start is by trying to understand the data you leave in your wake.

A pro tip from Dufour is to “map your footprint.” He encourages journalists to write down all their email addresses and phone numbers, along with every place those may have been used for any online activity: apps, social media, loyalty programs and so on.

From there, log into each place and perform a security audit. Is two-factor authentication switched on? Is there old data that can be deleted? Are all the settings restricted? 

If you’re no longer using the service, delete the account, he said.

Screenshot from presentation

Defining your digital identity 

Whether you’re using LinkedIn to approach expert sources for your story, Facebook to keep up with friends, or Twitter to connect with other journalists, you’re not necessarily going to be the same person on different platforms.

“As journalists, we have a responsibility to have a public-facing profile,” Dufour said. “We need to be associated with our journalism organization, we also need the public to trust us and reach out to us in a public way.”

He recommended going back to basics with this course from the Berkman Klein Center for Internet & Society at Harvard University. Although aimed at high school students, it contains a number of steps to help individuals think critically about their online presence, as well as practical tips, such as help changing privacy settings on social media accounts.

“Think about ways to rewrite your identity in a way that’s helpful for you in all the ways you want to represent yourself online,” Dufour said.


Weighing the trade-offs

A recurrent theme was the idea of “trade-offs.” Having all the privacy and security settings activated on a browser might, for instance, mean that you cannot access certain websites and pages. Weighing up these trade-offs is an integral part of online reporting.

Dufour offered some words of advice. First, if you work in an organization that has such a team, “work with your IT people, not against them.”

“If you work for a company that has resources, dedicated IT security managers, they probably already have policies involved,” he explained. 

“Get to know what those policies are, so that they can inform, in some way, your personal risk tolerance for how you want to use the company’s devices, accounts that they provide and those types of things.”

A frequent trade-off is security and privacy versus convenience. A rule of thumb: “If it’s less convenient, it’s usually more secure,” he said.

“If it’s more convenient, then usually someone is making it more convenient to pull information out.”

People do, however, need some element of convenience, Dufour acknowledged. This is why best practices and good habits should be at the heart of online reporting.

Best practices

Dufour offered some “dos and don’ts” on browsing habits as well as some digital security tips. 

(Screenshot from the presentation)

“All this takes is a little bit of time,” said Dufour. 

“One of the things that I had to learn myself in doing this is: ‘Hey, if I stop looking at pictures of laughing cats all day, and just sat down and got really serious about doing this I would build better habits so that I can return to looking at laughing cats — just in a more secure way.’”

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.

Lessons on covering coronavirus misinformation from the fallout of ‘Plandemic’

Few pieces of misinformation have gone as noticeably viral during the coronavirus crisis as “Plandemic”, the 26-minute “documentary” film in which discredited scientist Judy Mikovits makes repeated false claims about the virus and a potential vaccine.

The film, initially hosted on YouTube, quickly took off in online communities, simmering in Facebook groups and on pages dedicated to anti-vaccine messaging and other conspiracy theories. Within a week, it had been viewed more than eight million times and crossed into the mainstream. 

Now that the dust has settled, journalists and researchers are reflecting on the video and its spread as an illustration of how coronavirus misinformation and conspiracy theories travel online. 

The video, and what we can learn from it, was one of the key topics at a panel discussion held by First Draft to mark the release of the latest edition of the Verification Handbook, a guide for journalists reporting on disinformation and media manipulation edited by Craig Silverman, media editor at BuzzFeed News.

Present at the talk alongside Silverman were First Draft’s co-founder and US director Dr Claire Wardle; Joan Donovan, research director of the Harvard Kennedy School’s Shorenstein Center; Donie O’Sullivan, a reporter at CNN; and Brandy Zadrozny, a reporter with NBC News.

Content lives beyond platform removals

Social platforms have been taking action to combat the spread of misinformation during the pandemic. Facebook, YouTube, Twitter, Google, TikTok, Pinterest and more have all added labels and warnings to potentially harmful posts, as well as linking to high-quality news sources and public health authorities when users search for coronavirus-related content. 

But “Plandemic” provided an “interesting” example of how manipulators adapt to platform changes, said Donovan, who researches changes to the online information ecosystem and how bad actors respond.

“You had content pulled down from platforms after a few hours, and the platform companies are like, ‘See how fast we did this, this is amazing!’” she explained. “But you see, instantly, hundreds of copies of it going up all over the web.”

Despite efforts to remove the misleading video, it continued to be reuploaded by users across various platforms.

“On the website of the ‘Plandemic’ movie from the get-go they said to download [the film]… spread it far and wide,” explained NBC News reporter Brandy Zadrozny. While YouTube and Facebook grappled to remove the clip, the “cat was out of the bag” and had been seen millions of times already, she added. 

Zadrozny also expressed concern over content removals encouraging conspiracy theorists to double down on their efforts.

“[‘Plandemic’] should be down because it’s dangerous,” she said. “But at the same time that take down just reinforces their status as a ‘whistleblower who the man is out to silence’.”

You won’t always be able to attribute suspicious activity

While fingers are often pointed at Russia and China, finding a simple explanation behind a disinformation network is not always easy or even possible.

“With Plandemic, questions editors were asking were ‘Who’s pushing it? Who’s behind it? Is it Russia? China?’” said O’Sullivan. “Sometimes we have to stand back and say nobody has to necessarily be behind it all the time.”

He added: “When it comes to domestic actors, people are perfectly capable of creating and sharing and making misinformation go viral themselves without having a big evil troll factory.”

NBC’s Zadrozny, whose powerful reporting on misinformation spans women’s health groups promoting dangerous health advice and celebrities sharing conspiracy theory videos, agreed. 

“We are not wanting for domestic disinformation agents, actors or campaigns,” she said. “They are everywhere.”

Communities are increasingly coming together

Supporters of a variety of conspiracy theories have united in their scepticism of mainstream narratives during the coronavirus crisis. The “Plandemic” video tapped into familiar conspiracy theories that claim a shadowy group of elites is hoping to gain power, this time through the virus.

“People are jumping on to narratives if it supports their worldviews or cause,” explained First Draft’s Wardle.

Zadrozny agreed, saying at present it feels as though there’s a “monster” made up of trolls, anti-vaxxers, far-right political extremists and other conspiracy-oriented communities around the pandemic. “They’re all here, it’s like they’ve all come to play.” 

Detailing how conspiracy theories often focus on a far-away enemy, a vague “they” who don’t want people to know what’s “really” going on, Donovan said: “Whichever faction you’re aligned with, ‘they’ is a different entity.”

Practical lessons

While the Verification Handbook is a treasure trove of verification tools and techniques, traditional reporting skills are just as important as open-source reporting methods, said CNN reporter Donie O’Sullivan. For example, he said he didn’t find out who was behind a fake Black Lives Matter page in 2017 until he’d picked up the phone and spoken to sources first-hand.

When it comes to finding leads, O’Sullivan said he spends a lot of time on social platforms. “My favourite way of getting stories is getting in the weeds myself, being able to look through accounts and Facebook groups.”

Predicting what disinformation agents and conspiracy theorists will do next is also advantageous. According to Donovan, anti-vaccine messages from “Plandemic” will persist in different forms for some time to come.

“We’re going to see many confusing things about vaccines over the next few months and part of it is going to be wedded to this emerging conspiracist narrative around a ‘Plandemic’.”


Ethical questions for covering coronavirus online: Key takeaways for journalists

The coronavirus outbreak has brought with it a number of new ethical challenges for newsrooms and reporters.

Social media users are sharing their experiences with the virus, posts which can have a real and damaging impact on their lives. There has been an uptick in conspiracy theories and misinformation as the world attempts to follow the ever-evolving science around the virus. And much of the misleading information is flourishing in private spaces that are difficult for researchers and reporters to monitor and even harder to trace.

First Draft’s co-founder and US director Claire Wardle and ethics and standards editor Victoria Kwan discussed the tricky ethical terrain of covering coronavirus in a recent webinar session on May 14. Read on for key takeaways from the session, or watch the full webinar below or on our YouTube channel.

Traditional journalistic ethics still apply online

Facebook’s pivot to privacy last year saw the platform recognise a growing preference from users to communicate one-to-one or with limited groups of friends online.

“We’ve seen people move into smaller spaces with people that they trust, people that they know, people that believe similar things to them,” said Claire Wardle.

But this has had important implications for those covering and researching misinformation during the coronavirus outbreak. While it’s crucial to be aware of and publicly counter rumours and hoaxes shared in private spaces, there are a number of ethical questions reporters and newsrooms should set out to answer before entering them.

In many ways, these considerations aren’t so different to those commonly found in traditional journalism: balancing privacy versus public interest, as well as security versus transparency.

“If we’re thinking about reporting from these spaces, what does it mean if ultimately these people believe that they are in a space that is private?” asked Dr Wardle.

“If you can join that group, you can get access to that information and as ever with journalism, it’s that tension with privacy versus public interest.”

Sometimes the public interest case is clear: gaining knowledge from a group discussing future anti-lockdown demonstrations could be of vital importance to the health and safety of the public. In contrast, a closed community where individuals are sharing their experiences of being affected by Covid-19 may not pass the same public interest test.

This also brings up questions of security versus transparency. “For certain types of journalistic investigations, you have to consider this as if you were going undercover,” said Wardle.

“Of course, wherever possible, be transparent but also think about your own security.”

And the burden of considering these questions should not rest solely on the individual reporter — it necessitates a top-down approach from newsrooms, according to Dr Wardle:

“The things we’re talking about today really need to be enshrined as part of a newsroom’s editorial guidelines.”

Adopt a people-first approach

The coronavirus outbreak is a news story with a profound impact on people’s lives. For this reason it is crucial that journalists covering aspects of the crisis adopt a people-first approach, even when venturing online in their reporting.

For journalists seeking to use online testimony in their reporting, First Draft’s ethics and standards editor Victoria Kwan highlighted the importance of considering the intent of the person posting — especially when they’re not a public figure.

“Sometimes people don’t realise that their posts are public, and sometimes they do but they haven’t thought through the potential consequences,” she said.

Giving the example of a nurse who published a Facebook post about lack of PPE and tagged a number of media outlets in the caption, Kwan illustrated that there are instances where the poster clearly wishes to get publicity. But, she added, even in these examples it’s vital to talk through the potential impacts of your coverage with the individual, outlining the benefits and drawbacks of receiving media attention.

The theme of adopting a people-first approach is also an important aspect of covering conspiracy theories, which have experienced a considerable boost since the beginning of the crisis according to researchers.

For this, Dr Wardle pointed to the human psychology behind such theories.

“When people feel powerless and vulnerable, it makes them much more susceptible to conspiracy theories,” she said.

Newsrooms and journalists should avoid using derogatory language when referring to conspiracy theories and the people who share them online. Prioritising empathetic reporting is key, she said. Reporters should seek to explain why certain theories take root, rather than simply debunking them.

“As societies, we need to have more conversations around why conspiracies are taking off, what is it in society that makes them so appealing, and to talk about the psychological mechanics of conspiracies rather than: ‘these people are crazy, that’s stupid, let’s debunk the stupidity,’” said Dr Wardle.

Screenshot from webinar slide — examples of explainers rather than debunks

Be aware of the ‘tipping point’

From false links between 5G and the coronavirus to conspiracy theories around a potential vaccine, a huge ethical consideration for journalists is to avoid amplifying dangerous or misleading information.

Wardle cited the recent case of the viral “Plandemic” video which contained a number of falsehoods. While initially some newsrooms avoided reporting on it for fears of “giving it oxygen”, the video eventually gained so much traction that reporters realised it needed mainstream coverage.

It’s not always straightforward to decide when to cover mis- or disinformation. For this, Dr Wardle has coined the idea of the “tipping point”. On a case-by-case basis, there are a number of questions to ask before covering misleading content.

The idea of the tipping point is also key when investigating closed groups or messaging apps, where it may be difficult — sometimes impossible — to retrieve data about the spread of a piece of misinformation. How can you quantify the spread of a rumour when it leaves no trace?

Dr Wardle advised looking for examples of whether the content is being shared to other platforms as a way of determining whether or not it is being circulated widely. 

Finally, she warned that mis- and disinformation is sometimes deliberately created by certain actors with the aim of attracting coverage from mainstream news outlets.

“Often things are seeded within closed messaging apps with the hope that it will then grow, move into social media and then be picked up,” said Dr Wardle.


When it comes to scientific information, WhatsApp users in Argentina are not fools

This study was carried out by the author, María Celeste Wagner (University of Pennsylvania, USA), Eugenia Mitchelstein (Universidad de San Andrés, Argentina), and Pablo J. Boczkowski (Northwestern University, USA), acting as external consultant.

Do people believe the information they receive through messaging apps? Does the identity of the sender influence the credibility of that information? When do people feel more inclined to share content they have received? In the midst of the ongoing pandemic, when rampant misinformation has exploded on social media and closed messaging services, these questions are more pertinent than ever.

Evidence from a December 2019 online experiment in Argentina, in which respondents read news stories about vaccines and climate change and then assessed their quality, could contribute to answering some of these questions.

Contrary to narratives underscoring the persuasive power of some misinformation, evidence from our experiment suggests that Argentine audiences — at least — are good at differentiating facts from falsehoods and do not blindly accept information from personal contacts.

In other words, we observed that many people were highly critical when reading news about various scientific topics. This could be partly related to a generalized skepticism in Argentina and distrust of information that is thought to be sent by personal contacts.

Top findings:

  • Contrary to widely held beliefs, participants were more distrustful of news they believed had been sent by a personal contact than of news where they lacked information about the sender.
  • Participants were more distrustful of stories about vaccines and climate change which turned out to be false than they were of factually true stories on the same subject.
  • It is unclear how effective fact-checking labels are in correcting for misinformation.

The study

In recent years, there has been growing scholarly and journalistic attention given to issues relating to mis- and disinformation. A large part of this work has focused on analyzing how false content flows online, mostly during political or electoral processes. It has also centered on how political disinformation could change opinions and, eventually, voting preferences. Although valuable, most of this work has focused primarily on the Global North, and on two platforms in particular — Facebook and Twitter.

We sought to counter this dominant trend by examining how people from a country in the Global South (in this case Argentina) consume misinformation on health and the environment on messaging apps.

To do this we conducted an online survey with 1,066 participants in December 2019. The survey was representative of the Argentine population in terms of socioeconomic status, age and gender.

During the online survey, participants were shown news stories about vaccines and global warming and asked to imagine a situation in which they had received these news stories on WhatsApp either from a relative, a colleague or a friend. Half of our participants read four real news stories about the effectiveness of flu and measles vaccines, the role of human activity in climate change and the role of carbon dioxide in global warming.

Factually correct stories ran with the following headlines:

  1. Vacunarse anualmente previene la gripe y sus complicaciones (Annual vaccination prevents the flu and its complications)
  2. Si bien el dióxido de carbono compone una parte pequeña de la atmósfera es una de las causas del calentamiento global (Although carbon dioxide makes up a small part of the atmosphere, it is one of the causes of global warming)
  3. El calentamiento global de los últimos 70 años se debe en gran medida a la actividad humana (Global warming over the past 70 years is largely due to human activity)
  4. El continente americano ya no es más una región libre de sarampión (The American continent is no longer a measles-free region)

The other half of the participants read false news that had been circulating online in Argentina about these same topics. In order to make everything as comparable as possible, we only changed minimal aspects of the false and real news stories.

False stories ran with the following headlines:

  1. Vacunarse anualmente no previene la gripe y sus complicaciones (Annual vaccination does not prevent the flu and its complications)
  2. El dióxido de carbono compone una parte muy pequeña de la atmósfera y por ende no puede ser una de las causas del calentamiento global (Carbon dioxide makes up a very small part of the atmosphere and therefore cannot be one of the causes of global warming)
  3. El calentamiento global de los últimos 70 años se debe en gran medida a causas naturales (Global warming of the past 70 years is largely due to natural causes)
  4. El continente americano sigue siendo una región libre de sarampión (The American continent remains a measles-free region)

The assignment of these two factors — the person who was sharing the news, and the veracity of the news — was random. All participants judged the veracity and credibility of each news story they had read and expressed how willing they would be to share each news item.

At the end of the study, those participants exposed to false stories were notified that they had been shown false content and the factually correct stories were then shared with them. 

A forthcoming study from academics in Argentina reveals some interesting findings on how people consume information on WhatsApp. Source: Pixabay

Credibility, trust, and sharing

We found that people who had read false news stories about vaccines and global warming before being told they were false assessed them as less credible and less truthful than those who read real news stories about those same topics.

In other words, participants distrusted stories with misinformation about vaccines and climate change more than those comparable stories with true information. They also expressed less willingness to share stories with misinformation in comparison to the news stories that were true. These findings were especially positive given the current concern around the spread of Covid-related falsehoods and rumors.

Our study also showed that those reading stories containing misinformation were significantly less likely to believe that others would trust those stories than those reading accurate news. This suggests that participants extended their critical stance to others. In addition, people showed a general level of distrust towards their contacts. For example, having information about the person who forwarded a story significantly lowered both the credibility of the message and the receiver’s willingness to share it, compared to a control group that lacked information about the source. In other words, people were more distrustful of news they believed had been sent by a personal contact than of news where they lacked information about the sender.

Credibility of news and willingness to share news by experimental condition (true, false and control news). Note: Both credibility and willingness to share were measured on a scale from 1 to 7. The control group read the true news, but instead of being told that someone was messaging them the information, they were told to imagine they were finding this news in a newspaper. 95% CI. We used ANOVA tests for comparisons of means. For comparisons of means across different levels of one factor, Tukey tests were used. All reported findings are statistically significant at a p-value<0.05.

We found no significant gender differences. Receiving news from either a female or male relative, friend or colleague did not result in any significant differences among our measured outcomes of credibility and sharing. Furthermore, telling participants any information about a specific source that sent them the message — a relative, a colleague or a friend — increased their later perceptions that other friends, colleagues or relatives would get upset if the participant were to forward them the news. Overall this suggests that participants do not seem to blindly trust their personal contacts. On the contrary, they are distrustful of what people might send them regardless of the relation.
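The comparisons described in the chart captions rely on one-way ANOVA across experimental conditions. As a rough illustration of that kind of test, here is a minimal sketch on simulated 1-to-7 credibility ratings; the numbers below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated 1-7 credibility ratings for three conditions.
# Means and sample sizes are made up for illustration only.
rng = np.random.default_rng(0)
true_news = rng.normal(4.8, 1.0, 300).clip(1, 7)
false_news = rng.normal(3.9, 1.0, 300).clip(1, 7)
control = rng.normal(5.0, 1.0, 300).clip(1, 7)

# One-way ANOVA: do the mean ratings differ across conditions?
f_stat, p_value = f_oneway(true_news, false_news, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

With differences of this size, the test returns a very small p-value. Pairwise follow-up comparisons of the kind the study describes (Tukey tests) are available in statsmodels as `pairwise_tukeyhsd`.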

Fact checking

We also tested the effect of fact-checking labels on evaluating the accuracy of information. After reading false content on global warming and vaccines, half of the participants who had been exposed to and had evaluated these false stories were told that fact checkers had determined the content to be false. At the end of the survey, after a series of distracting measures, all participants were asked to assess the veracity of a number of statements, some of which related to the vaccine and global warming topics they had read about earlier.

We found evidence suggesting that reading false news has a negative effect on participants’ ability to accurately assess information in those stories. Those who read factually correct news about measles vaccines and the role of carbon dioxide in global warming assessed the validity of statements on those topics more accurately than those who had been exposed to false stories. We did not find significant effects for the other two stories: those related to the flu vaccine and the human role in global warming.

Accuracy assessment of statements related to the four scientific topics covered in the news by experimental condition (control, fact-checked, not fact-checked news). Note: All accuracy assessments were measured on a scale from 1 to 7. Higher values mean more accuracy. Accuracy assessments of information about these four topics were measured at the end of the study, after participants read true or false (with fact-checking labels or not) information. The control group read true news about these topics, but instead of being told that someone was messaging them the information, they were told to imagine they were finding this news in a newspaper. Both the “fact-check” and “no fact-check” groups read false news. 95% CI. We used ANOVA tests for comparisons of means. For comparisons of means across different levels of one factor, Tukey tests were used. All reported findings are statistically significant at a p-value<0.05.

We also found discouraging results around the effectiveness of fact-checking labels. A fact-checking label is effective if someone who reads a false story, and is then told that fact checkers deemed it false, later adjusts their position by remembering that the information they read was inaccurate.

However, we found this to be the case only for the news story about the impact of humans on global warming. Those participants who read a fact-checking label were later able to more accurately assess information related to that issue than those people who had been exposed to a false story without the fact-checking label.

When it came to stories about measles vaccines, flu vaccines and carbon dioxide, we found no significant difference in later accuracy assessments between participants who read a story labeled false and those who read a story without a label. These findings might be due to features specific to our study design. It could also be that fact-checking labels which do not pair the correction with accurate information are less effective, or that receiving a fact-checking label confuses participants in a way that makes it harder to cognitively correct for the misinformation later on.


In summary, the Argentine public rated false content on vaccines and the environment as significantly less credible, and was less likely to share false information than true information. Including data on the identity of the sender — friend, family member, colleague — decreased both the credibility and willingness to share the stories.

Despite narratives about the destabilizing power of misinformation on social media during democratic political processes and global events such as the current Covid-19 pandemic, our findings contribute to the debate by showing that, in a Global South context, Argentines are good at identifying false information and are wary of what their personal contacts share with them on messaging apps, even when false news seems to have some negative impact on subsequent opinion formation.

This research was independently designed by the research team, but was supported by a grant from WhatsApp.

How to analyze Facebook data for misinformation trends and narratives

As coronavirus misinformation continues to spread, knowing some basic methods for collecting and analyzing data is essential for journalists and researchers who want to dive under the surface of online information patterns.

There is a mountain of data that can help us examine topics such as the spread of 5G conspiracy theories or where false narratives around Covid-19 cures came from. It can help us analyze cross-border narratives and identify which online communities most frequently discuss certain issues.

While Twitter’s public data is accessible through its Application Programming Interface (API), it can be much more complicated for researchers to access platforms such as Facebook and Instagram.

Facebook-owned platform CrowdTangle is the most easily accessible tool to handle three of the most important social networks — Facebook, Instagram, and Reddit — and it is free for journalists and researchers.

What is CrowdTangle?

CrowdTangle is an enormous archive of social media data that allows us to search through public Instagram, Facebook and Reddit posts or organise public accounts and communities into lists. 

Here are a few things we can do easily with CrowdTangle:

  • Monitor what is trending around a certain topic.
  • Search for combinations of words and phrases to discover trends and patterns.
  • Track the activity of public accounts and communities.
  • See which accounts are posting most and who is getting the most interactions around certain issues.

Read our Newsgathering and Monitoring guide to learn more on how to monitor disinformation on CrowdTangle.

There are some limitations, however. The data is not exhaustive and you can only access content that is already publicly available, meaning we can’t see posts published within private groups or by private profiles.

We can also only track the activity of those accounts which are already included in the CrowdTangle database. CrowdTangle have made great efforts to ensure the database is comprehensive but it is not perfect. Nevertheless, it’s the best option we have to access historical data on these platforms, measure trends and find patterns.

It is also useful in helping us decide whether a rumor has crossed the tipping point and entered into the public eye, making it a prime candidate for reporting to stem the spread of misinformation, rather than amplifying something few have seen.

Download the data

If you don’t already have access to CrowdTangle, you can request it through its website. It is free for journalists and researchers.

There are a few different options available for accessing public posts in a data-friendly format, mainly through ‘Historical Data’ (under ‘General Settings’, on the top-right corner) or through the ‘Saved Searches’ feature. We’ll illustrate the latter as it allows us to tailor the search according to a broader range of keywords or phrases.

Saved searches and historical data on CrowdTangle.

On the CrowdTangle dashboard page, go to ‘Saved Searches’, click on ‘New Search’ and then on ‘Edit Search’ to input more advanced queries. Unfortunately, we can’t search on both Facebook Pages and Groups at the same time, so we have to choose where we want to search or repeat the same steps twice.

We can use a list of keywords and the CrowdTangle version of Boolean expressions.

In the following example, we used the ‘Saved Searches’ feature to search trends around the false claim that 5G causes Covid-19 when it started trending on social media.

The ‘Saved Searches’ feature used to search for trends around the false claim that 5G causes Covid-19.

In the first search query, below, we are asking CrowdTangle to search its database of Facebook Pages for posts containing one of a number of variations on “coronavirus” and “5G”. We can input here any word we think people might use instead of “Coronavirus”, including misspellings. Remember it is important to think about the language that people use on social media.

We can then add similar queries an unlimited number of times.

Search queries in CrowdTangle aren’t always straightforward but are powerful.

Doing this as a normal Boolean search, of the kind we might use on Twitter, would look something like this:

(coronavirus OR corona OR caronavirus OR carona OR covid OR covid-19) AND (5g OR fivegee OR “5 gee” OR “five G”)

While CrowdTangle’s new Search feature does accept Boolean queries, the dashboard and saved searches like this do not.

So for CrowdTangle saved searches you have to repeat the query for each variation:

(coronavirus OR corona OR caronavirus OR carona OR covid OR covid-19) AND 5g

(coronavirus OR corona OR caronavirus OR carona OR covid OR covid-19) AND “5 gee”

And so on. With CrowdTangle you can just replace all the “OR” operators with a comma.
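Because this repetition is mechanical, it can be scripted. A minimal Python sketch that expands the two term groups into one query per variant, following the comma-for-“OR” convention described above (the helper name is our own):

```python
# Terms taken from the example query above.
corona_terms = ["coronavirus", "corona", "caronavirus", "carona", "covid", "covid-19"]
fiveg_terms = ["5g", "fivegee", "5 gee", "five G"]

def expand_for_crowdtangle(group_a, group_b):
    """Turn (A1 OR A2 ...) AND (B1 OR B2 ...) into one query per
    term in the second group, with commas standing in for OR."""
    queries = []
    for b in group_b:
        # Quote multi-word phrases, as in the original example.
        term = f'"{b}"' if " " in b else b
        queries.append(f'({", ".join(group_a)}) AND {term}')
    return queries

queries = expand_for_crowdtangle(corona_terms, fiveg_terms)
for q in queries:
    print(q)
```

Each generated line can then be pasted into its own saved-search query in the dashboard.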

After inputting the search queries, it’s time to set filters, like language, timeframe, media type and more.

In our 5G search, for example, we set a timeframe from Jan 1 up to April 16.

By default CrowdTangle will sort the results by ‘Overperforming’, or how well a particular post is doing compared to the average for that page or group. To download the whole dataset we have to change ‘Overperforming’ to ‘Total Interactions’, which would retrieve every public post sorted by total likes, comments, shares, and other reactions.

Click the download button (like a cloud with a down arrow, on the right) to save the data. After a few minutes, depending on the amount of data we are requesting, we will receive an email with a CSV file.

Repeat the steps to search within Facebook Groups.

Making patterns visible

Before moving forward to explore and analyze the data collected, it’s helpful to see what a public post means in terms of the available data. This annotated screenshot shows which details are accessible.

Information contained in a Facebook post.

And below is what the CSV file looks like once it is downloaded, based on our search, with columns for different fields like the name of the Page which published the post, the time and date the post was published, and more. We can also easily see the text in the post and the link to the original on the relevant platform.

A CSV downloaded on CrowdTangle.

Data journalism is often about patterns, trends or outliers and looking at social media data is no different. Are there lots of identical posts from different accounts? Or do many posts appear on the same day? Which posts received the most interactions, and why?

It’s important to remember that we are not seeing the whole picture – we are only accessing public data – but it is enough to pull out general trends.

Import, clean and analyze the data in Google Sheets

Once we have downloaded the CSV file, we can import it into spreadsheet software such as Microsoft Excel or Google Sheets. We’ve used the latter in this example as it is free to use; the steps are similar in Excel.

After starting a new spreadsheet, we just need to click ‘File’ on the menu bar, then ‘Import’ to find the CSV file in our downloads. It might take a few moments depending on the size of our file.

Before starting, we need to lock the top row in our spreadsheet by clicking ‘View’, then ‘Freeze’, then ‘1 row’. By doing this, we make sure the column headers remain visible as we scroll down and don’t get mixed up when we play around with the data.

Freeze first row.

First of all, we want to make sure the data is ‘clean’ and ready to be analyzed. 

Data cleaning is a fundamental process to prepare our dataset for analysis and assure accuracy and reliability. We might need to fix spelling and syntax errors, for example, or remove empty fields and duplicates to standardize the data sets. 

To handle big datasets, we suggest downloading OpenRefine, an open-source tool to clean messy data. But even on Google Sheets there are a few quick actions you can take.

We sometimes end up with some non-printable characters in our ‘Group Name’ column that could interfere with our next steps. Non-printable characters are control characters, the first 32 codes in the American Standard Code for Information Interchange (ASCII), that we might wish to remove.

We can easily remove them by adding a new column and writing the function =CLEAN(A2) on the second row of the new column. With the CLEAN function, we remove any non-printable characters present in our worksheet.

Clean unknown characters.

To apply this to the whole column, select the cell containing the formula, hit ctrl+shift+down to select the rest of the column, then ctrl+D (cmd+D on a Mac) to fill the formula down.

We can also easily remove specific characters by selecting a column, pressing command+F on a Mac or ctrl+F on a PC, and then clicking the three dots for ‘more options’. Here we type any character we want to remove, replace it with a blank space, and click ‘Replace all’.

The Find and Replace feature.
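When datasets get large, the same clean-up can be scripted. Below is a minimal Python sketch with invented group names standing in for the ‘Group Name’ column; it mimics Sheets’ CLEAN by stripping control characters, and also drops zero-width spaces, which sometimes sneak into social media text.

```python
import re

# Invented group names standing in for the 'Group Name' column.
group_names = ["Stop5G\x07 Community", "COVID\u200bTruth", "5G Watch"]

def clean(text: str) -> str:
    # Like Sheets' CLEAN: remove control characters (ASCII 0-31),
    # plus zero-width spaces (U+200B).
    return re.sub(r"[\x00-\x1f\u200b]", "", text)

cleaned = [clean(name) for name in group_names]
print(cleaned)  # ['Stop5G Community', 'COVIDTruth', '5G Watch']
```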

We also want to make sure the dates in the ‘Created’ column are formatted correctly.

Change date format.

We select the ‘Created’ column, then go under ‘Format’, ‘Number’ and click on the date format we need for our analysis. We can also click on ‘More Formats’ and customize our own format. This is particularly important if we want to visualize the data, which often requires specific date formats.

Once we’ve cleaned up the data, we can move to look at some general patterns in our dataset.

Type the function =MEDIAN(X2:X10001), where X is the column letter, at the end of each column to calculate the median. For example, in our 5G dataset the median of the ‘Total Interactions’ column is 94, which means a typical 5G post received 94 interactions – a figure we would need to investigate further to understand whether it is significant.

We can also look more closely at the different reactions from users to the post, such as ‘Haha’ and ‘Angry’, to see what was the most popular in terms of average reactions per post.
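These summary statistics are just as easy to compute outside a spreadsheet. A small sketch with invented interaction counts (the real numbers come from the ‘Total Interactions’ column) also shows why the median describes a ‘typical’ post better than the mean: one viral outlier drags the mean far upward.

```python
from statistics import mean, median

# Invented counts standing in for the 'Total Interactions' column.
total_interactions = [7, 12, 18, 40, 94, 94, 94, 230, 600, 51_000]

print(median(total_interactions))       # 94.0 – a typical post
print(round(mean(total_interactions)))  # 5219 – skewed by one viral post
```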

Let’s explore some of the posts which attracted the highest number of shares by sorting the data by the ‘Shares’ column. We look at shares because they are the clearest signal of a post spreading beyond its original audience.

Click the small arrow at the top of the column then select ‘Sort sheet Z → A’ and the whole worksheet will change order: from posts with the highest number of shares on the top, down to the lowest.

Sort the dataset by shares

By clicking through the URLs here we can analyze the content of the top ten most shared posts in the dataset and their source.
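In code, the same ‘sort and take the top ten’ step is a one-liner. The links and share counts below are invented placeholders for the ‘Link’ and ‘Shares’ columns.

```python
# Invented (link, shares) pairs standing in for the CSV's columns.
posts = [
    ("https://example.com/a", 500),
    ("https://example.com/b", 12_000),
    ("https://example.com/c", 3_400),
]

# Sort by shares, highest first, and keep the top ten.
top_ten = sorted(posts, key=lambda p: p[1], reverse=True)[:10]
print(top_ten[0])  # ('https://example.com/b', 12000)
```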

We can also focus on the messages of the top posts and analyze their content. For example, we can get a quick overview of which languages people are using the most to spread the rumor: we add a column to the right of the ‘Message’ column and type the function =DETECTLANGUAGE(U2), where U2 is the first cell of the ‘Message’ column. If many of the top pages in our dataset use different languages, it might mean a rumor or misleading narrative has spread outside of a specific region and become global.

Detect language.

A similar analysis, but from a different perspective, involves analyzing the most common Groups in our dataset. To address this question we need to create a pivot table. There are numerous valuable things we can do with the simple use of a pivot table, segmenting the sheet and making a huge amount of data more manageable.

First, select all of our data by clicking on the little rectangle on the top left side of the sheet. Then go to ‘Data’ on the top menu and select ‘Pivot Table’.

On the new page, we’ll have a few options to determine the rows, columns, values, and filters for our pivot table. Select ‘Group Names’ both under ‘Rows’ and ‘Value’, because we want to see how many times each Group name recurs in the dataset.

Then we summarize by ‘COUNTA’ under ‘Value’. This will quickly summarize the entire dataset by unique Group Names and tell us how many times that name appears in the dataset.

Create a pivot table to obtain the most recurrent Groups in the dataset.

Click on ‘Sort by’ and select the column ‘COUNTA of Group Names’ and descending order. This gives us a quick overview of who is posting the most about this topic.
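A pivot table that counts Group names is, in effect, a frequency count. Here is a minimal sketch with invented ‘Group Name’ values:

```python
from collections import Counter

# Invented 'Group Name' values standing in for the CSV column.
groups = ["5G Truth", "Stop 5G UK", "5G Truth", "COVID Watch",
          "5G Truth", "Stop 5G UK"]

# COUNTA-style frequency count, sorted most to least frequent.
counts = Counter(groups).most_common()
print(counts)  # [('5G Truth', 3), ('Stop 5G UK', 2), ('COVID Watch', 1)]
```

The same pattern works for links, media types or languages: swap in the relevant column’s values.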

We can repeat the same operation with the ‘Links’ column to see the most common links in the dataset:

Count the most common links in the dataset.

Or with the ‘Media Type’ or ‘Language’ columns.

It is very important to look at trends over time, especially if we want to know whether publishing a debunk would counter a rumor or simply amplify it.

We can do that using a pivot table again which will sum up all shares for each day in the dataset.

CrowdTangle records the exact second a post was published by default, so we need to change the timestamp in our spreadsheet to show only the date.

For this we need to add two blank columns to the right of the ‘Created’ column, copy the ‘Created’ column into the first of them, and highlight the new column. Then click ‘Data’ on the top navigation menu, choose ‘Split text to columns’ and select ‘Space’ as the separator; this splits off the timestamp and leaves a clean column with dates only.

Now we can create a new pivot table. Select the new ‘Created’ column under ‘Rows’, and the ‘Shares’ column under ‘Values’. We then select ‘SUM’ under ‘Summarize by’. We can also create a chart from our data to get an overview of the trend: click ‘Insert’ and then ‘Chart’ in the top menu.

Sum up all shares for each day in the dataset.
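The date-splitting and daily-sum pivot can be reproduced in a few lines. The timestamps and share counts below are invented; splitting each ‘Created’ value on the space discards the time of day, just as ‘Split text to columns’ does.

```python
from collections import defaultdict

# Invented (timestamp, shares) rows standing in for the
# 'Created' and 'Shares' columns of the CSV.
rows = [
    ("2020-03-30 08:14:02", 1200),
    ("2020-03-30 19:47:55", 800),
    ("2020-04-01 10:02:31", 41000),
    ("2020-04-02 12:30:00", 38000),
]

# Sum shares per day, like the pivot table.
shares_per_day = defaultdict(int)
for created, shares in rows:
    day = created.split(" ")[0]  # drop the time of day
    shares_per_day[day] += shares

print(dict(shares_per_day))
# {'2020-03-30': 2000, '2020-04-01': 41000, '2020-04-02': 38000}
```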

As we see in this case, 5G related posts started increasing around the end of March and spiked at more than 40,000 shares per day at the beginning of April.

We can also decide to simply sort the data by the ‘Created’ column, which could lead us to reconstruct a timeline of how and when the content was amplified and reveal the spread of disinformation. 

This is just a quick demonstration of some basic actions we can do with CrowdTangle’s Facebook data. We can also use CrowdTangle for Instagram and Reddit, which will give us slightly different — and less detailed — datasets.

Obviously there are many more ways we can clean and analyze the data. What we do will be shaped by the questions we want to ask of it. Understanding the spread of mis- and disinformation is hard, but using data to ask the right questions is a solid first step.

Stay up to date with First Draft’s work by becoming a subscriber and follow us on Facebook and Twitter.

3 lessons on the coronavirus ‘infodemic’ from experts and tech companies

The UK Parliament’s select committee for digital, culture, media and sport, where elected representatives grill the rich and powerful over matters of public interest, has played host to numerous dramatic scenes in recent years. This is where the details of the Cambridge Analytica scandal played out as part of the committee’s inquiry into disinformation and ‘fake news’, and where some of the myriad ways political actors manipulate social media to influence voters were brought to light for a mainstream audience.

As part of its ongoing inquiry into online harms and disinformation, the committee summoned assorted experts and technology company representatives on Thursday April 30 to answer questions on the subject of misinformation and the coronavirus pandemic.

First Draft’s co-founder and US director Dr Claire Wardle, as well as Professor Philip Howard, the director of the Oxford Internet Institute, and Stacie Hoffmann, a digital policy consultant at Oxford Information Labs were asked for their opinions before representatives from Google, Facebook and Twitter faced questions. 

We’ve picked out some of the key lessons and talking points from the session.

Understanding the psychology behind coronavirus-related misinformation is crucial

To reduce the spread of misinformation relating to the coronavirus, it’s important to understand what’s driving it. And the answer lies somewhere in human psychology. 

“We have to recognise that there’s a lot of misinformation, and people are becoming nodes for this because they’re scared,” Dr Wardle told the politicians. 

When there is a constant flow of new coronavirus reports, she said people are often sharing rumours and unverified information with family and friends “just in case”. 

“That dynamic is critical to how we’re seeing people respond to the pandemic,” she added.

The outbreak has prompted a slew of viral messages and hoaxes on WhatsApp and other messaging services claiming that various authorities are on the verge of announcing a complete lockdown, or that dubious practices like inhaling hot air can cure the coronavirus.

Dr Wardle also noted that we’re not just seeing an increase in people sharing unverified information but also an uptick in conspiracy theories. 

Whether they lead to the burning of mobile telephone masts, claims that the virus is a man-made bioweapon, or doubts around the potential vaccine, these fringe theories can have serious consequences.

“It’s easy to dismiss conspiracies, but we have to understand why they’re taking hold,” according to Dr Wardle. “There isn’t a good origin story for the virus, and so this information vacuum is allowing misinformation to circulate.

“The reason people attach themselves to conspiracies is because they are simple, powerful narratives. Right now, people are desperate for an explanation of what they’re going through.

“They feel out of control, and conspiracies give them control because it gives an explanation that they’re lacking.” 

Influencers can be a ‘gateway drug’

The subject of celebrities and influencers spreading misinformation online was raised repeatedly. While the 5G conspiracies had been gaining traction in online spaces since January, they reached new heights when a number of high-profile celebrities shared the theory in early April.

Professor Philip Howard of the Oxford Internet Institute said that, in some ways, public figures could be considered a “gateway drug” to misinformation.

“If a prominent Hollywood star or a prominent political figure says something that is not consistent with the science or the public health advice, some people will go looking for it and spread it.” 

Both Professor Howard and Stacie Hoffmann, from Oxford Information Labs, were asked about questions of responsibility when it comes to public figures sharing misinformation. Hoffmann suggested that guidelines may be needed to outline what standards are expected of high-profile accounts.

This topic came up again in the questions put to Katy Minshall, Twitter’s UK Head of Government, Public Policy and Philanthropy, particularly on the issue of users with blue ticks — the platform’s system for verifying the identity of a well-known figure.

“I can assure the committee that if any account — verified, or not — breaks any rules, they will be subject to enforcement action,” she said.

At the end of March, both Twitter and Facebook took action to remove misleading posts from Brazil’s president, Jair Bolsonaro. In a video, the president endorsed the antiviral drug hydroxychloroquine and encouraged an end to social distancing measures.

When asked about recent comments made by US President Donald Trump regarding the use of disinfectant as treatment, Minshall told the committee that Twitter had blocked the hashtag #InjectDisinfectant from trending. Videos of Trump making the statement, however, are allowed on the platform.

The platforms’ responses to the ‘infodemic’ raise questions about the future

Social media platforms have been stepping up measures to meet the unprecedented public health crisis, from changing their policies, to tweaking their algorithms, and these changes have not gone unnoticed.

“Covid-19 is a different type of stress test for these platforms because we’re really starting to see what they can do,” said Hoffmann.

Since the outbreak, measures have included notifying users who have been exposed to debunked misinformation and collaborating with governments on chatbots that can respond to people’s questions.

Minshall told the politicians that Twitter had challenged 3.4 million accounts that had appeared to engage suspiciously in coronavirus-related conversations.

Google, Facebook and Twitter all mentioned their efforts to direct users off their platforms towards authoritative official sources on health matters.

“The number one ask we have heard from government and health organisations is that people need clear, concise information from one of two single sources about what to do,” said Alina Dimofte, Public Policy and Government Relations Manager at Google.  

“What we want to do is to empower people with this information.”

While the expert witnesses also mentioned the importance of building trust in official sources, they proposed additional ways the platforms could help. 

Dr Wardle and Professor Howard referenced the need for more transparency in the information they share with researchers. 

“We’re right now in the middle of a natural experiment,” said Dr Wardle, “so what I would like to see is the platforms do more but then allow academics to test alongside them to see what the effects are.

“All of them are doing different things but what we’re lacking is transparency and oversight.”

It also still remains to be seen whether the measures implemented in response to coronavirus misinformation will be permanent.

“It’ll be really interesting to see if they use these going forward — if they stay in place post-pandemic,” said Hoffmann.

Watch the full session here.


Too much coronavirus information? We’ll help you separate the helpful from the harmful

Sign up for free and get started. 

Right now there are a lot of people feeling overwhelmed with information about coronavirus. We’re in a whirlwind of rumors, hoaxes and speculation, spreading like the virus itself. The World Health Organization is calling it an ‘infodemic’. So much information that we don’t know what to believe.

It’s clear that we need good information now more than ever, and it’s never been more important to pause and consider what we see online, whether it’s a fake cure, funny meme or out-and-out conspiracy theory. This isn’t just a problem for journalists. We’re all publishers now, and have a collective responsibility to make sure that what we share is accurate and based on facts.

While most of us are just trying to help, if we share the wrong thing our good intentions can backfire. So we designed a quick and simple public guide to help everyone navigate the infodemic.

Throughout this guide we’ll help you answer questions like:

  • Just what on earth is an ‘infodemic’ anyway?
  • Where does false information come from and why does it spread?
  • What should I be most wary of?
  • How can I check if something is credible or cooked-up?
  • Is that picture what I think it is?
  • To share or not to share?
  • How can I keep my head at a time like this?

And much more.

How to sign up 

Step 1 – Head to and create a free account.

Step 2 – Once you’re on the landing page, click on the big banner image that says “Find out more about First Draft Courses by clicking here”.

Step 3 – Select “Too much information: How to separate the helpful from the harmful about coronavirus” and click “Enroll”.

You’re in! This free guide is made up of snackable reads and videos, so you can pick what feels most interesting and dive right in, or go through from start to finish.

All you need is a computer or phone and an internet connection and you’re good to go. 

Sign up for free and get started. 

This course is for everybody. Just like we wash our hands to make sure we don’t pass on the virus, we can work together to make sure that we don’t pass on harmful information.

Check out First Draft’s coronavirus resource hub for reporters, with tools and guides for verification, newsgathering, effective reporting, and more.

Stay up to date with First Draft’s work by subscribing to our newsletter and follow us on Facebook and Twitter.

Partnership on AI & First Draft begin investigating labels for manipulated media

In the months since we announced the Partnership on AI/First Draft Media Manipulation Research Fellowship, the need for collaborative, rigorous research on whether to add descriptive labels that explain manipulated and synthetic media, and how to do this so audiences understand those labels, has become increasingly urgent. Technology platforms and media entities not only face challenges when evaluating whether or not audio-visual content is manipulated and/or misleading, but they must also determine how to take action in response to that evaluation.

Labelling is touted as an effective mechanism for providing adequate disclosure to audiences, thereby mitigating the impact of mis/disinformation; it has even been highlighted by certain platform users themselves as a particularly appealing strategy for dealing with manipulated and synthetic media.

While labels have the potential to provide important signals to online content consumers, they may also lead to unintended consequences that amplify the effects of audio-visual mis/disinformation (both AI-generated and low-tech varieties). We must, therefore, work collaboratively to investigate the impact labelling has on audiences. Doing so can help preempt any unintended consequences of scattered, hasty deployment of manipulated and synthetic media labels and ensure they prove useful for mitigating the effects of mis/disinformation.

Why we must study the impact of media manipulation labels

Labelling could vastly change how audiences make sense of audio-visual content online. Academic research on the “implied truth effect” suggests that attaching warning labels to a portion of inaccurate headlines can actually increase the perceived accuracy of other headlines that lack warnings. While this realm of academic research is in its nascent stages and was conducted on text-based mis/disinformation, might video and image labels change how people make sense of and trust content that is not labeled?

Such early findings imply a need to consider potential unintended consequences associated with deploying labelling interventions that do not draw from diverse input and testing. Rigorous academic studies can take up to two years to be published, yet organizations responding to mis/disinformation challenges must act swiftly against manipulated and synthetic media. We must therefore work promptly alongside the academic community to further understand how labels, the language we use in those labels, and the dynamics of labels across different platforms and online spaces impact perceptions of information integrity.

This is an area that is ripe for multi-stakeholder, coordinated effort from technology platforms, media entities, academics, and civil society organizations – within the Partnership on AI (PAI) and beyond.

While each technology platform has a very different way of presenting audio-visual information, and even within particular tech organizations there may be visually distinct platforms, entirely disparate labelling methods across platforms could sow confusion in an already complex and fluid online information ecosystem. As we’ve suggested previously, we need to evaluate a new visual language that is more universal and proves effective across different platforms and websites. This underscores the potential benefit to cross-sector collaboration in testing, sharing insights, and deploying interventions to deal with manipulated media.

Our upcoming work

PAI and First Draft are excited to begin work that furthers our community’s collective understanding of how the language and eventual labels we use to describe audio-visual manipulations impact audience perceptions of information integrity. It has been heartening to see related efforts from The Washington Post and the Duke Reporters’ Lab that attempt to drive towards shared language amongst fact checkers, platforms, and researchers to capture the nuances of different audio-visual manipulations.

Our research seeks to understand how labels describing manipulated and synthetic media might ultimately be leveraged to help audiences and end-users recognize mis/disinformation and interpret content online. To do so, we plan to landscape existing interventions, develop a series of manipulated media labels for testing, and then ultimately conduct audience research to study the label designs and how labeling impacts audiences more generally. This research will help illuminate whether or not a robust labeling mechanism can help audiences confidently and accurately recognize mis/disinformation. In doing so, we hope to improve the general understanding of how people interpret mis/disinformation and the effectiveness of interventions aimed at helping people to do so effectively.

Introducing our Media Manipulation Research Fellow

Emily Saltz has joined the PAI team as the PAI/First Draft Media Manipulation Research Fellow in order to drive this timely work. Emily joins us from The New York Times (a PAI Partner), where she was the User Experience Lead on the News Provenance Project. Her previous research focused on how people assess news photography on their social media feeds, and what types of labels and contextual information might help audiences better make sense of the photo posts they see. Emily brings her design and research experience to this very human-centered work, work that will only become more integral to ensuring information integrity as techniques for manipulating audio-visual content become more widespread and varied.

PAI and First Draft look forward to collaborating with our partners and beyond on this important project, one that requires consistent and collective attention across the media integrity community. We plan to share many of our findings as we work with a diverse suite of stakeholders to consider meaningful methods for promoting information integrity.

This article is re-published with permission from


How mental health subreddits are coping with the coronavirus infodemic

“I have had two meltdowns from the fear mongering,” one Reddit user writes on a mental health forum. “I know that some people are so health concern dismissive that they NEED the fear tactics to take reasonable health precautions. But for us it is hell to read over and over again!”

Another adds: “The misinformation is the worst part. You can’t tell what’s misinformation and what isn’t – unqualified people [are] making statements as if they were fact.” 

As the world responds to an unprecedented crisis, the coronavirus pandemic has become a source of great anxiety for many. However, for those with pre-existing mental health conditions like health anxiety and obsessive compulsive disorder, the impact of the outbreak, and the ‘infodemic’ that has accompanied it, has been especially strong. Mental health charity Anxiety UK advises that they may find the “current heightened focus on contamination and infection particularly distressing”.

The disorders are characterised by an intolerance of uncertainty and excessive worry, psychological states that have moved to the forefront of many people’s lives as we grapple with a crisis situation. Peter Tyrer, Emeritus Professor in Community Psychiatry at Imperial College and a health anxiety expert, defines the illness as being characterised by “an irrational fear of either having or getting a disease”. He stressed that the current situation with coronavirus is markedly different. In a pandemic, fear is not irrational, but some “complicated and highly unproductive” measures people may take to avoid infection can be.

First Draft spoke to Dr. Kate Starbird, Associate Professor of Human Centered Design & Engineering at the University of Washington and crisis informatics researcher, who explained why the pandemic is fertile ground for misinformation. 

“We’re really vulnerable to disinformation flows, especially at this time when we have all of this anxiety. We’re trying desperately to find information that can help us make the right decisions for ourselves, our families and our communities, and just resolve that uncertainty. And so [that makes us] acutely vulnerable to the spread of mis- and dis- information.”

Reddit hosts a number of mental health communities, voluntarily moderated like the others on the platform. The head of the WHO has said that coronavirus misinformation is spreading even faster than the virus itself and Reddit has proven to be no exception. First Draft found that in the absence of the platform taking a hard line against misinformation, moderators in mental health communities have had to step up to deal with the swell of coronavirus information.

The site has also run banners by health organisations and quarantined communities posting hoaxes or misinformation, like r/Wuhan_flu. But its approach has been markedly less stringent than that taken by some of the other platforms, including Facebook and Pinterest.

In some ways, online mental health communities like r/healthanxiety are well-prepared for the Covid-19 infodemic. The subreddit has a rule in place that feels timely: “No ‘buzz illness’ posts”. The rule warns that posts about coronavirus in the subreddit will be removed to avoid the forum being “overrun” with “media-hyped illness”. Posts about coronavirus are only allowed within a specific megathread within the subreddit. On r/OCD, moderators use an AutoModerator feature to remove posts with certain keywords.

Matt, a moderator on r/healthanxiety, said this was to “ensure that the conversations remain productive and healthy”. He told First Draft: “There has been no shortage of scams, spam, and misinformation being shared. Thankfully, we have caught most of them before they were publicly visible.”

He said moderators are taking steps to keep the environment productive and healthy, including implementing the AutoModerator feature to filter out coronavirus-related posts and direct posters to the forum’s megathreads, as well as experimenting with manually approving links and filtering them out entirely. Moderators have also been sharing links to expert guidance on the virus.

Reddit says it is closely monitoring the pandemic and is providing resources to support volunteer moderators and users.

A spokesperson said: “Moderators have access to crisis management resources on our Mod Help Center, which were also shared through a direct message and post in our dedicated community for moderators, r/modsupport, in light of Covid-19. We have also created a Community Council of our most impacted communities to work closely together, understand the issues they face, and better support them through this time.

“In the event that emergency moderation resources are needed, we will also make our Moderator Reserves program available. Reddit recently launched a partnership with Crisis Text Line as well, which any user or moderator that is struggling to cope may access. We will continue to evaluate and evolve how we can best support our communities.”

Moves to keep the environment free from an influx of unverified information can be crucial during a crisis. Dr. Starbird said that the way people respond to crises like a pandemic can often lead to the spread of rumour and false information online. “People come together to try to make sense of what’s going on: it’s a natural response to the uncertainty and anxiety that are inherent to crisis events like this, where we don’t really know how it’s gonna play out.”

This has always been true, long before the advent of social media. “These behaviours are natural human behaviours: social media becomes a facilitator for them,” says Dr. Starbird. “But there’s new configurations of how we can participate [in this sense-making process]. Because we can participate from all over the world and we have these different network structures, there’s different ways that things are amplified and influences are flowing.

“It qualitatively shapes how the sense-making processes are taking form, as well as just scaling up the number of people that can participate and the distance across which we can [communicate].”

While collective sense-making, and the rumours that often accompany it, are a long-standing human response to crises, our information environments have evolved rapidly over time. Though historically crises have often been characterised by a lack of information, they now increasingly feature an overabundance of it, as we’re seeing with the Covid-19 infodemic. 

Dr. Starbird underlined a similarity between these distinct states: “The effect is the same in that you have uncertainty about what to do. The difference is that before, your sense-making practices were making it up from scratch, and now you have an infinite number of sources to turn to good, bad and otherwise to pick from to help you grasp those ideas.”

Our current information abundance, and the natural response of feeling overwhelmed in times of crisis, raise questions of platform responsibility. While volunteer moderators work to maintain an information ecosystem for people who, Professor Tyrer says, often feel isolated in their daily lives, should Reddit be taking a more interventionist approach in separating fact from fiction?

Matt thinks so, saying: “I suspect that Reddit could be doing more for source validation, and at the very least giving moderators more pre-made tools for verifying and displaying high quality sources and data.”

Dr. Starbird agreed the platform should give moderators more tools to regulate the site. However, she stressed the complexity of policing misinformation, cautioning that it can have far-reaching implications.

“I think that the focus right now is best placed on when problematic information is getting a wide reach; when information that is likely to cause people to take actions that are detrimental to themselves or society is going viral.”

She emphasised the importance of transparency from official channels in a situation that is changing day-to-day, but also our own need to adapt as information participants.

“We’ve got to adjust our natural tendency to want certainty and really accept the fact that [the situation] is uncertain and changing, and as much as we want to resolve that, it’s only going to resolve with time. And we don’t even know how long that time is.”

How First Draft is working with journalists and the public to ensure credible coverage in critical moments

At First Draft we are approaching our five-year anniversary. We launched in June 2015, a year that saw a terrifying rise in terrorist attacks around the world. Eyewitness experiences and images that were instantly shared on social media became an essential aspect of real-time reporting. A lot has happened since then and challenges for journalists have increased year on year. As we find ourselves now experiencing a pandemic and associated infodemic, everything we have learned about how to support journalists during periods of extraordinary pressure matters now more than ever.

With continued core support from the Google News Initiative and the Craig Newmark Philanthropies, First Draft is working to help drive credible coverage during this crisis. So here is what we have planned for the coming weeks and months:

Guiding best practice reporting through online training

First Draft is sharing skills, insights and tools to help reporters and the wider public respond to the ‘infodemic’ following the outbreak of coronavirus. Yesterday we launched Covering Coronavirus: an Online Course for Journalists, designed to help reporters and information providers gain new skills and receive best practice recommendations on how to tackle misinformation relating to the coronavirus. You can read our post about the course here.

The course sets out to: explain how and why false information spreads; provide tools and techniques for monitoring and verifying information online, including images and videos; share best practice around reporting on coronavirus; and offer advice on how those covering the crisis every day can protect their own mental and emotional wellbeing.

The modular design of the course allows journalists to jump to the section they are most interested in, or complete the course sequentially from start to finish.

We aim to translate key aspects of this course into as many languages as we can, as soon as we can. A further course on this topic for the wider public is imminent as we continue our work with communities to understand and inform how they navigate information overload.

Empowering the public to report concerning content and quickly find the information they are seeking

In the coming days, we will launch our public campaign, designed to encourage people to pause and consider the information they see online and to highlight any useful, harmful, questionable or missing information. At a time when quality information matters more than ever, these ‘tips’ will assist First Draft’s global network of reporters, researchers and specialists and help shape news coverage. Our aim is to support newsrooms in coordinating a rapid, effective response to information vacuums and to prevent misinformation from breaking through.

Helping journalists to build resilience using crisis simulation training

We started the year preparing newsrooms across the US for an information crisis, visiting 15 cities and training over 900 journalists using a dedicated online simulation platform to demonstrate how to monitor, verify and respond to online threats. The scenario we used was built around election preparation and tactics of manipulation. We are working to adapt this simulation to reflect the anticipated challenges presented by coronavirus and allow participants to experience the exercise remotely, testing their newsroom capabilities and response. 

Sharing the very best resources for reporters

We are currently running multiple webinars per week (you can keep track of the schedule on our dedicated webinar page), and we are leading as many as possible in different languages (including French, Spanish, Portuguese and Italian). We are scheduling the series to reach different timezones and uploading the recordings to our webinar page so they will be available to everyone at any time.

We will continue to update our dedicated Reporters Hub with guides, recommended tools, relevant articles and research. We will be building on our series of essential guides and preparing short, quick reference explainers and checklists to accompany each one. 

Also from this week, our Daily & Weekly Briefing emails include trends, insights and analysis directly related to coronavirus, based on our own monitoring efforts and the input we are receiving from our international network of reporters and researchers. Sign up to receive these here.

You can find and search for each of our daily briefings on this dedicated page in our Reporters Hub.

Sparking successful collaboration through programs and partnerships 

Work is underway to connect journalists and researchers from around the world, in an effort to enhance reporting and ensure that newsrooms and other information providers can respond quickly to address escalating content that is causing confusion and harm. This global CrossCheck network is launching with invitations to all partners who have previously participated in a CrossCheck project over the past few years. We already have journalists from across the US sharing examples, tips and verification expertise. Joining them in the coming days and weeks will be representatives from Argentina, Australia, Brazil, France, Germany, Ireland, Nigeria, Spain, South Africa, UK and Uruguay. To nominate your organization to join, click here.

We will also expand on our searchable database of fact-checked and verified reports, collected from credible information sources across the web, such as IFCN and the WHO. This will soon include our own CrossCheck information cards, designed to provide clarity and guidance around the most urgent topics of concern relating to coronavirus. The cards will be presented in a digital, shareable format which will allow us to update them daily and link directly to the latest information reported by our network of partners around the world. Our goal is to provide an online template for verified CrossCheck participants to create their own region and language specific cards. We will encourage our partners to ‘crosscheck’ each other’s work by adding the logo of their organization, a technique that we know can help to provide an indicator of credibility to the public.

Above all else, our priority is to provide useful and constructive support to everyone working to improve information quality, and ensure credible coverage when it matters most. If you have any suggestions for more that we can do, please email [email protected] 

Covering coronavirus: An online course for journalists

Alongside the rapid spread of the coronavirus, an ‘infodemic’ is also on the march. Misleading or incorrect information about the disease, how it spreads, and how we can protect ourselves against it is increasing. We are also overwhelmed by the sheer volume of accurate information about the pandemic. This complex digital environment presents challenges to journalists and reporters. But we can work together to meet them.

First Draft has designed “Covering coronavirus: An online course for journalists” to help newsroom and communications professionals tackle the challenge posed by this infodemic. This course is available in seven languages: English, Spanish, Italian, Portuguese, French, German and Hindi.

The course explains how and why false information spreads and provides tools and techniques for monitoring and verifying information online. It also offers advice on how those covering the crisis every day can protect their own mental and emotional well-being.



By creating this practical online course, we hope to help journalists slow the spread of misinformation and produce credible and reliable coverage for their audiences, at the time they need it most. Sign up now at

The course includes: 

  • How to understand information disorder, as well as how and why false information spreads
  • How to monitor for coronavirus-related information on the social web
  • The main tools and techniques for verifying content online
  • Best practices to slow down the spread of misinformation  
  • How to look after your own mental health while covering a pandemic

The course features two hours of brand new training material, created by different members of the First Draft staff. It’s designed so you can run through the material sequentially, or dip in and out of the different modules.

Everything has been designed to be bite-sized so busy reporters can find time to develop skills, learn about best practices, and discover new tools and techniques. It also works beautifully on mobile phones. 

How to sign up

Step 1 – Head to and create a free account.

Step 2 – Tell us a little bit about yourself. Your preferred language will set the courses you can see. You can select more than one if you’re multilingual.

Step 3 – Once you’re on the landing page, click on the big banner image that says “Find out more about First Draft Courses by clicking here”.

Step 4 – Select “Covering coronavirus – an online course for journalists” and click ENROLL.

You’re in! We know that if you’re a journalist right now, you’re probably struggling to find time for lunch. We have designed the course to be snackable. You don’t have to go through the modules in any specific order. Pick what you like and dive in. 

We’ve made the videos as short as possible. A couple of videos are longer, but most are two to three minutes long. We’ve also repeated key information in the text below so you can scan there if you don’t have time to watch.

The 6 types of coronavirus misinformation to watch out for

The comparisons between how the coronavirus spread and the tidal wave of rumours and fakes which followed in its wake have been made repeatedly in recent months, but only because they are so accurate. In many countries, the misinformation has preceded the virus itself. The WHO declared an “infodemic” weeks before it declared a pandemic, and the response to both has often been similarly lacking.

Six distinct types of misinformation are emerging. They follow the same infectious pattern as the virus and escalate in sequence with confirmed cases in each country, like a shadow of rumours, an outrider to events before reality hits.

At First Draft we spend a lot of time talking about the tools and techniques for verifying dubious claims on social media but they’re no use if we don’t know when to apply them. Misinformation has a habit of slipping through our defences, waving a fake ID at the bouncers which normally stand guard in our mind and then making itself at home to become our assumed truth.

So the first step in dealing with any problem is to diagnose it. We all play a role in stopping the spread of the virus and viral misinformation. Here’s what you need to know.

Where the new coronavirus came from

Misinformation thrives when there is an absence of verified facts and it is human nature to try to make sense of new information based on what we already know.

So, when Chinese authorities reported a new strain of coronavirus to the WHO in December, social media users flooded the fact vacuum with their own theories about where it came from.

For conspiracy theorists, it was created in a lab by Microsoft founder Bill Gates as part of a globalist agenda to lower populations. Or it originated in the Chinese government as a bioweapon unleashed upon the world to undermine the United States. Or it was manufactured by the CIA as part of an economic hybrid war against China.

One of the most pernicious falsehoods comes in the form of a video from a market in Indonesia posted online in June 2019. The video is shocking by any measure, showing bats, rats, cats — you name it — cooked and ready for purchase.

Dozens of enterprising YouTubers took the clip, removed the first few seconds which name the true location (Langowan, on the island of Sulawesi) and added “WUHAN MARKET”. The caption or title invariably connects the video to the coronavirus outbreak.

Like many rumours, this is wrapped around a kernel of truth. The seafood market in Wuhan, which stocked a wide variety of animals, was closed on January 1 and the Chinese government banned the sale and consumption of wild animals in late February, in direct response to the coronavirus. While some of the first cases in the region had connections to the market, many people who had no link whatsoever also became infected.

It is likely that we will never see a detailed timeline of where the virus came from but this is what the (current) science is telling us: The new coronavirus sweeping the globe is a mutation of another virus more commonly found in bats. It likely transferred to humans via another animal, possibly the armadillo-like pangolin. The scientific evidence shows that it is not man-made.

How the new coronavirus spreads

Again, many of these false claims have their root in very real confusion and fear. The currency of social media is emotion, and in the coronavirus crisis many users are trading what they think is useful advice for likes and shares, little hits of dopamine which confirm each person’s value as a member of their community.

This is especially true of the first category here, about the features of the virus which make it contagious. The WHO website is full of information countering some of these claims, including rumours that both hot weather and cold weather kill the coronavirus (they don’t), that mosquitoes can transmit the disease (they can’t) and that readers should use an ultraviolet disinfectant lamp to sterilise their skin (you shouldn’t).

The second category is more malicious, concerning how people are spreading the virus.

In one example, a number of outlets claimed the “patient zero” in Italy was a migrant worker who refused to self-isolate after testing positive for Covid-19, the disease this new coronavirus causes. Again, this holds a kernel of truth. A delivery driver was fined for doing exactly that, but there is no evidence he introduced the virus to the country. That detail appears to have been added by a website associated with the European alt-right.

Elsewhere, the WorldPop project at the University of Southampton published a study in February estimating how many people may have left Wuhan before the region was quarantined. When tweeting out a link to the study, someone chose a picture showing global air-traffic routes and travel for the entirety of 2011. You can probably guess what happened next.

Australian television and British tabloids published breathless stories about the “terrifying map” without verifying it. Although the WorldPop project deleted their tweet, the damage was done.

Symptoms of Covid-19

The new coronavirus is one of a family of viruses with similar features. Some cause symptoms similar to a common cold and others are more deadly. Covid-19 falls into the latter category. But this hasn’t stopped wild speculation about what getting sick will entail as social media users seek to reassure themselves and each other that they will be ok.

A prime example is a massively viral list of checks and symptoms which coursed around the world in early March. Depending who you were and where you received it, the list was attributed to Taiwanese “experts”, Japanese doctors, Unicef, the CDC, Stanford Hospital Board, “Standford” hospital, the classmate of the sender who had an uncle with a master’s degree and worked in Shenzhen, and more.

Fact checkers at AFP dug into the details of one example with the central claim that “a runny nose and sputum” are not symptoms of Covid-19. Doctors have confirmed that these are symptoms.

The same post also claimed to be an authority on how the virus spread and gave a chronological progression of the symptoms from a sore throat through to feeling “like you’re drowning”, at which point it recommends seeking “immediate attention”. Many of the claims were knocked down with a little research from AFP.


The same thread of claims has been shared thousands of times on different platforms.

Treatment of Covid-19

The same post suggested that drinking plain old warm water was “effective against viruses”, without any further explanation. Later, it recommended gargling salt water, claiming this would “suffice” as a preventive measure.

Needless to say, neither of these treatments has been recommended by doctors as a way to rid the body of the coronavirus. Garlic, salt water, onions, lemon juice, alcohol, ginger, chlorine and hairdryers have all featured in viral posts as treatments from people who are often just looking out for their friends and family, hoping to give them the advice which will keep them safe.

Wrong advice about treatments and cures is by far the most common form of misinformation here, but it can have real, serious consequences. Aside from preventing people from getting the care they need, bad advice can kill.

In Iran, where alcohol is illegal and thousands have been infected, 44 people died and hundreds were hospitalised after drinking home-made booze to protect against the disease, according to Iranian media.

In perhaps the most famous example, President Trump claimed in a March 19 press conference that “chloroquine or hydroxychloroquine”, often used in malaria treatments, had been approved by the Food and Drug Administration for the treatment of Covid-19. The FDA had, at that point, not approved it for such use. Two days later, an Arizona couple in their sixties were hospitalised after taking chloroquine phosphate, according to Banner Health. The wife told NBC she had seen the press conference. The husband died.

Since this article was first published, the FDA has approved some forms of chloroquine and hydroxychloroquine as an emergency, experimental treatment for Covid-19 while tests are ongoing. Dr. Daniel Brooks, medical director at the Banner Poison and Drug Information Center, summed up the problem at the root of this issue in a statement announcing the death of the Arizona man.

“Given the uncertainty around Covid-19, we understand that people are trying to find new ways to prevent or treat this virus,” he said. “But self-medicating is not the way to do so.”

How authorities and public figures are responding to the pandemic

Many countries with a significant number of cases have gone into lockdown, urging businesses to close and people to stay in their homes.

With these new measures has come an outbreak of misrepresented pictures and videos used to claim police are getting heavy-handed with those who go outside, or that the army are roaming the streets to enforce new measures under martial law.

It’s not just the security services which have been the subject of rumours and speculation. Various doctors, health authorities and public services or figures have had to refute false claims, whether it’s been about the distribution of “rescue packs” or the closure of public transport.

Even footballer Cristiano Ronaldo has been drawn into the mire after a false claim from a sports journalist alleged he would be turning his hotels into hospitals to help treat patients.

The actions of authorities and public figures are often naturally newsworthy but, unless the information has come direct from the source, it’s always worth checking out before sharing.

How people are responding to the pandemic

One of the earliest phenomena of self-isolation and quarantine was the sight of dozens of Italians out on their balconies, singing together in a heart-warming display of community spirit.

But some of the internet’s practical jokers saw an opportunity. Pop stars Madonna, Katy Perry, Rihanna and Cheryl Cole were all tricked into thinking people were singing their songs from the rooftops after footage was cut together with old audio taken from live performances.

Poking a bit of fun at celebrities may seem harmless but, as the reality of self-isolation sets in, some fakes have graver overtones. An old video of a supermarket sale in 2011 was reshared and attributed to panic-buying in various cities across the UK, Spain and Belgium. Pictures of empty shelves in US supermarkets from 2018 have been shared as the current state of panic-buying in Sri Lanka, while old footage from Mexico was reported to show looting in Turkey.

In the UK, some social media users have used an old video to claim that Muslims in London are breaking social-distancing rules and endangering the public. Anti-hate crime group TellMAMA said the tweets were “generating anti-Muslim and Islamophobic responses”.

Lockdown could last for months. It is likely this is only the beginning of a new trend in falsehoods as people come to terms with a new lifestyle.

We all play a role in keeping each other safe from the virus and the impact it has on our lives. The information we share plays a big role here too. Let’s apply the same level of care in not spreading viral misinformation as we do to the virus itself.

Clarification: We have updated this article to clarify that President Trump said the FDA had approved the use of “chloroquine or hydroxychloroquine” to treat Covid-19 at a March 19 press conference, when it had not been approved. It has also been updated to reflect that the FDA has since approved the emergency use of hydroxychloroquine as an experimental treatment.

Check out First Draft’s coronavirus resource hub for reporters, with tools and guides for verification, newsgathering, effective reporting, and more.

Stay up to date with First Draft’s work by subscribing to our newsletter and follow us on Facebook and Twitter.