

Too much information: a public guide to navigating the infodemic


Welcome to ‘Too much information’ – our guide to navigating the coronavirus ‘infodemic’. The team at First Draft has pulled this guide together to help you separate the helpful from the harmful and find the reliable in the rumor mill during the current coronavirus crisis. There is a whirlwind of speculation, hoaxes, false information and misleading content at the moment, and we know that it can be very hard to know what to trust. 

It’s clear that we need good information now more than ever, and it’s never been more important to pause and consider what we see online – whether it’s a fake cure, funny meme or out-and-out conspiracy theory. Even though most of us have the best intentions, if we share the wrong thing then we could wind up doing more harm than good.

So we’ve put this guide together to help you understand why misleading information gets shared so much, and to give you some practical tips, tricks and tools for double-checking what you see online. You don’t have to read it from start to finish; we’ve tried to make the guide snackable, so you can pick what feels most useful to you and dive in. But if you do want to cover it all, you can expect to find answers to questions like:

  • Just what on earth is an ‘infodemic’ anyway? 
  • Where does false information come from and why does it spread? 
  • What should I be most wary of?
  • How can I check if something’s credible or cooked-up? 
  • Is that picture what I think it is? 
  • To share or not to share? 
  • How can I keep my head at a time like this?

Understanding the infodemic

The impact of sharing

When we’re sitting alone with our phone, reading WhatsApp messages or social media, hitting the share button feels like it would cause barely a ripple in the great scheme of things. But if we all do it, those ripples can become a tidal wave. Right now we have to think of ourselves as publishers, because what we share can be as influential as any article or news report. And, without meaning to, we can cause harm. If we share something that causes panic and sends someone out stockpiling, for example, inadvertently depriving someone in greater need, our good intentions can backfire. As publishers, we have a collective responsibility to make sure that the information we put out is accurate and based on facts.

Think about the last thing you shared online. Did you verify it, or pause to consider the likely impact on everyone who saw it before you shared it? Don’t worry if you didn’t, that’s what this guide is all about. 

What is an ‘infodemic’ and why is it a problem?

As a species, we’re hardwired to gossip. Rumors, conspiracy theories and hoaxes are nothing new. Back when we lived in small villages, those village boundaries tended to be as far as the rumor spread. Today though, we’re all living together in one big digital village, so information travels farther and faster. That can be a blessing, but it can also be a curse.

The term ‘infodemic’ was recently used by the World Health Organization (WHO). Put simply, it means there is so much information out there about the coronavirus that we no longer know what to believe. Misleading or incorrect information can spread just like the virus itself.

Fear and love are powerful motivators. When we’re faced with a global pandemic like coronavirus, people share information because they’re scared and want to keep loved ones safe. It’s our natural instinct to try and be helpful and warn others. But share the wrong thing and we can accidentally do more harm than good.

Navigating the infodemic isn’t as simple as deciding if something is ‘true’ or ‘false’. Some of it will be rooted in truth, some will be taken out of context, some will be deliberately created to cause mischief, and actually most of it will be shared by someone you trust who is just trying to help.

In the current infodemic we need to be more aware than ever of how and why information spreads, so we can understand and navigate what we find online and have better-informed conversations with family and friends.

Why do people share?

As humans, we have an emotional relationship with information. We tend to share information that reinforces our world view and says something about who we want to be.

People like to feel connected to a ‘tribe’. That tribe could be made up of members of the same political party, parents who don’t vaccinate their children, activists concerned about climate change, or those belonging to a certain religion, race or ethnic group. Online, people tend to behave in accordance with their tribe’s identity.

No matter what we believe, we are all susceptible to information that reinforces our existing position, and we can be easily seduced by stories that back up what we already believe.

Think about some of the things you have shared online recently. What did they say about you and your identity? What emotions did they make you feel?

How to: have that difficult conversation

We all have a friend or relative who is a bit overzealous when it comes to sharing. It’s all very well when it’s limited to funny jokes or memes, but misleading content can be more damaging. It is tempting to hit mute, but it’s better to confront the issue rather than hide from it. Ignoring the spread of false information is not the answer. We do need to take a gentle approach when having these conversations though. The important thing is not to shame them or make them feel bad, but instead to be empathetic.

In one scenario, you come straight out and tell them that they are wrong. The problem is that when you do, they feel affronted and ashamed, so they are more likely to dig in their heels and double down on their beliefs.

In another scenario, you use empathetic language, show concern, and make it clear that we’re all in this together. Maybe tell them that you’ve seen similar posts and that you’re worried people out there are trying to scare us, or make money from us. Now their anger isn’t aimed back at you, but at the person who originally misled them by creating the content in the first place.

The important thing is that it shouldn’t feel like you versus the sharer. It should feel like you and the sharer versus the creator of the content.

Do you have a friend or family member who shares misinformation or problematic memes? Have you ever tried talking to them about it? How did it go? Could you have handled it any differently to get a better outcome?

The full spectrum of problematic content

Why we don’t say ‘f*** news’

First, let’s address the elephant in the room: ‘f*** news’. F*** is a four-letter word as far as we’re concerned. “F*** news” has become a widely used catch-all term, but we’re not fans of it.

The problem is that it doesn’t describe the full spectrum of information we find online. Information isn’t binary. It’s not a choice between fake or not fake. The term can also be used as a weapon by politicians to dismiss news outlets they don’t agree with. To make sense of the infodemic, we need to get into the complexity of it.

The four main themes of the infodemic

When it comes to misinformation about coronavirus, four distinct themes have emerged. We’re sure you’ve seen at least one of them on your screens, if not all of them. They’re spreading through social media and messaging apps in a similar pattern to the virus itself, hitting each country in turn as the crisis travels around the world.

1. Where it came from

Misinformation thrives when there’s an absence of verified facts. It’s human nature to try to make sense of new information based on what we already know. We’re all too eager to fill in the gaps.

So, when Chinese authorities reported a new strain of coronavirus to the World Health Organization (WHO) in December, social media users flooded the information vacuum with their own theories about where it came from.

According to conspiracy theorists, it was created in a lab by Microsoft founder Bill Gates as part of a globalist agenda to lower populations. Or by the Chinese government as a weapon against the United States. Or by the United States as a weapon against China. There are also a number of inaccurate conspiracy theories claiming that radiation from 5G towers can be linked to coronavirus.

2. How it spreads

Many false claims are rooted in a very real sense of confusion and fear. Social media users are trading what they think is useful advice for likes and shares, but the real currency here is emotion. This is especially true when it comes to information on how the coronavirus spreads. The WHO website is full of information countering false claims, including claims that hot or cold weather kills the coronavirus (it doesn’t), that mosquitoes can transmit the disease (they can’t) and that an ultraviolet lamp will sterilize your skin (it won’t, but it might burn it).

3. Symptoms and treatment

Bad advice about treatments and cures is by far the most common form of misinformation, and it can have serious consequences. At best it stops people from getting the proper care they need, and at worst it costs lives. In Iran, where alcohol is illegal, 44 people died and hundreds were hospitalized after drinking home-made alcohol to protect against the disease, according to Iranian media.

We have also seen wild speculation on the symptoms of the virus. A list of checks and symptoms went viral around the world in early March. Depending on where you got it from, the list was attributed to Taiwanese experts, Japanese doctors, Unicef, the CDC, Stanford Hospital Board and the classmate of the sender who held a master’s degree and worked in Shenzhen. The messages suggested that hot drinks and heat could kill the virus, and that holding your breath for 10 seconds every morning could diagnose it. None of these claims were proven.

4. How we’re responding

Both the pandemic and the infodemic are truly global by nature. People are sharing content across borders through social media in an effort to warn each other of what’s to come.

Photos or videos which claim to show panic buying have spread quickly from country to country. An old video of a supermarket sale in 2011 was reshared and attributed to panic-buying in various cities across the UK, Spain, and Belgium.

Rumors about how different governments are reacting to the pandemic also travel far and wide, as people view them as a sign of what could be coming next. With each new government measure comes an outbreak of misrepresented pictures. They might claim that the police are getting heavy-handed with people going outside, or that the army is roaming the streets to enforce martial law.

Recently, false screenshots of texts have circulated in the UK, claiming that the government is surveilling people using their phones and issuing fines by text message if they go outside. Again, this is rooted in a kernel of truth – the British government did send out a text message to every person in the UK saying they shouldn’t leave the house except for essential trips.

Disinformation vs misinformation

You may have heard these two terms used interchangeably to talk about the infodemic, but there is an important distinction when it comes to what’s being created, why it’s being created, and how it spreads. And if we’re going to make sense of the infodemic, we need to be speaking the same language.


Disinformation

When people intentionally create false or misleading information to make money, have political influence, damage someone’s reputation or maliciously cause trouble or harm.


Misinformation

When people share disinformation but don’t realize it’s false or misleading, often because they’re trying to help.

The ‘Deceptive Seven’

Within those two overarching categories (disinformation and misinformation), we also refer to seven types of content that you’ll commonly find in the infodemic. They help us understand the complexity of online information and the shades of gray that exist between true and false. They live in a spectrum, so more than one type can apply to a specific piece of content. Introducing: the deceptive seven.

A Facebook post by a satirical website called “The Science Post”

1. Satire

Satire is a sure sign of a healthy democracy. A good satirist will shine a light on an uncomfortable truth and take it to ludicrous extremes. They do this in a way that leaves the audience in no doubt about what’s true and what’s not. The audience recognizes the truth, recognizes the exaggeration and therefore recognizes the joke.

The problem is that satire can be used strategically to spread rumors and conspiracies. If challenged, the creator can simply shrug and claim it was never meant to be taken seriously. “It was just a joke” becomes an excuse for spreading misleading information.

Satire can be a powerful tool because the more it gets shared, the more the original context gets lost. In a newspaper, you know if you’re reading the opinion section or the cartoon section. This isn’t the case online. Content may have come originally from a well known satirical site, but as it spreads it loses that connection very quickly, getting turned into screenshots or memes.

2. False connection

Beware of clickbait. This is when you see a headline or photo designed to make you want to click it, but when you click through, the content is of dubious value or interest, to say the least. We call this ‘false connection’: sensational language used to drive clicks, which then falls short for the reader when they get to the site.

In one study, researchers at Columbia University and the French research institute Inria found that 59% of the links shared on Twitter had not been opened first, meaning that people shared them without reading past the headline. According to one of the study’s co-authors, Inria’s Arnaud Legout, we are “more willing to share an article than read it”.

It’s unlikely that media organizations will stop using clickbait techniques – they need clicks, after all – but as readers we need to be wary of them as they often use polarizing and emotive language to drive traffic, and this can cause deeper challenges.

Examples of clickbait-style headlines from an online headline generator.

3. Misleading content

What counts as ‘misleading’ can be hard to define. It comes down to context and nuance: How much of a quote has been omitted? Has the data been skewed? Has the way a photo has been cropped changed the meaning of the image? This complexity is why we’re such a long way from having artificial intelligence flag this type of content. Computers understand true and false, but ‘misleading’ is a gray area.

The WorldPop project at the University of Southampton published a study in February estimating how many people may have left Wuhan before the region was quarantined. When tweeting a link to the study, they chose a picture which mapped global air-traffic routes and travel for the entirety of 2011. This “terrifying map” was then published by various Australian and British news outlets without being verified. Although the WorldPop project deleted their misleading tweet, the damage had been done.

A map showing air traffic across the world in 2011 was wrongly used to show the movement of people out of Wuhan, China.

4. Imposter content

Our brains are always looking for heuristics (mental shortcuts) to help us understand information. Seeing a brand or person that we already know is a very powerful shortcut. We trust what we know and give it credibility. Imposter content is false or misleading content that claims to be from established figures, brands, organizations or journalists.

The amount of information people take in on a daily basis, even just from their phones, means heuristics have become even more impactful. The trust we have in certain celebrities, brands and organizations can be manipulated to make us quickly share information that isn’t accurate.

An imposter Twitter account posing as the BBC breaking news service to spread false information.

5. False context

This term is used to describe content that is genuine, but has been warped and reframed in dangerous ways. We’re increasingly seeing the weaponization of context. This is one of the most common trends relating to the infodemic, and it is very easy to produce. All it takes is to find an image, video or old news story, and reshare it to fit a new narrative.

One of the first viral videos after the coronavirus outbreak in January 2020 showed a market selling bats, rats, snakes, and other animal meat products. Different versions of the video were shared online claiming to be from the Chinese city of Wuhan, where the new virus originated. The video was originally uploaded in July 2019, and it was shot in Langowan Market in Indonesia. However, the video was shared widely online because it played on existing anti-Chinese sentiments and preconceptions.

AFP debunked a viral video allegedly showing the food market where the coronavirus originated.

This photograph shows how a face mask was photoshopped onto the Sudanese minister during a visit with the Chinese Ambassador to Sudan.

6. Manipulated content

Manipulated content is when something genuine is altered. Here content isn’t completely made-up or fabricated, but manipulated to change the narrative. This is most often done to photographs and images. This kind of manipulation relies on the fact that most of us merely glance at images while scrolling on phone screens, so the content doesn’t have to be sophisticated or perfect. All it has to do is fit a narrative and be good enough to ‘look real’ in the two or three seconds it takes for people to choose whether to share it or not.

The Sino-Sudanese visit

On February 3, 2020, the Sudanese Minister for Foreign Affairs and the Chinese Ambassador to Sudan met to discuss the ongoing coronavirus outbreak. Over the next couple of weeks, photographs of that meeting were photoshopped to show the Sudanese minister wearing a face mask. The images were shared widely on social media, with comments like “Africans don’t want to take chances with the Chinese”.

7. Fabricated content

Fabricated content is anything that is 100% false. This is the end of the spectrum and the only content we can really call ‘fake’. Staged videos, made up websites and completely fabricated documents would all fall into this category.

Some of this is what we call ‘synthetic media’ or ‘deepfakes’. Synthetic media is a catch-all term for fabricated media produced using Artificial Intelligence (AI). Through synthesizing or combining different elements of existing video or audio, AI can relatively easily create ‘new’ content in which individuals appear to say or do things that never happened.

The need for ‘emotional skepticism’

Most of what we share is fuelled by emotion. Misleading stories spread like wildfire because they prey so heavily on our emotions. To avoid fuelling the fire, we need to pause for a second and switch off our emotions — go cold, be analytical, be more robot. Easier said than done but all it takes is a few seconds of pause.

It’s been proven that we both share and remember information that appeals to our emotions: stories that make us angry, happy, or scared; or ones that make us laugh out loud. It’s harder to press the mental delete button on something that’s struck fear or joy into our hearts, even if our head is questioning it.

The neuroscience of how fake news grabs our attention, produces false memories and appeals to our emotions SOURCE: Nieman Lab.

For example, there were claims that “patient zero” in Italy was a migrant worker who refused to self-isolate after testing positive. This held a kernel of truth, as a delivery driver was fined for doing exactly that, but there was no evidence he introduced the virus to the country. That detail appears to have been added by a website associated with the European alt-right.

What’s more, our brains are biased to accept something if it confirms our existing views. Something doesn’t have to be true to feel like it should be true. It’s also worth remembering that misleading information tends to be grounded in truth, rather than entirely made up: things that sound ‘about right’. Anything with a kernel of truth is far more successful at persuading and engaging people.

If we see something online that feels like it should be true and touches an emotional hotspot, it triggers an almost involuntary reaction to share it and tell people about it. So pause, recognize that emotional response, question whether you might be getting tricked and practice ‘emotional skepticism’.

Think back to something you recently shared. How do you think the person who created the content wanted you to feel? Did you share it based on emotion? Would you still share it now? If so, that’s fine — it’s good to share. Just make sure you pause and consider your emotional reactions first. 

Introducing some practical tips and tricks

Over the next three videos, we will go through a number of ways to verify the source of a tweet or social media post from your phone. We’ll show you how to check whether an image has been used before (reverse image searching), whether a picture’s location is where it claims to be (geolocation) and whether someone really is who their bio claims they are (profile verification).

Sharing images mindfully

It’s true what they say: images really are worth a thousand words. They can also tell a thousand stories, many of which aren’t true. While images can be great for explaining complicated things, capturing a moment or just cheering us up, it’s important that they’re shared mindfully. As with most things, it comes down to context. What was the original story behind the image? And has that story become lost or skewed?

Fuelling xenophobia

It’s important to pause before sharing any images that could feed stereotypes. Do you need to be sharing an article or post featuring a photo of an Asian person wearing a face mask, for example? And even if memes are funny, remember that they can also help fuel xenophobia and stereotypes.

Causing panic

The same goes for sharing images that could alarm people unnecessarily. Images of people wearing hazmat suits or pictures of empty shelves can feed panic and accelerate behaviors like stockpiling, which in themselves can be harmful.

As a general rule, at a time like this it’s better to share images that reinforce the behaviors we want to see in society — things like resilience, kindness and bravery.

Looking after our emotional wellbeing 

This is a very difficult time for us all. To feel stressed and anxious at a time like this is to feel human, and being barraged with scary messages and news reports day in, day out certainly doesn’t help. But we owe it to ourselves and to the people around us to try and look after our mental and emotional wellbeing. We need to be strong for each other to get through this crisis. Here are a few things that might help:

1. Stay connected

Social distancing doesn’t have to be anti-social. Video calling is giving us all the benefits of face-to-face contact and millions of people are turning to it for virtual dinner parties, pub quizzes, or just a natter with friends and family. But if you’re not feeling presentable (let’s face it, we’re all dressing for comfort right now) you can stay connected via messaging apps and social media. Just try to keep positive and avoid excessive chat about the crisis if you can.

2. Know when to switch off

Checking in with the main headlines from trusted sources a few times a day is enough to stay informed at the moment. Plugging into rolling updates 24/7 will quickly make you feel overwhelmed. If you must have breaking news push alerts on your mobile device, choose just one outlet to receive them from, not a dozen. Unsubscribe from newsletters that use alarmist subject lines or fill you with anxiety when they arrive in your inbox. Remember: we all need to be able to disconnect completely from time to time to clear our heads.

3. Stick to official guidance 

If you’re already prone to health anxiety or obsessive-compulsive disorder (OCD), the global focus on the virus is likely hitting hard. Suddenly, the intrusive thoughts you usually try to keep at bay, like hand washing and surface cleaning, seem to be echoed everywhere you turn. It’s important to follow the official advice for protecting yourself and others from the virus, but also to know that it is not necessary to go above and beyond it. Know the facts and guidance, but don’t overthink it.

4. Create a self-care plan 

Self-care is too important to leave to chance. It deserves planning and prioritization. Start with the essentials: getting enough sleep, going offline before bed, eating well, and exercising. Do whatever you can to get your mind onto something else. Take up a hobby, read a book, watch television, meditate, write a journal, take a bath or socialize through video calls. If it makes you feel happy, make it happen.

5. Relax your standards 

Finally, it’s important to note that these are unprecedented times and the usual rules and gold standards don’t have to apply. So be kind to yourself and be realistic about what you can do. You don’t have to be a perfectionist and this isn’t the time to push yourself too hard. Do what you can. ‘Good enough’ actually is good enough.

Useful resources

Where to go for reliable information

The infodemic that’s accompanying the coronavirus outbreak is making it hard for people to find trustworthy sources and reliable guidance when they need it. Knowing where to turn to for credible advice is vital, so here we’ll list some reliable sources.

Remember: not all research is created equal. Just because something is presented in a chart or a table, doesn’t mean the data behind it is solid.

Reuters looked at scientific studies published on the coronavirus since the outbreak began. Of the 153 studies it identified, 92 had not been peer-reviewed, and some included outlandish and unverified claims, such as linking the coronavirus to HIV or to snake-to-human transmission. The problem with “speed science”, as Reuters called it, is that people can panic, or policymakers can make the wrong decisions, before the data has been properly researched.

SOURCE: Reuters, 2020. Speed Science: The risks of swiftly spreading coronavirus research

Government departments and agencies

These government departments provide updates on the number of coronavirus cases, government activities, public health advice, and media contacts:

INGOs and NGOs

These non-governmental organizations provide global and regional data and guidance on coronavirus:

Academic institutions

These organizations and institutions provide research and data on coronavirus, as well as expert commentary.

Further reading

An evolving glossary

The infodemic is a complex thing, and it’s changing all the time. We hope that this glossary helps you to understand it, talk about it and navigate it.

This glossary includes 38 terms. You can use Cmd + F (Ctrl + F on Windows) to search through the content, or just scroll down the list.


Amplification

When content is shared on the social web in high numbers, or when the mainstream media gives attention or oxygen to a fringe rumor or conspiracy. Content can be amplified organically, through coordinated efforts by motivated communities and/or bots, or by paid advertisements or “boosts” on social platforms. First Draft’s Rory Smith and Carlotta Dotto wrote up an explainer about the science and terminology of bots.


Algorithm

Algorithms in social and search platforms provide a sorting and filtering mechanism for the content users see on a “newsfeed” or search results page. Algorithms are constantly adjusted to increase the time a user spends on a platform. How an algorithm works is one of the most closely guarded secrets of social and search platforms; there has been little transparency with researchers, the press or the public. Digital marketers are well versed in changes to the algorithm, and it’s their tactics – using videos, “dark posts,” tracking pixels, etc. – that also translate well for disinformation campaigns and bad actors.

Anonymous message boards

A discussion platform that does not require people who post to publicly reveal their real name in a handle or username, like Reddit, 4chan, 8chan, and Discord. Anonymity can allow for discussions that might be more honest, but can also become toxic, and often without repercussions to the person posting. Reporting recommendation: as with 4chan, if you choose to include screenshots, quotes, and links to the platform, know that published reports that include this information have the potential to fuel recruitment to the platform, amplify campaigns that are designed to attack reporters and news organizations or sow mischief and confusion, and inadvertently create search terms/drive search to problematic content.

Analytics (sometimes called “metrics”)

Numbers that are accumulated on every social handle and post, sometimes used to analyze “reach”: the extent to which other people might have seen or engaged with a post.


API

An API, or application programming interface, allows data from one web tool or application to be exchanged with, or received by, another. Many of those working to examine the source and spread of polluted information depend on access to social platform APIs, but not all are created equal, and the extent of publicly available data varies from platform to platform. Twitter’s open and easy-to-use API has enabled more research and investigation of its network, which is why you are more likely to see research done on Twitter than on Facebook.

Artificial intelligence (“AI”)

Computer programs that are “trained” to solve problems. These programs “learn” from data parsed through them, adapting their methods and responses in a way that maximizes accuracy. As disinformation grows in scope and sophistication, some look to AI as a way to effectively detect and moderate concerning content, like the Brazilian fact-checking organization Aos Fatos’s chatbot Fátima, which replies to questions from the public with fact checks via Facebook Messenger. But AI can also contribute to the problem, enabling things like “deepfakes” and disinformation campaigns that can be targeted and personalized much more efficiently. Reporting recommendation: WITNESS has led the way in understanding and preparing for “synthetic media” and “deepfakes”. See also the report by WITNESS’ Sam Gregory and First Draft, “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening.”


Automation

The process of designing a “machine” to complete a task with little or no human direction. Automation takes tasks that would be time-consuming for humans to complete and turns them into tasks that are quickly completed by a machine. For example, it is possible to automate the process of sending a tweet, so a human doesn’t have to actively click “publish.” Automation processes are the backbone of techniques used to effectively manufacture the amplification of disinformation. First Draft’s Rory Smith and Carlotta Dotto have written an explainer about the science and terminology of bots.


Deepfakes

Fabricated media produced using artificial intelligence. By synthesizing different elements of existing video or audio files, AI enables relatively easy methods for creating ‘new’ content, in which individuals appear to speak words and perform actions which are not based on reality. Although still in its infancy, it is likely we will see examples of this type of synthetic media used more frequently in disinformation campaigns as these techniques become more sophisticated. Reporting recommendation: WITNESS has led the way in understanding and preparing for “synthetic media” and “deepfakes.” See also: the report by WITNESS’ Sam Gregory and First Draft, “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening.”


Deplatforming

The removal of an account from a platform like Twitter, Facebook or YouTube, with the goal of reducing the account holder’s reach. Casey Newton writes that there is “evidence that forcing hate figures and their websites to continuously relocate is effective at diminishing their reach over time.”

Disinformation campaign

A coordinated effort by a single actor, or a group of actors, organizations or governments, to foment hate, anger and doubt toward people, systems and institutions. Bad actors often use known marketing techniques and exploit the design of platforms to give momentum to toxic and confusing information, particularly around critical events like democratic elections. The ultimate goal is for their messaging to reach the mainstream media.

Doxing or doxxing

The act of publishing private or identifying information about an individual online, without their permission. This information can include full names, addresses, phone numbers, photos and more.¹¹ Doxing is an example of malinformation, which is accurate information shared publicly to cause harm.


Disinformation

False information that is deliberately created or disseminated with the express purpose of causing harm. Producers of disinformation typically have political, financial, psychological or social motivations.


Encryption

The process of encoding data so that it can be interpreted only by its intended recipients. Many popular messaging services, like Signal, Telegram and WhatsApp, encrypt the text, photos and videos sent between users. Encryption prevents third parties from reading the content of intercepted messages, but it also prevents researchers and journalists from monitoring mis- and disinformation shared on a platform. As more bad actors are deplatformed and their messaging becomes more volatile and coordinated, these conversations will move to closed messaging apps that law enforcement, the public, and the researchers and journalists trying to understand these groups’ motivations and messaging can no longer access.
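
To build intuition for “encoding data so only intended recipients can read it,” here is a toy symmetric cipher in Python: XOR with a shared random key (a one-time pad). This is a sketch for illustration only; real messaging apps use vetted protocols such as the Signal protocol, not hand-rolled code.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key; applying the same key twice
    # restores the original, so encryption and decryption are identical.
    assert len(key) >= len(data), "a one-time pad key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(32)  # shared secret known only to sender and recipient
ciphertext = xor_cipher(b"meet at noon", key)
assert xor_cipher(ciphertext, key) == b"meet at noon"  # round trip with the key
```

Without the key, the ciphertext is unreadable; that is exactly what locks out both eavesdroppers and researchers trying to monitor closed channels.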

Fake followers

Anonymous or imposter social media accounts created to give a false impression of another account’s popularity or credibility. Social media users can pay for fake followers, as well as fake likes, views and shares, to give the appearance of a larger, more engaged audience. For example, one English-based service offers YouTube users one million “high-quality” views and 50,000 likes for $3,150. A large follower count can build cachet for a profile or give the impression of credibility.

Information disorder

A phrase created by Claire Wardle and Hossein Derakhshan to describe the three types of problematic content online:

  • Mis-information is when false information is shared, but no harm is meant.
  • Dis-information is when false information is knowingly shared to cause harm.
  • Mal-information is when genuine information is shared to cause harm, often by moving private information into the public sphere.


LinkedIn

One of the few US platforms allowed in China, LinkedIn can be a good starting point for digital footprinting an online source. Bellingcat published a useful tipsheet on how to get the most out of this platform.


Malinformation

Genuine information that is shared to cause harm. This includes private or revealing information that is spread to harm a person or reputation.


Meme

A term coined by biologist Richard Dawkins in 1976 to describe an idea or behavior that spreads person to person through a culture, propagating rapidly and changing over time. The term is now most frequently used to describe captioned photos or GIFs, often incubated on 4chan or Reddit, that spread online. Take note: memes can be powerful vehicles of disinformation and often receive more engagement than a news article on the same topic from a mainstream outlet.


Misinformation

Information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.


Normie

Online slang meaning a person who consumes mainstream news, uses online platforms and follows popular opinion. It’s not intended as a compliment.


OSINT

An acronym for open-source intelligence. Intelligence agents, researchers, and journalists investigate and analyze publicly available information to confirm or refute assertions made by governments or verify the location and time of day for photos and videos, among many other operations. OSINT online communities are incredibly helpful for exploring and explaining new tools, how investigators have arrived at a conclusion, and often offer help with verifying information. A good example of this is the work Bellingcat did to find the whereabouts of a wanted criminal in the Netherlands in March 2019 — an investigation helped by sixty Twitter users over the course of a single day.


Reddit

A threaded-discussion message board, started in 2005, that requires registration to post. Reddit is the fifth most popular site in the US, with 330 million registered users, called “redditors,” who post in topic-based forums called subreddits. The subreddit “the_Donald” was one of the most active and vitriolic during the 2016 US election cycle.


Satire

Writing that uses literary devices such as ridicule and irony to criticize elements of society. Satire can become misinformation if audiences misinterpret it as fact. There is a known trend of disinformation agents labeling content as satire to prevent it from being flagged by fact-checkers. Some people cling to the label of satire when caught, as did a Florida teacher who was discovered to have a racist podcast.


Shallowfakes

Low-quality manipulations that change the way a video plays. Awareness of shallowfakes increased in April and May 2019, when an edited video of US House Speaker Nancy Pelosi circulated online; the clip had been slowed down to give the illusion that she was drunk while speaking at a recent event. Shallowfake manipulation is currently more concerning than deepfakes, because free tools make it quick to subtly alter a video. Reporting recommendation: enlist the help of a video forensics expert when a trending video appears out of character for the person or people featured.


Shitposting

The act of posting huge amounts of content, most of it ironic, low-quality trolling, to provoke an emotional reaction in less internet-savvy viewers. The ultimate goal is to derail productive discussion and distract readers.


Spam

Unsolicited, impersonal online communication, generally used to promote, advertise or scam the audience. Today, spam is mostly distributed via email, and algorithms detect, filter and block spam from users’ inboxes. Similar technologies to those implemented to curb spam emails could potentially be used in the context of information disorder, or at least offer a few lessons to be learned.
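
For a sense of how the simplest filtering algorithms work, here is a toy sketch that scores a message against a hypothetical list of known spam phrases. Real filters use statistical models trained on millions of messages, but the core idea of scoring content against learned signals is the same.

```python
# Hypothetical phrase list, for illustration only.
SPAM_PHRASES = {"free money", "act now", "you have won"}

def looks_like_spam(message: str, threshold: int = 1) -> bool:
    # Count how many known spam phrases appear in the message,
    # then flag it if the count reaches the threshold.
    text = message.lower()
    hits = sum(phrase in text for phrase in SPAM_PHRASES)
    return hits >= threshold

assert looks_like_spam("ACT NOW for FREE MONEY!!!")
assert not looks_like_spam("Lunch at noon?")
```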

Synthetic Media

A catch-all term for the artificial production, manipulation and modification of data and media by automated means, especially artificial intelligence algorithms, often for the purpose of misleading people or changing the media’s original meaning.


TikTok

Launched in 2017 as a rebrand of the app by the Chinese company ByteDance, TikTok is a mobile video platform. Wired reported in August 2019 that TikTok is fuelling India’s hate speech epidemic. Casey Newton offers a great distillation of what’s at stake, and Facebook is pointing to the app’s growing popularity as proof of a “crowded market” in an appeal to lawmakers to avoid regulation.


Troll

A person who harasses or insults others online. While the term has also been used to describe human-controlled accounts performing bot-like activities, some trolls prefer to be known and often use their real names.


Trolling

The act of deliberately posting offensive or inflammatory content to an online community with the intent of provoking readers or disrupting the conversation.

Troll farm

A group of individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion. One prominent troll farm was the Russia-based Internet Research Agency that spread inflammatory content online in an attempt to interfere in the U.S. presidential election.

Two-factor authentication (2FA)

A second way to identify yourself when logging in to an app or website, making access to your accounts more secure. It is usually tied to your mobile phone number, so that you can receive an SMS with a security code to enter when prompted, giving you access to the app or site. This step can be annoying, but getting hacked because of a weakness in your protocols is more so. Reporting recommendation: protect yourself and your sources by setting up two-factor authentication on every app and website (especially password managers and financial sites) that offers it. It’s also recommended to use a free password manager like LastPass, long passwords of 16 characters or more, and a VPN, and to browse in incognito or private mode (Chrome, Firefox) when viewing toxic information online.
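
The six-digit codes shown by authenticator apps, one common form of 2FA, come from the standard TOTP algorithm (RFC 6238): a counter derived from the current time is combined with a shared secret via HMAC, then truncated to a short number. A minimal Python sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, now=None, timestep=30, digits=6):
    # Derive a counter from the current time, HMAC it with the shared
    # secret, then truncate the digest to a short numeric code
    # (per RFC 6238 / RFC 4226).
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret at time 59 seconds yields 287082.
assert totp(b"12345678901234567890", now=59) == "287082"
```

Because the code changes every 30 seconds and depends on a secret only you and the service hold, a stolen password alone is not enough to log in.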


Verification

The process of determining the authenticity of information posted by unofficial sources online, particularly visual media.²⁵ It emerged as a new skill set for journalists and human rights activists in the late 2000s, most notably in response to the need to verify visual imagery during the “Arab Spring.” Fact-checking, by contrast, looks at official records only, not unofficial or user-generated content, although fact-checking and verification often overlap and will likely merge.

VPN, or virtual private network

Used to encrypt a user’s data and conceal their identity and location. A VPN makes it difficult for platforms to know where someone pushing disinformation or purchasing ads is located. It is also sensible to use a VPN when investigating online spaces where disinformation campaigns are being produced.


WeChat

Started in 2011 as a WhatsApp-like messaging app for closed communication between friends and family in China, the app is now relied upon for interacting with friends on Moments (a feature similar to the Facebook timeline), reading articles from WeChat Public Accounts (public, personal or company accounts that publish stories), calling a taxi, booking movie tickets and paying bills with a single tap. The app, which has 1 billion users, is also reported to be a massive Chinese government surveillance operation. Reporting recommendation: WeChat is popular with Chinese immigrants globally, so if your beat includes immigration, it’s important to understand how the app works and how information gets exchanged.

Wedge Issue

A controversial topic that people care about and feel strongly about. Disinformation is designed to trigger a strong emotional response, whether outrage, fear, humor, disgust, love or anything else from the broad range of human emotions, so that people will share it. The high emotional volatility of wedge issues makes them a target for agents of disinformation looking to get people to share information without thinking twice. Examples include politics, policy, the environment, refugees, immigration, corruption, vaccines and women’s rights.


WhatsApp

With an estimated 1.6 billion users, WhatsApp is the most popular messaging app and the third-most-popular social platform, behind Facebook (2.23 billion) and YouTube (1.9 billion), in terms of monthly active users. WhatsApp launched in 2009 and Facebook acquired the app in February 2014. WhatsApp added end-to-end encryption in 2016, but data breaches by May 2019 made users nervous about privacy and data protection. First Draft was the first NGO to have API access to the platform, through our Brazilian election project Comprova. Even with special access, it was difficult to know where disinformation started and where it might go next.