The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. It’s what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.
In the second part of this series on the psychology of misinformation, we cover the psychological concepts that are relevant to corrections, such as fact checks and debunks. One key theme that will resurface is the central problem of correction: Once we’re exposed to misinformation, it’s very hard to get it out of our heads.
If you want a primer on the psychology of correction, we particularly recommend Briony Swire-Thompson’s “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication”.
This is the second in our series on the psychology of misinformation. Read the first, “The psychology of misinformation: Why we’re vulnerable”, and the third, “The psychology of misinformation: How to prevent it”.
The continued influence effect
The continued influence effect is when misinformation continues to influence people even after it has been corrected. In short, it is the failure of corrections.
Sometimes called “belief echoes”, this is the most important psychological concept to understand when it comes to corrections. There is consensus that once you’ve been exposed to misinformation, it is very, very difficult to dislodge it from your brain.
Corrections often fail because the misinformation, even when explained in the context of a debunk, can later be recalled as a fact. If we think back to dual process theory, quicker, automatic thinking can mean we recall information, but forget that it was corrected. For example, if you read a debunk about a politician falsely shown to be drunk in a manipulated video, you may later simply recall the idea of that politician being drunk, forgetting the negation.
Even effective corrections, such as ones with lots of detail that affirm the facts rather than repeat the misinformation, can wear off after just one week. In the words of Ullrich Ecker, a cognitive scientist at the University of Western Australia, “the continued influence effect seems to defy most attempts to eliminate it.”
Most crucially, it means that when it comes to misinformation, prevention is preferable to cure.
What to read next: “Misinformation and Its Correction: Continued Influence and Successful Debiasing” by Stephan Lewandowsky, Ullrich K.H. Ecker, Colleen M. Seifert, Norbert Schwarz and John Cook, published in Psychological Science in the Public Interest, 13(3), 106–131 in 2012.
Mental models
A mental model is a framework for understanding something that has happened. If your house is on fire, and you see a broken Molotov cocktail, you might reasonably build a mental model that the fire was caused by an attack. If a fireman corrects you, saying that it wasn’t caused by the Molotov cocktail in front of you, you’re left with a gap in your mental model — specifically, what caused the fire.
This means that corrections need to also fill the gap that they create, such as with an alternative causal explanation. This is tricky, though: Replacing a mental model is not always possible with the available information.
What to read next: “Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass Communication” by Briony Swire and Ullrich K.H. Ecker, published in Misinformation and Mass Audiences in 2018.
The implied truth effect
The implied truth effect is when something seems true because it hasn’t been corrected.
This is a major problem for platforms. When corrections, such as fact checks, are applied to some posts but not all of them, it implies that the unlabeled posts are true.
Gordon Pennycook and colleagues recently presented evidence that the implied truth effect exists when misinformation is labeled on some social media posts but not others.
What to read next: “The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warning” by Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand, published in Management Science in 2020.
Tainted truth effect
The tainted truth effect is when corrections make people start to doubt other, true information. The risk is that corrections and warnings create generalized distrust of what people read from sources such as the media.
As with the implied truth effect, the tainted truth effect (also known as the “spillover effect”) is a potential problem with labeling misinformation on social media: It can make people start to doubt everything they see online.
What to read next: “Warning against warnings: Alerted subjects may perform worse. Misinformation, involvement and warning as determinants of witness testimony” by Malwina Szpitalak and Romuald Polczyk, published in the Polish Psychological Bulletin, 41(3), 105–112 in 2010.
Repetition
Repetition causes misinformation to embed in people’s minds and makes it much harder to correct.
There are a couple of reasons for this. First, if you hear a statement more than once, you’re more likely to believe it’s true. Repetition can also make a belief seem more widespread than it is, which can increase its plausibility, leading people to the false conclusion that if that many people think it’s true, there’s a good chance it is.
What to read next: “Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus” by Kimberlee Weaver, Stephen M. Garcia, Norbert Schwarz, and Dale T. Miller, published in Journal of Personality and Social Psychology, 92, 821–833 in 2007.
Illusory truth effect
The illusory truth effect occurs when familiarity makes something seem true when it isn’t.
This can occur with false news headlines even with a single exposure. Exposure can even increase the plausibility of headlines that contradict people’s worldviews.
What to read next: “Prior exposure increases perceived accuracy of fake news” by Gordon Pennycook, Tyrone D. Cannon, and David G. Rand, published in Journal of Experimental Psychology: General, 147(12), 1865–1880 in 2018.
The backfire effect
The backfire effect is the theory that a correction can strengthen belief in misinformation. It has been broken down into the overkill backfire effect, worldview backfire effect, and familiarity backfire effect, each of which we explain here.
The backfire effect is by far the most contested psychological concept in misinformation research and, while famous, it has not been found to occur as a norm; some researchers doubt it exists at all. Reviewing the relevant literature, Full Fact found it to be the exception rather than the norm. More recently, researchers have concluded that “fact-checkers can rest assured that it is extremely unlikely that their fact-checks will lead to increased belief at the group level.”
However, it still permeates the public consciousness. Somewhat ironically, it has been a difficult myth to correct.
What to read next: “The backfire effect: Does it exist? And does it matter for factcheckers?” by Amy Sippitt, published by Full Fact in 2019.
Overkill backfire effect
The overkill backfire effect is when misinformation is more believable than an overly complicated correction, leading the correction to backfire and increase belief in the misinformation. A correction can be too complicated because it’s difficult to understand, too elaborate, or because there are simply too many counterarguments.
A recent study found no evidence of a backfire from too many counterarguments.
What to read next: “Refutations of Equivocal Claims: No Evidence for an Ironic Effect of Counterargument Number” by Ullrich K.H. Ecker, Stephan Lewandowsky, Kalpana Jayawardana, and Alexander Mladenovic, published in Journal of Applied Research in Memory and Cognition in 2018.
Worldview backfire effect
The worldview backfire effect is when a person rejects a correction because it is incompatible with their worldview, and in doing so strengthens their original belief.
Although, like all backfire effects, it lacks robust evidence for its existence, the advice given to mitigate it is still relevant and worth noting. For example, one study advises affirming people’s worldviews when making a correction. Self-affirmation can help, too: One study found that people are more likely to accept views that challenge their worldviews after being asked to write about something about themselves that they were proud of.
What to read next: “Searching for the backfire effect: Measurement and design considerations” by Briony Swire-Thompson, Joseph DeGutis, and David Lazer, published as a preprint in 2020.
Familiarity backfire effect
The familiarity backfire effect is the idea that corrections, by repeating falsehoods, make them more familiar and therefore more believable.
Briony Swire-Thompson, associate research scientist at Northeastern University, and colleagues found no evidence of a familiarity backfire effect: “corrections repeating the myth were simply less effective (compared to fact affirmations) rather than backfiring.”
What to read next: “The role of familiarity in correcting inaccurate information” by Briony Swire, Ullrich K.H. Ecker and Stephan Lewandowsky, published in Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(12), 1948–1961 in 2017.
Look out for part three and stay up to date with First Draft’s work by becoming a subscriber and following us on Facebook and Twitter.