Critical Thinking Tutorial: Fallacious Reasoning

Some people mistakenly believe that logical reasoning is infallible or that it guarantees the absolute truth. However, logical reasoning is a tool that helps evaluate the validity of arguments and draw reasonable conclusions. It is still subject to limitations, biases, and the availability of accurate information. In this section, you will explore some common fallacies (errors in reasoning) that lead to incorrect or invalid conclusions, undermining the logic of an argument.

This video discusses the types of fallacies to avoid when constructing logical arguments. Examples include:

  • False Dichotomy: Presenting only two options while ignoring others to narrow the argument in one person's favor.
  • Appeal to Emotion: Using emotion-based language to persuade others of a belief or position.
  • Equivocation: Presenting an argument in an ambiguous, double-sided way to mislead.
  • Bandwagon Appeal: Presenting the thoughts of a group to persuade others to think the same way based on peer pressure.
  • False Analogy: Comparing two unalike things based on a trivial similarity to prove a point.

Listen carefully to the definitions of each fallacy and the examples provided, then attempt the drag and drop activity that follows.

Test Your Understanding


Critical Thinking and Decision-Making: Logical Fallacies

Lesson 7: Logical Fallacies


Logical fallacies

If you think about it, vegetables are bad for you. I mean, after all, the dinosaurs ate plants, and look at what happened to them...

[Illustration: a dinosaur eating leaves while a meteor falls in the background]

Let's pause for a moment: That argument was pretty ridiculous. And that's because it contained a logical fallacy.

A logical fallacy is any kind of error in reasoning that renders an argument invalid. Fallacies can involve distorting or manipulating facts, drawing false conclusions, or distracting you from the issue at hand. In theory, it seems like they'd be pretty easy to spot, but this isn't always the case.

Watch the video below to learn more about logical fallacies.

Sometimes logical fallacies are used intentionally to try to win a debate. In these cases, the speaker often presents them with a certain level of confidence, and that makes them more persuasive: if speakers sound like they know what they're talking about, we're more likely to believe them, even if their stance doesn't make complete logical sense.

[Illustration: a politician saying, "I know for a fact..."]

False cause

One common logical fallacy is the false cause. This is when someone incorrectly identifies the cause of something. In my argument above, I stated that dinosaurs became extinct because they ate vegetables. While these two things did happen, a diet of vegetables was not the cause of their extinction.

[Illustration: extinction was not caused by some dinosaurs being vegetarians]

Maybe you've heard the false cause more commonly represented by the phrase "correlation does not equal causation," meaning that just because two things occurred around the same time, it doesn't necessarily mean that one caused the other.
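To see how weak "they happened around the same time" is as evidence, here is a small, purely illustrative Python sketch (not part of the original lesson): it builds two unrelated random-walk series and measures how correlated they happen to look. All names and numbers are made up for the example.

    import random

    random.seed(0)

    def random_walk(n):
        # A series that just drifts up or down at random each step.
        level, series = 0.0, []
        for _ in range(n):
            level += random.gauss(0, 1)
            series.append(level)
        return series

    def correlation(xs, ys):
        # Pearson correlation coefficient, computed by hand.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Two unrelated trends, e.g. "vegetable consumption" and "dinosaur population".
    veggies = random_walk(200)
    dinosaurs = random_walk(200)

    # Often far from zero, even though neither series affects the other.
    print(round(correlation(veggies, dinosaurs), 2))

Re-running the sketch with different seeds makes the apparent relationship swing from strongly positive to strongly negative even though neither series influences the other, which is exactly why correlation on its own does not establish causation.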

Straw man

A straw man is when someone takes an argument and misrepresents it so that it's easier to attack. For example, let's say Callie is advocating that sporks should be the new standard for silverware because they're more efficient. Madeline responds that she's shocked Callie would want to outlaw spoons and forks, and put millions out of work at the fork and spoon factories.

[Illustration: Madeline accusing Callie of wanting to outlaw spoons and forks]

A straw man is frequently used in politics in an effort to discredit another politician's views on a particular issue.

Begging the question

Begging the question is a type of circular argument where someone includes the conclusion as a part of their reasoning. For example, George says, "Ghosts exist because I saw a ghost in my closet!"

[Illustration: George claiming that ghosts exist because he saw one in his closet]

George concluded that "ghosts exist." His premise also assumed that ghosts exist. Rather than assuming that ghosts exist from the outset, George should have used evidence and reasoning to try to prove that they exist.

[Illustration: George using math and reasoning to try to prove that ghosts exist]

Since George assumed that ghosts exist, he was less likely to see other explanations for what he saw. Maybe the ghost was nothing more than a mop!

[Illustration: a split screen showing a ghost in a closet on the left, and the same closet with a mop in it on the right]

False dilemma

The false dilemma (or false dichotomy) is a logical fallacy where a situation is presented as being an either/or option when, in reality, there are more possible options available than just the chosen two. Here's an example: Rebecca rings the doorbell but Ethan doesn't answer. She then thinks, "Oh, Ethan must not be home."

[Illustration: the false dilemma of either Ethan being home or his home being empty]

Rebecca posits that either Ethan answers the door or he isn't home. In reality, he could be sleeping, doing some work in the backyard, or taking a shower.

[Illustration: Ethan sleeping, doing yard work, and taking a shower]

Most logical fallacies can be spotted by thinking critically. Make sure to ask questions: Is logic at work here or is it simply rhetoric? Does their "proof" actually lead to the conclusion they're proposing? By applying critical thinking, you'll be able to detect logical fallacies in the world around you and prevent yourself from using them as well.


Bandwagon Fallacy (29 Examples + Definition)


Ever felt the pressure to agree with the majority, even when you secretly disagreed? You're not alone, and there's a term for this psychological trick: the Bandwagon Fallacy.

A Bandwagon Fallacy is the mistaken belief that an idea or action is correct or beneficial simply because it is popular or endorsed by influential people.

You'll learn why our brains are wired to make this mistake, discover its historical roots, and explore examples from politics to advertising. Along the way, you'll gain the tools to spot and counter this fallacy in your own life.

What is a Bandwagon Fallacy?


Picture this: You're at a party and everyone is drinking a specific brand of soda. You might think, "Well, if everyone is drinking it, it must be good." That's the Bandwagon Fallacy at play. Simply put, you're led to believe something is true or good because a lot of people are doing it. You might commonly know this as peer pressure.

In psychology terms, this fallacy taps into our social nature. We're wired to seek approval and fit in, making us susceptible to group thinking. But remember, popularity doesn't equal correctness. Just because many people believe in something doesn't make it true or right.

The Bandwagon Fallacy is an appeal to popularity or authority, which diverts attention away from the actual argument or evidence. The aim is to make you feel like you'll miss out or be socially awkward if you don't join in.

Fallacies are errors of reasoning, usually found in arguments, that lead to inconsistent reasoning or incorrect conclusions. The bandwagon argument is a logical fallacy because it tries to convince us that the majority or popular opinion is best without offering relevant evidence.

In particular, the bandwagon fallacy is an informal fallacy because the content of the argument is flawed, not its structure; if the structure were flawed, it would be called a formal fallacy.

Most simply, the bandwagon effect is when people support a common belief or claim without asking for proof or for the supporters to explain the conclusion. It rests on what appeals to the masses rather than on a valid argument.

Other Names for the Bandwagon Fallacy

  • Appeal to Popularity
  • Appeal to the Majority
  • Appeal to the People
  • Argumentum ad Populum

Other Logical Fallacies

  • Appeal to Authority Fallacy - Arguing that something is right because an important person or authoritative source says so.
  • Appeal to Tradition - Arguing that something is right because that's the way it's "always been done".
  • Ad Hominem - Countering an argument by attacking someone's character instead of the topic.
  • Slippery Slope - Arguing that one event will inevitably lead to a series of other events.

The term "Bandwagon Fallacy" finds its roots in American politics. The phrase "jump on the bandwagon" was coined during the presidential campaign of William Henry Harrison in 1840.

During that time, a bandwagon was quite literally a wagon that carried a circus band. Politicians would use these bandwagons in their parades to gather a crowd. Soon, the idea of "jumping on the bandwagon" became a metaphor for joining a popular cause.

29 Examples

1) Social Media Likes

"Just look at how many likes this post has; it must be true!"

This example shows the Bandwagon effect in the context of social media. A high number of likes does not validate the truth of a statement or post.

2) Fashion Trends

"Everyone is wearing this popular fashion brand now, so it must be the best."

In fashion, the Bandwagon Fallacy convinces people that a brand's quality is determined by its popularity, not its actual value or utility. If the choice were based on value or utility, that would be a logical reason to wear it.

3) Academic Choices

"All of my friends are majoring in Business, so I should too."

Here, the Bandwagon Fallacy leads one to choose an academic path based on what friends are doing, rather than personal interest or career goals. A person who makes an education decision based on what large groups are doing may also end up facing more competition for jobs!

4) Eating Habits

"All my friends are going vegan, so that's what I'll do too."

Choosing a diet based solely on its popularity is another example of the Bandwagon Fallacy. While it may be easier to stick to a diet that many people around you follow, it may not be a healthy choice for you.

5) Fitness Fads

"Everyone at the gym is doing HIIT workouts, so they must be the best."

In fitness, trends often take hold quickly. However, the popularity of a workout regimen doesn't automatically mean it's the most effective.

6) Parenting Styles

"All the parents in my circle are into attachment parenting, so that's what I should do."

The Bandwagon Fallacy can even affect how people choose to raise their children.

7) Popular TV Shows

"If you're not watching this series, you're missing out!"

Just because a TV show is popular doesn't mean it's high-quality or to everyone's taste.

8) Music Preferences

"This artist is topping the charts, so their music must be good."

Music is subjective. Popularity on the charts isn't the sole indicator of quality.

9) Sports Teams


"Everyone supports this team; you should too!"

In sports, the Bandwagon Fallacy can lead people to support teams for their popularity rather than any personal connection.

10) Investing Trends


"All the top investors are getting into cryptocurrency; you should invest now."

Following investment trends without doing your research is a classic Bandwagon Fallacy.

11) Holiday Destinations


"This place is the most Instagrammed destination; it must be worth visiting."

A popular hashtag doesn't always equal an exceptional travel experience.

12) College Choices

"Everyone I know is going to Ivy League schools; I should aim for those as well."

The Bandwagon Fallacy can wrongly guide your academic future, making you aim for popular choices instead of what's best for you.

13) Job Opportunities

"People are saying that tech jobs are the future; I should switch my career."

Career choices should be based on individual skills and interests, not what's trending.

14) Medical Treatments

"Many people are going for this alternative treatment; it must be effective."

Medical decisions should be based on scientific evidence, not popular opinion.

15) Skincare Products

"This skincare line is sold out everywhere; it must be effective."

Product efficacy is not guaranteed by its popularity.

16) Political Opinions

"Most people in my community are voting for this candidate, so they must be good."

The Bandwagon Fallacy can have serious implications in political contexts.

17) Religious Beliefs

"Millions of people follow this faith; it must be the true one."

A large following doesn't validate any religion's claims.

18) Philosophical Views

"Many influential thinkers were existentialists; therefore, existentialism must be correct."

Again, popularity among a group of intellectuals doesn't make a philosophical viewpoint universally correct.

19) Movie Choices

"This film won several awards; it's a must-see."

Awards do not always align with personal taste or artistic quality.

20) Pet Choices


"Everyone has a dog; cats must be bad pets."

The Bandwagon Fallacy wrongly informs pet choices based on popularity.

21) Food Choices

"This restaurant always has a long line; the food must be good."

Long lines can be deceptive and are not the sole indicators of quality.

22) Car Choices

"Most people in my city drive this brand of car; it must be the best."

Popularity doesn't always equate to quality or suitability for your needs.

23) Tech Gadgets

"Everyone is using this brand's smartphone, so it must be the best."

Tech choices should be based on individual needs, not on what's popular.

24) Book Choices

"This book is a bestseller, so it must be good."

Bestsellers can be hit or miss; personal preference matters.

25) Video Games

"This game is all the rage right now; you should get it."

Popular games are not universally enjoyable for everyone.

26) Beverage Choices

"Everyone's drinking this new health drink; it must be beneficial."

Popular health trends can be misleading and are not always backed by science.

27) Art Appreciation

"This artist's work sells for millions; they must be a great artist."

High prices and popular demand don't necessarily reflect artistic quality.

28) Home Decor

"Everyone is using this home decor style; it must be the best."

Home decor is subjective; what works for the majority may not work for you.

29) Workplace Practices

"All the successful companies are implementing this management style; we should too."

Copying popular trends doesn't always yield success; individual company needs vary.

Psychological Mechanisms Behind It

Our brains are wired to make life easier, and sometimes that means taking shortcuts. One of these mental shortcuts is called "heuristics," which are quick ways to solve problems or make judgments.

When you see a lot of people doing the same thing, your brain may automatically assume it's the right or best thing to do. This heuristic often works in your favor. For example, if everyone is running in one direction, there's a good chance you should be running too. It might be a sign of danger.

However, these mental shortcuts can also lead us astray, as in the case of the Bandwagon Fallacy. The desire to fit in and gain social approval is strong. This need is rooted in our evolutionary past where being part of a group increased chances of survival.

The Bandwagon Fallacy capitalizes on this natural inclination to follow the herd. It's why you might find yourself swayed by popular opinion or majority rule, even when logical reasoning suggests otherwise.

The Impact of the Bandwagon Fallacy

The impact of the Bandwagon Fallacy is far-reaching. In the short term, you might end up making poor choices, like buying a product that doesn't suit your needs or voting for a political candidate without understanding their platform.

These decisions, guided by the false comfort of popular belief that "everyone else is doing it," can lead to regret or a sense of betrayal when the popular choice doesn't meet your expectations.

In the long term, falling for the Bandwagon Fallacy can lead to a lack of critical thinking. You become accustomed to following the crowd, which means you may stop questioning things or seeking out evidence. This habit can be detrimental in various aspects of life, from your professional choices to your relationships.

How to Identify and Counter It

Spotting the Bandwagon Fallacy requires a keen sense of awareness. First, pay attention to the language used in an argument or sales pitch. Phrases like "everyone is doing it," "join the majority," or "don't miss out" are often signs of this fallacy at work.

The key is to separate popularity from credibility. Just because something is popular doesn't make it true or right for you.

Countering the Bandwagon Fallacy involves critical thinking and sometimes a bit of courage. If you find yourself leaning towards a popular choice, take a moment to consider why. Are you convinced by the merits of the option, or are you simply drawn in by its popularity?

If it's the latter, try to weigh the evidence and consider alternatives. Don't be afraid to go against the grain if your reasoning leads you in a different direction. After all, the most popular choice is not always the best one for you.



3 Fallacies

I. What Are Fallacies?

Fallacies are mistakes of reasoning, as opposed to mistakes of a factual nature. If I counted twenty people in the room when there were in fact twenty-one, then I made a factual mistake. On the other hand, if I believe that there are round squares, I believe something that is contradictory. A belief in “round squares” is a mistake of reasoning and contains a fallacy because, if my reasoning were good, I would not believe something that is logically impossible.

In some discussions, a fallacy is taken to be an undesirable kind of argument or inference. In our view, this definition of fallacy is rather narrow, since we might want to count certain mistakes of reasoning as fallacious even though they are not presented as arguments. For example, making a contradictory claim seems to be a case of fallacy, but a single claim is not an argument. Similarly, putting forward a question with an inappropriate presupposition might also be regarded as a fallacy, but a question is also not an argument. In both of these situations though, the person is making a mistake of reasoning since they are doing something that goes against one or more principles of correct reasoning. This is why we would like to define fallacies more broadly as violations of the principles of critical thinking , whether or not the mistakes take the form of an argument.

The study of fallacies is an application of the principles of critical thinking. Being familiar with typical fallacies can help us avoid them and help explain other people’s mistakes.

There are different ways of classifying fallacies. Broadly speaking, we might divide fallacies into four kinds:

  • Fallacies of inconsistency: cases where something inconsistent or self-defeating has been proposed or accepted.
  • Fallacies of relevance: cases where irrelevant reasons are being invoked or relevant reasons being ignored.
  • Fallacies of insufficiency: cases where the evidence supporting a conclusion is insufficient or weak.
  • Fallacies of inappropriate presumption: cases where we have an assumption or a question presupposing something that is not reasonable to accept in the relevant conversational context.

II. Fallacies of Inconsistency

Fallacies of inconsistency are cases where something inconsistent, self-contradictory or self-defeating is presented.

1. Inconsistency

Here are some examples:

  • “One thing that we know for certain is that nothing is ever true or false.” – If there is something we know for certain, then there is at least one truth that we know. So it can’t be the case that nothing is true or false.
  • “Morality is relative and is just a matter of opinion, and so it is always wrong to impose our opinions on other people.” – But if morality is relative, it is also a relative matter whether we should impose our opinions on other people. If we should not do that, there is at least one thing that is objectively wrong.
  • “All general claims have exceptions.” – This claim itself is a general claim, and so if it is to be regarded as true we must presuppose that there is an exception to it, which would imply that there exists at least one general claim that does not have an exception. So the claim itself is inconsistent.

2. Self-Defeating Claims

A self-defeating statement is a statement that, strictly speaking, is not logically inconsistent but is instead obviously false. Consider these examples:

  • Very young children are fond of saying “I am not here” when they are playing hide-and-seek. The statement itself is not logically inconsistent, since it is logically possible for the child not to be where she is. What is impossible is to utter the sentence as a true sentence (unless it is used, for example, in a recorded telephone message).
  • Someone who says, “I cannot speak any English.”
  • Here is an actual example: A TV program in Hong Kong was critical of the Government. When the Hong Kong Chief Executive Mr. Tung was asked about it, he replied, “I shall not comment on such distasteful programs.” Mr. Tung’s remark was not logically inconsistent, because what it describes is a possible state of affairs. But it is nonetheless self-defeating because calling the program “distasteful” is to pass a comment!

III. Fallacies of Relevance

1. Taking Irrelevant Considerations into Account

This includes defending a conclusion by appealing to irrelevant reasons, e.g., inappropriate appeal to pity, popular opinion, tradition, authority, etc. An example would be when a student failed a course and asked the teacher to give him a pass instead, because “his parents will be upset.” Since grades should be given on the basis of performance, the reason being given is quite irrelevant.

Similarly, suppose someone criticizes the Democratic Party’s call for direct elections in Hong Kong as follows: “These arguments supporting direct elections have no merit because they are advanced by Democrats who naturally stand to gain from it.” This is again fallacious because whether the person advancing the argument has something to gain from direct elections is a completely different issue from whether there ought to be direct elections.

2. Failing to Take Relevant Considerations into Account

For example, it is not unusual for us to ignore or downplay criticisms because we do not like them, even when those criticisms are justified. Or sometimes we might be tempted to make a snap decision, believing knee-jerk reactions are the best when, in fact, we should be investigating the situation more carefully and doing more research.

Of course, if we fail to consider a relevant fact simply because we are ignorant of it, then this lack of knowledge does not constitute a fallacy.

IV. Fallacies of Insufficiency

Fallacies of insufficiency are cases where insufficient evidence is provided in support of a claim. Most common fallacies fall within this category. Here are a few popular types:

1. Limited Sampling

  • Momofuku Ando, the inventor of instant noodles, died at the age of 96. He said he ate instant noodles every day. So instant noodles cannot be bad for your health.
  • A black cat crossed my path this morning, and I got into a traffic accident this afternoon. Black cats are really unlucky.

In both cases the observations are relevant to the conclusion, but a lot more data is needed to support it, e.g., studies showing whether many other people who eat instant noodles also live long, and whether those who encounter black cats are in fact more likely to suffer accidents.
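A quick simulation can make the point concrete. The numbers below are invented purely for illustration: even if a habit lowered the average lifespan, a population of ordinary size would still contain plenty of people who live past 96, so one long-lived instant-noodle eater settles nothing.

    import random

    random.seed(2)

    # Invented numbers: suppose the habit lowers the average lifespan to 74
    # years (standard deviation 10 years) in a population of 100,000 people.
    lifespans = [random.gauss(74, 10) for _ in range(100_000)]

    print(round(sum(lifespans) / len(lifespans), 1))  # average lifespan: about 74
    print(sum(1 for x in lifespans if x >= 96))       # people reaching 96 or older: over a thousand

Only a suitably large sample, compared against people without the habit, can distinguish "harmless" from "harmful on average".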

2. Appeal to Ignorance

  • We have no evidence showing that he is innocent. So he must be guilty.

If someone is guilty, it would indeed be hard to find evidence showing that he is innocent. But perhaps there is no evidence to point either way, so a lack of evidence is not enough to prove guilt.

3. Naturalistic Fallacy

  • Many children enjoy playing video games, so we should not stop them from playing.

Many naturalistic fallacies are examples of the fallacy of insufficiency. Empirical facts by themselves are not sufficient for normative conclusions, even if they are relevant.

There are many other kinds of fallacy of insufficiency. See if you can identify some of them.

V. Fallacies of Inappropriate Presumption

Fallacies of inappropriate presumption are cases where we have explicitly or implicitly made an assumption that is not reasonable to accept in the relevant context. Some examples include:

  • Many people like to ask whether human nature is good or evil. This presupposes that there is such a thing as human nature and that it must be either good or bad. But why should these assumptions be accepted, and are they the only options available? What if human nature is neither good nor bad? Or what if good or bad nature applies only to individual human beings?
  • Consider the question “Have you stopped being an idiot?” Whether you answer “yes” or “no,” you admit that you are, or have been, an idiot. Presumably you do not want to make any such admission. We can point out that this question has a false assumption.
  • “Same-sex marriage should not be allowed because by definition a marriage should be between a man and a woman.” This argument assumes that only a heterosexual conception of marriage is correct. But this begs the question against those who defend same-sex marriages and is not an appropriate assumption to make when debating this issue.

VI. List of Common Fallacies

ad hominem (attacking the person)

A theory is discarded not because of any evidence against it or lack of evidence for it, but because of the person who argues for it. Example:

A: The Government should enact minimum-wage legislation so that workers are not exploited.
B: Nonsense. You say that only because you cannot find a good job.

ad ignorantiam (appeal to ignorance)

The truth of a claim is established only on the basis of lack of evidence against it. A simple and obvious example of such a fallacy is to argue that unicorns exist because there is no evidence against their existence. At first sight it seems that many theories that we describe as “scientific” involve such a fallacy. For example, the first law of thermodynamics holds because so far there has not been any negative instance that would serve as evidence against it. But notice, in cases like this, there is evidence for the law, namely positive instances. Notice also that this fallacy does not apply to situations where there are only two rival claims and one has already been falsified. In situations such as this, we may justly establish the truth of the other even if we cannot find evidence for or against it.

ad misericordiam (appeal to pity)

In offering an argument, pity is appealed to. Usually this happens when people argue for special treatment on the basis of their need, e.g., a student argues that the teacher should let them pass the examination because they need it in order to graduate. Of course, pity might be a relevant consideration in certain conditions, as in contexts involving charity.

ad populum (appeal to popularity)

The truth of a claim is established only on the basis of its popularity and familiarity. This is the fallacy committed by many commercials. Surely you have heard of commercials implying that we should buy a certain product because it has made it to the top of a sales rank, or because the brand is the city's "favorite."

Affirming the consequent

Inferring that P is true solely because Q is true and it is also true that if P is true, Q is true.

The problem with this type of reasoning is that it ignores the possibility that there are other conditions apart from P that might lead to Q. For example, if there is a traffic jam, a colleague may be late for work. But if we argue from his being late to there being a traffic jam, we are guilty of this fallacy – the colleague may be late due to a faulty alarm clock.

Of course, if we have evidence showing that P is the only or most likely condition that leads to Q, then we can infer that P is likely to be true without committing a fallacy.

Begging the question (petitio principii)

In arguing for a claim, the claim itself is already assumed in the premise. Example: “God exists because this is what the Bible says, and the Bible is reliable because it is the word of God.”

Complex question or loaded question

A question is posed in such a way that a person, no matter what answer they give to the question, will inevitably commit themselves to some other claim, which should not be presupposed in the context in question.

A common tactic is to ask a yes-no question that tricks people into agreeing to something they never intended to say. For example, if you are asked, “Are you still as self-centered as you used to be?”, no matter whether you answer “yes” or ”no,” you are bound to admit that you were self-centered in the past. Of course, the same question would not count as a fallacy if the presupposition of the question were indeed accepted in the conversational context, i.e., that the person being asked the question had been verifiably self-centered in the past.

Composition (opposite of division)

The whole is assumed to have the same properties as its parts. Anne might be humorous and fun-loving and an excellent person to invite to the party. The same might be true of Ben, Chris and David, considered individually. But it does not follow that it will be a good idea to invite all of them to the party. Perhaps they hate each other and the party will be ruined.

Denying the antecedent

Inferring that Q is false just because if P is true, Q is also true, but P is false.

This fallacy is similar to the fallacy of affirming the consequent. Again the problem is that some alternative explanation or cause might be overlooked. Although P is false, some other condition might be sufficient to make Q true.

Example: If there is a traffic jam, a colleague may be late for work. But it is not right to argue in the light of smooth traffic that the colleague will not be late. Again, his alarm clock may have stopped working.
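Because affirming the consequent and denying the antecedent are errors of argument form, they can be checked mechanically. The following Python sketch (an illustration added here, not part of the original chapter) enumerates every combination of truth values and reports whether any of them makes all the premises true while the conclusion is false.

    from itertools import product

    def implies(p, q):
        # The material conditional "if P then Q" is false only when P is true and Q is false.
        return (not p) or q

    def valid(premises, conclusion):
        # An argument form is valid if no assignment of truth values makes
        # every premise true while the conclusion is false.
        for p, q in product([True, False], repeat=2):
            if all(prem(p, q) for prem in premises) and not conclusion(p, q):
                return False  # counterexample found, so the form is invalid
        return True

    # Modus ponens (valid): if P then Q; P; therefore Q.
    print(valid([implies, lambda p, q: p], lambda p, q: q))          # True

    # Affirming the consequent (invalid): if P then Q; Q; therefore P.
    print(valid([implies, lambda p, q: q], lambda p, q: p))          # False

    # Modus tollens (valid): if P then Q; not Q; therefore not P.
    print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True

    # Denying the antecedent (invalid): if P then Q; not P; therefore not Q.
    print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # False

The valid forms survive the search, while both fallacies fail on the same counterexample (P false, Q true), which is precisely the overlooked alternative the text describes, such as the colleague's faulty alarm clock.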

Division (opposite of composition)

The parts of a whole are assumed to have the same properties as the whole. It is possible that, on the whole, a company is very effective, while some of its departments are not. It would be inappropriate to assume they all are.

Equivocation

Putting forward an argument where a word changes meaning without having it pointed out. For example, some philosophers argue that all acts are selfish. Even if you strive to serve others, you are still acting selfishly because your act is just to satisfy your desire to serve others. But surely the word “selfish” has different meanings in the premise and the conclusion – when we say a person is selfish we usually mean that he does not strive to serve others. To say that a person is selfish because he is doing something he wants, even when what he wants is to help others, is to use the term “selfish” with a different meaning.

False dilemma

Presenting a limited set of alternatives when there are others that are worth considering in the context. Example: “Every person is either my enemy or my friend. If they are my enemy, I should hate them. If they’re my friend, I should love them. So I should either love them or hate them.” Obviously, the conclusion is too extreme because most people are neither your enemy nor your friend.

Gambler’s fallacy

The fallacy of treating statistically independent events as if they were dependent. The untrained mind tends to think that, for example, if a fair coin is tossed five times and the results are all heads, then the next toss will more likely be a tail. It will not be, however. If the coin is fair, the result of each toss is completely independent of the others. Notice the fallacy hinges on the fact that the final result is not known. Had the final result been known already, the statistics would have been dependent.
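A short simulation (illustrative only; the counts are not from any real experiment) shows this independence at work: among fair-coin sequences that begin with five heads, the sixth toss still comes up heads about half the time.

    import random

    random.seed(1)

    # Toss a fair coin six times, many times over, and look only at the
    # sequences that start with five heads in a row.
    next_tosses = []
    for _ in range(200_000):
        tosses = [random.random() < 0.5 for _ in range(6)]  # True means heads
        if all(tosses[:5]):                # the first five tosses were all heads
            next_tosses.append(tosses[5])  # record the sixth toss

    print(len(next_tosses))                     # roughly 200,000 / 32 such streaks
    print(sum(next_tosses) / len(next_tosses))  # share of heads on the next toss: about 0.5

Roughly one sequence in thirty-two starts with five heads, and the next toss shows no "memory" of the streak.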

Genetic fallacy

Thinking that because X derives from Y, and because Y has a certain property, that X must also possess that same property. Example: “His father is a criminal, so he must also be up to no good.”

Non sequitur

A conclusion is drawn that does not follow from the premise. This is not a specific fallacy but a very general term for a bad argument. So a lot of the examples above and below can be said to be non sequitur.

Post hoc, ergo propter hoc (literally, “after this, therefore because of this”)

Inferring that X must be the cause of Y just because X is followed by Y.

For example, having visited a graveyard, I fell ill and inferred that graveyards are spooky places that cause illnesses. Of course, this inference is not warranted since this might just be a coincidence. However, a lot of superstitious beliefs commit this fallacy.

Red herring

Within an argument some irrelevant issue is raised that diverts attention from the main subject. The function of the red herring is sometimes to help express a strong, biased opinion. The red herring (the irrelevant issue) serves to increase the force of the argument in a very misleading manner.

For example, in a debate as to whether God exists, someone might argue that believing in God gives peace and meaning to many people’s lives. This would be an example of a red herring since whether religions can have a positive effect on people is irrelevant to the question of the existence of God. The positive psychological effect of a belief is not a reason for thinking that the belief is true.

Slippery slope

Arguing that if an opponent were to accept some claim C1, then they have to accept some other closely related claim C2, which in turn commits the opponent to a still further claim C3, eventually leading to the conclusion that the opponent is committed to something absurd or obviously unacceptable.

This style of argumentation constitutes a fallacy only when it is inappropriate to think that accepting the initial claim commits one to accepting all the other claims.

An example: “The government should not prohibit drugs. Otherwise the government should also ban alcohol or cigarettes. And then fatty food and junk food would have to be regulated too. The next thing you know, the government would force us to brush our teeth and do exercises every day.”

Straw man

Attacking an opponent while falsely attributing to them an implausible position that is easily defeated.

Example: When many people argue for more democracy in Hong Kong, a typical "straw man" reply is to say that more democracy is not warranted because it is wrong to believe that democracy is the solution to all of Hong Kong's problems. But those who support more democracy in Hong Kong never suggest that democracy can solve all problems (e.g., pollution), and they might even agree that blindly accepting anything is rarely the correct course of action, whether it is democracy or not. These criticisms attack implausible "straw man" positions and do not address the real arguments for democracy.

Suppressed evidence

Where there is contradicting evidence, only confirming evidence is presented.

VII. Exercises

Identify any fallacy in each of these passages. If no fallacy is committed, select “no fallacy involved.”

1. Mr. Lee’s views on Japanese culture are wrong. This is because his parents were killed by the Japanese army during World War II and that made him anti-Japanese all his life.

2. Every ingredient of this soup is tasty. So this must be a very tasty soup.

3. Smoking causes cancer because my father was a smoker and he died of lung cancer.

4. Professor Lewis, the world authority on logic, claims that all wives cook for their husbands. But the fact is that his own wife does not cook for him. Therefore, his claim is false.

5. If Catholicism is right, then no women should be allowed to be priests. But Catholicism is wrong. Therefore, some women should be allowed to be priests.

6. God does not exist because every argument for the existence of God has been shown to be unsound.

7. The last three times I have had a cold I took large doses of vitamin C. On each occasion, the cold cleared up within a few days. So vitamin C helped me recover from colds.

8. The union’s case for more funding for higher education can be ignored because it is put forward by the very people – university staff – who would benefit from the increased money.

9. Children become able to solve complex problems and think of physical objects objectively at the same time that they learn language. Therefore, these abilities are caused by learning a language.

10. If cheap things are no good then this cheap watch is no good. But this watch is actually quite good. So some good things are cheap.

Critical Thinking Copyright © 2019 by Brian Kim is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.



Logical Fallacies: How They Undermine Critical Thinking and How to Avoid Them

by Hershey H. Friedman

This paper explains how to recognize and steer clear of numerous common logical fallacies, ranging from ad hominem arguments to wishful thinking, that can damage an argument. Critical thinking is essential in the digital age, where we must question false or flawed claims. It helps us base our decisions on facts and evidence, not feelings or fallacious reasoning. Unfortunately, many employers struggle to find workers with this skill. To develop it, one must learn how to understand and evaluate the essence of an argument.



  • Review Article
  • Published: 12 January 2022

The psychological drivers of misinformation belief and its resistance to correction

Ullrich K. H. Ecker, Stephan Lewandowsky, John Cook, Philipp Schmid, Lisa K. Fazio, Nadia Brashier, Panayiota Kendeou, Emily K. Vraga & Michelle A. Amazeen

Nature Reviews Psychology, volume 1, pages 13–29 (2022)


Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.


Introduction

Misinformation — which we define as any information that turns out to be false — poses an inevitable challenge for human cognition and social interaction because it is a consequence of the fact that people frequently err and sometimes lie 1 . However, this fact is insufficient to explain the rise of misinformation, and its subsequent influence on memory and decision-making, as a major challenge in the twenty-first century 2 , 3 , 4 . Misinformation has been identified as a contributor to various contentious events, ranging from elections and referenda 5 to political or religious persecution 6 and to the global response to the COVID-19 pandemic 7 .

The psychology and history of misinformation cannot be fully grasped without taking into account contemporary technology. Misinformation helped bring Roman emperors to power 8 , who used messages on coins as a form of mass communication 9 , and Nazi propaganda heavily relied on the printed press, radio and cinema 10 . Today, misinformation campaigns can leverage digital infrastructure that is unparalleled in its reach. The internet reaches billions of individuals and enables senders to tailor persuasive messages to the specific psychological profiles of individual users 11 , 12 . Moreover, social media users’ exposure to information that challenges their worldviews can be limited when communication environments foster confirmation of previous beliefs — so-called echo chambers 13 , 14 . Although there is some controversy about echo chambers and their impact on people’s beliefs and behaviours 12 , 15 , the internet is an ideal medium for the fast spread of falsehoods at the expense of accurate information 16 . However, the prevalence of misinformation cannot be attributed only to technology: conventional efforts to combat misinformation have also not been as successful as hoped 2 — these include educational efforts that focus on merely conveying factual knowledge and corrective efforts that merely retract misinformation.

For decades, science communication has relied on an information deficit model when responding to misinformation, focusing on people’s misunderstanding of, or lack of access to, facts 17 . Thus, a thorough and accessible explanation of facts should overcome the impact of misinformation. However, the information deficit model ignores the cognitive, social and affective drivers of attitude formation and truth judgements 18 , 19 , 20 . For example, some individuals deny the existence of climate change or reject vaccinations despite being aware of a scientific consensus to the contrary 21 , 22 . This rejection of science is not the result of mere ignorance but is driven by factors such as conspiratorial mentality, fears, identity expression and motivated reasoning — reasoning driven more by personal or moral values than objective evidence 19 , 23 , 24 , 25 , 26 . Thus, to understand the psychology of misinformation and how it might be countered, it is essential to consider the cognitive architecture and social context of individual decision makers.

In this Review, we describe the cognitive, social and affective processes that make misinformation stick and leave people vulnerable to the formation of false beliefs. We review the theoretical models that have been proposed to explain misinformation’s resistance to correction. We provide guidance on countering misinformation, including educational and pre-emptive interventions, refutations and psychologically informed technological solutions. Finally, we return to the broader societal trends that have contributed to the rise of misinformation and discuss its practical implications on journalism, education and policymaking.

Different types of misinformation exist — for example, misinformation that goes against scientific consensus or misinformation that contradicts simple, objectively true facts. Moreover, the term disinformation is often specifically used for the subset of misinformation that is spread intentionally 27 . More research is needed on the extent to which different types of misinformation might be associated with differential psychological impacts and barriers for revision, and to establish the extent to which people infer intentionality and how this might affect their processing of the false information. Thus, in this Review we do not draw a sharp distinction between misinformation and disinformation, or different types of misinformation. We use the term misinformation as an umbrella term referring to any information that turns out to be false and reserve the term disinformation for misinformation that is spread with intention to harm or deceive.

Drivers of false beliefs

The formation of false beliefs all but requires exposure to false information. However, lack of access to high-quality information is not necessarily the primary precursor to false-belief formation; a range of cognitive, social and affective factors influence the formation of false beliefs (Fig.  1 ). False beliefs generally arise through the same mechanisms that establish accurate beliefs 28 , 29 . When deciding what is true, people are often biased to believe in the validity of information 30 , and ‘go with their gut’ and intuitions instead of deliberating 31 , 32 . For example, in March 2020, 31% of Americans agreed that COVID-19 was purposefully created and spread 33 , despite the absence of any credible evidence for its intentional development. People are likely to have encountered conspiracy theories about the source of the virus multiple times, which might have contributed to this widespread belief because simply repeating a claim makes it more believable than presenting it only once 34 , 35 . This illusory truth effect arises because people use peripheral cues such as familiarity (a signal that a message has been encountered before) 36 , processing fluency (a signal that a message is either encoded or retrieved effortlessly) 37 , 38 and cohesion (a signal that the elements of a message have references in memory that are internally consistent) 39 as signals for truth, and the strength of these cues increases with repetition. Thus, repetition increases belief in both misinformation and facts 40 , 41 , 42 , 43 . Illusory truth can persist months after first exposure 44 , regardless of cognitive ability 45 and despite contradictory advice from an accurate source 46 or accurate prior knowledge 18 , 47 .

Figure 1. Some of the main cognitive (green) and socio-affective (orange) factors that can facilitate the formation of false beliefs when individuals are exposed to misinformation. Not all factors will always be relevant, but multiple factors often contribute to false beliefs.

Another ‘shortcut’ for truth might involve defaulting to one’s own personal views. Overall belief in news headlines is higher when the news headlines complement the reader’s worldview 48 . Political partisanship can also contribute to false memories for made-up scandals 49 . However, difficulties discerning true from false news headlines can also arise from intuitive (or ‘lazy’) thinking rather than the impact of worldviews 48 . In one study, participants received questions (‘If you’re running a race and you pass the person in second place, what place are you in?’) with intuitive, but incorrect, answers (‘first place’). Participants who answered these questions correctly were better able to discern fake from real headlines than participants who answered these questions incorrectly, independently of whether the headlines aligned with their political ideology 50 . A link has also been reported between intuitive thinking and greater belief in COVID-19 being a hoax, and reduced adherence to public health measures 51 .

Similarly, allowing people to deliberate can improve their judgements. If quick evaluation of a headline is followed by an opportunity to rethink, belief in fake news — but not factual news — is reduced 52 . Likewise, encouraging people to ‘think like fact checkers’ leads them to rely more on their own prior knowledge instead of heuristics. For example, prior exposure to statements such as ‘Deer meat is called veal’ makes these statements seem truer than similar statements encountered for the first time, even when people know the truth (in this case that the correct term is venison 47 ). However, asking people to judge whether the statement is true at initial exposure protects them from subsequently accepting contradictions of well-known facts 53 .

The information source also provides important social cues that influence belief formation. In general, messages are more persuasive and seem more true when they come from sources perceived to be credible rather than non-credible 42 . People trust human information sources more if they perceive the source as attractive, powerful and similar to themselves 54 . These source judgements are naturally imperfect — people believe in-group members more than out-group members 55 , tend to weigh opinions equally regardless of the competence of those expressing them 56 and overestimate how much their beliefs overlap with other people’s, which can lead to the perception of a false consensus 57 . Experts and political elites are trusted by many and have the power to shape public perceptions 58 , 59 ; therefore, it can be especially damaging when leaders make false claims. For example, false claims about public health threats such as COVID-19 made by political leaders can reduce the perceived threat of the virus as well as the perceived efficacy of countermeasures, decreasing adherence to public health measures 60 , 61 .

Moreover, people often overlook, ignore, forget or confuse cues about the source of information 62 . For example, for online news items, a logo banner specifying the publisher (for example, a reputable media outlet or a dubious web page) has been found not to decrease belief in fake news or increase belief in factual news 63 . In the aggregate, groups of laypeople perform as well as professional fact checkers at categorizing news outlets as trustworthy, hyper-partisan or fake 64 . However, when acting alone, individuals — unlike fact checkers — tend to disregard the quality of the news outlet and judge a headline’s accuracy based primarily on the plausibility of the content 63 . Similarly, although people are quick to distrust others who share fake news 65 , they frequently forget information sources 66 . This tendency is concerning: even though a small number of social media accounts spread an outsized amount of misleading content 67 , 68 , 69 , if consumers do not remember the dubious origin, they might not discount the content accordingly.

The emotional content of the information shared also affects false-belief formation. Misleading content that spreads quickly and widely (‘virally’) on the internet often contains appeals to emotion, which can increase persuasion. For example, messages that aim to generate fear of harm can successfully change attitudes, intentions and behaviours under certain conditions if recipients feel they can act effectively to avoid the harm 70 . Moreover, according to a preprint that has not been peer-reviewed, ‘happy thoughts’ are more believable than neutral ones 71 . People seem to understand the association between emotion and persuasion, and naturally shift towards more emotional language when attempting to convince others 72 . For example, anti-vaccination activists frequently use emotional language 73 . Emotion can be persuasive because it distracts readers from potentially more diagnostic cues, such as source credibility. In one study, participants read positive, neutral and negative headlines about the actions of specific people; social judgements about the people featured in the headlines were strongly determined by emotional valence of the headline but unaffected by trustworthiness of the news source 74 .

Inferences about information are also affected by one’s own emotional state. People tend to ask themselves ‘How do I feel about this claim?’, which can lead to influences of a person’s mood on claim evaluation 75 . Using feelings as information can leave people susceptible to deception 76 , and encouraging people to ‘rely on their emotions’ increases their vulnerability to misinformation 77 . Likewise, some specific emotional states such as a happy mood can make people more vulnerable to deception 78 and illusory truth 79 . Thus, one functional feature of a sad mood might be that it reduces gullibility 80 . Anger has also been shown to promote belief in politically concordant misinformation 81 as well as COVID-19 misinformation 82 . Finally, social exclusion, which is likely to induce a negative mood, can increase susceptibility to conspiratorial content 83 , 84 .

In sum, the drivers of false beliefs are multifold and largely overlooked by a simple information deficit model. The drivers include cognitive factors, such as use of intuitive thinking and memory failures; social factors, such as reliance on source cues to determine truth; and affective factors, such as the influence of mood on credulity. Although we have focused on false-belief formation here, the psychology behind sharing misinformation is a related area of active study (Box  1 ).

Box 1 Why people share misinformation

Online misinformation transmission involves both a receiver (the person encountering the misinformation) and a sender (the person making or sharing the misinformation). Thus, it is crucial to consider why people share misinformation with others. On social media, sharing is often dictated by what captures attention. Moral-emotional words such as ‘fight’, ‘greed’, ‘evil’ and ‘punish’ are prioritized in early visual attention over other arousing words 276 and also lead to increased sharing. For example, adding a single moral-emotional word to tweets about contentious political issues such as gun control increases retweets by 20% 277 . An angry mood can also boost misinformation sharing 82 . Because social media algorithms promote content that is likely to be shared, the interplay of psychological tendencies and technological optimization can thus easily lead to viral spread of misinformation online.

‘Lazy’ or intuitive thinking can also lead people to share content that they might recognize as false if they thought about it more. Accordingly, asking people to explain how they know that news headlines are true or false reduces sharing of false political headlines 278 , and brief accuracy nudges — simple interventions that prompt people to consider the accuracy of the information they encounter or share — can reduce sharing of false news about politics 207 and COVID-19 (ref. 279 ). These studies suggest that to the extent that people pay attention to accuracy, they are likely to share things they genuinely believe. Most people report that they would need to be paid to share false news; even when stories favour their political views, they worry about possible reputation costs from sharing false news 65 . Those reputation costs are real — over half of social media users report that they have stopped following someone who posted ‘made-up news and information’ 280 .

If a person’s focus is not on information veracity, they might share misinformation for other reasons 201 . Indeed, 14% of respondents in a 2016 US survey admitted to knowingly sharing false news 281 . There are some innocuous reasons to intentionally spread falsehoods; for example, it is tempting to share information that would be ‘interesting (or consequential) if true’ 282 . Likewise, findings from a preprint that has not been peer-reviewed suggest that people might share positive but questionable claims that could make others feel better, such as ‘A cat saved a woman’s life by scaring off a bear trying to attack her’ 71 . There are also self-serving motives for sharing, such as to signal group membership 283 or for self-promotion 260 . Finally, some people share misinformation to fuel moral outrage in others 277 , 284 . One non-peer reviewed preprint suggests that some people share hostile political rumours and conspiracy theories to incite chaos; this desire to ‘watch the world burn’ is even stronger following social exclusion 285 . With these alternative goals in mind, the viral nature of misinformation does not occur despite its low veracity but because of its ability to fulfil other psychological needs 11 .

Barriers to belief revision

A tacit assumption of the information deficit model is that false beliefs can easily be corrected by providing relevant facts. However, misinformation can often continue to influence people’s thinking even after they receive a correction and accept it as true. This persistence is known as the continued influence effect (CIE) 85 , 86 , 87 , 88 .

In the typical CIE laboratory paradigm, participants are presented with a report of an event (for example, a fire) that contains a critical piece of information related to the event’s cause (‘the fire was probably caused by arson’). That information might be subsequently challenged by a correction, which can take the form of a retraction (a simple negation, such as ‘it is not true that arson caused the fire’) or a refutation (a more detailed correction that explains why the misinformation was false). When reasoning about the event later (for example, responding to questions such as ‘what should authorities do now?’), individuals often continue to rely on the critical information even after receiving — and being able to recall — a correction 89 . Variants of this paradigm have used false real-world claims or urban myths 90 , 91 , 92 . Corrected misinformation can also continue to influence the amount a person is willing to pay for a consumer product or their propensity to promote a social media post 93 , 94 , 95 . The CIE might be an influential factor in the persistence of beliefs that there is a link between vaccines and autism despite strong evidence discrediting this link 96 , 97 or that weapons of mass destruction were found in Iraq in 2003 despite no supporting evidence 98 . The CIE has primarily been conceptualized as a cognitive effect, with social and affective underpinnings.

Cognitive factors

Theoretical accounts of the CIE draw heavily on models of memory in which information is organized in interconnected networks and the availability of information is determined by its level of activation 99 , 100 (Fig.  2 ). When information is encoded into memory and then new information that discredits it is learned, the original information is not simply erased or replaced 101 . Instead, misinformation and corrective information coexist and compete for activation. For example, misinformation that a vaccine has caused an unexpectedly large number of deaths might be incorporated with knowledge related to diseases, vaccinations and causes of death. A subsequent correction that the information about vaccine-caused deaths was inaccurate will also be added to memory and is likely to result in some knowledge revision. However, the misinformation will remain in memory and can potentially be reactivated and retrieved later on.

Figure 2. a | Integration account of continued influence. The correction had the representational strength to compete with or even dominate the misinformation (‘myth’) but was not integrated into the relevant mental model. Depending on the available retrieval cues, this lack of integration can lead to unchecked misinformation retrieval and reliance. b | Retrieval account of continued influence. Integration has taken place but the myth is represented in memory more strongly, and thus dominates the corrective information in the competition for activation and retrieval. Note that the two situations are not mutually exclusive: avoiding continued influence might require both successful integration and retrieval of the corrective information.

One school of thought — the integration account — suggests that the CIE arises when a correction is not sufficiently encoded and integrated with the misinformation in the memory network (Fig.  2a ). There is robust evidence that integration of the correction and misinformation is a necessary, albeit not sufficient, condition for memory updating and knowledge revision 100 . This view implies that a successful revision requires detecting a conflict between the misinformation and the correction, the co-activation of both representations in memory, and their subsequent integration 102 . Evidence for this account comes from the success of interventions that bolster conflict detection, co-activation, and integration of misinformation and correction 103 , 104 . Assuming that information integration relies on processing in working memory (the short-term store used to briefly hold and manipulate information in the service of thinking and reasoning), the finding that lower working memory capacity predicts greater susceptibility to the CIE is also in line with this account 105 (although it has not been replicated 106 ). This theory further assumes that as the amount of integrated correct information increases, memory for the correction becomes stronger, at the expense of memory for the misinformation 102 . Thus, both the interconnectedness and the amount of correct information can influence the success of memory revision.

An alternative account is based on the premise that the CIE arises from selective retrieval of the misinformation even when corrective information is present in memory (Fig.  2b ). For example, it has been proposed that a retraction causes the misinformation representation to be tagged as false 107 . The misinformation can be retrieved without the false tag, but the false tag cannot be retrieved without concurrent retrieval of the misinformation. One instantiation of this selective-retrieval view appeals to a dual-process mechanism, which assumes that retrieval can occur based on an automatic, effortless process signalling information familiarity (‘I think I have heard this before’) or a more strategic, effortful process of recollection that includes contextual detail (‘I read about this in yesterday’s newspaper’) 108 . According to this account of continued influence, the CIE can arise if there is automatic, familiarity-driven retrieval of the misinformation (for example, in response to a cue), without explicit recollection of the corrective information and associated post-retrieval suppression of the misinformation 107 , 109 .

Evidence for this account comes from studies demonstrating that the CIE increases as a function of factors associated with increased familiarity (such as repetition) 107 and reduced recollection (such as advanced participant age and longer study-test delays) 92 . Neuroimaging studies have suggested that activity during retrieval, when participants answer inference questions about an encoded event — but not when the correction is encoded — is associated with continued reliance on corrected misinformation 110 , 111 . This preliminary neuroimaging evidence generally supports the selective-retrieval account of the CIE, although it suggests that the CIE is driven by misinformation recollection rather than misinformation familiarity, which is at odds with the dual-process interpretation.

Both of these complementary theoretical accounts of the CIE can explain the superiority of detailed refutations over retractions 92 , 112 , 113 . Provision of additional corrective information can strengthen the activation of correct information in memory or provide more detail to support recollection of the correction 89 , 103 , which makes a factual correction more enduring than the misinformation 90 . Because a simple retraction will create a gap in a person’s mental model, especially in situations that require a causal explanation (for example, a fire must be caused by something), a refutation that can fill in details of a causal, plausible, simple and memorable alternative explanation will reduce subsequent recall of the retracted misinformation.
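
Both accounts can also be conveyed with a deliberately minimal toy simulation in Python. The activation values, the integration ‘boost’ and the simple choice rule below are placeholders chosen for exposition only; they are not parameters estimated in the reviewed work.

# Toy illustration (not taken from the reviewed studies) of the continued influence
# effect: a "myth" and its correction compete for retrieval, with the probability of
# retrieving the myth given by a simple ratio-of-activations choice rule.

import random

def retrieval_prob(myth_activation: float, correction_activation: float) -> float:
    """Probability that the myth, rather than the correction, is retrieved."""
    return myth_activation / (myth_activation + correction_activation)

def simulate(myth_act: float, corr_act: float, integrated: bool, trials: int = 10_000) -> float:
    """Fraction of inference questions answered by relying on the myth.

    'integrated' stands in for the integration account: when the correction is linked
    into the event model, retrieval cues activate it as well (assumed boost of 0.8).
    """
    integration_boost = 0.8 if integrated else 0.0
    p_myth = retrieval_prob(myth_act, corr_act + integration_boost)
    return sum(random.random() < p_myth for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    # Retrieval account: correction is integrated, but the myth is represented more strongly.
    print("strong myth, integrated correction:     ", simulate(3.0, 1.0, integrated=True))
    # Integration account: correction is equally strong but never linked to the event model.
    print("equal strength, unintegrated correction:", simulate(1.0, 1.0, integrated=False))
    # Detailed, well-integrated refutation: more corrective detail raises its activation.
    print("detailed refutation, integrated:        ", simulate(1.0, 2.0, integrated=True))

In this toy setup, reliance on the myth stays high either when the correction is never integrated into the event model or when the myth's representation is simply stronger, and it falls when a detailed, well-integrated refutation raises the correction's activation, which is the qualitative behaviour the two accounts describe.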

Social and affective factors

These cognitive accounts do not explicitly consider the influence of social and affective mechanisms on the CIE. One socio-affective factor is source credibility, the perceived trustworthiness and expertise of the sources providing the misinformation and correction. Although source credibility has been found to exert little influence on acceptance of misinformation if the source is a media outlet 63 , 114 , there is generally strong evidence that credibility has a significant impact on acceptance of misinformation from non-media sources 42 , 88 , 115 .

The credibility of a correction source also matters for (post-correction) misinformation reliance 116 , although perhaps less than the credibility of the misinformation source 88 . The effectiveness of factual corrections might depend on perceived trustworthiness rather than perceived expertise of the correction source 117 , 118 , although perceived expertise might matter more in science-related contexts, such as health misinformation 119 , 120 . It can also be quite rational to discount a correction if the correction source is low in credibility 121 , 122 . Further complicating matters, the perceived credibility of a source varies across recipients. In extreme cases, people with strong conspiratorial ideation tendencies might mistrust any official source (for example, health authorities) 19 , 26 . More commonly, people tend to trust sources that are perceived to share their values and worldviews 54 , 55 .

A second key socio-affective factor is worldview — a person’s values and belief system that grounds their personal and sociocultural identity. Corrections attacking a person’s worldview can be ineffective 123 or backfire 25 , 124 . Such corrections can be experienced as attacking one’s identity, resulting in a chain reaction of appraisals and emotional responses that hinder information revision 19 , 125 . For example, if a message is appraised as an identity threat (for example, a correction that the risks of a vaccine do not outweigh the risks of a disease might be perceived as an identity threat by a person identifying as an anti-vaxxer), this can lead to intense negative emotions that motivate strategies such as discrediting the source of the correction, ignoring the worldview-inconsistent evidence or selectively focusing on worldview-bolstering evidence 24 , 126 . However, how a person’s worldview influences misinformation corrections is still hotly debated (Box  2 ), and there is a developing consensus that even worldview-inconsistent corrections typically have some beneficial impact 91 , 127 , 128 , 129 , 130 , 131 .

The third socio-affective factor that influences the CIE is emotion. One study found that corrections can produce psychological discomfort that motivates a person to disregard the correction to reduce the feeling of discomfort 132 . Misinformation conveying negative emotions such as fear or anger might be particularly likely to evoke a CIE 133 , 134 . This influence might be due to a general negativity bias 11 , 135 or more specific emotional influences. For example, misinformation damaging the reputation of a political candidate might spark outrage or contempt, which might promote continued influence of this misinformation (in particular among non-supporters) 134 . However, there seems to be little continued influence of negative misinformation on impression formation when the person subjected to the false allegation is not a disliked politician, perhaps because reliance on corrected misinformation might be seen as biased or judgemental (that is, it might be frowned upon to judge another person even though allegations have been proven false) 136 .

Other studies have compared emotive and non-emotive events — for example, a plane crash falsely assumed to have been caused by a terror attack, resulting in many fatalities, versus a technical fault, resulting in zero fatalities — and found no impact of misinformation emotiveness on the magnitude of the CIE 137 . Moreover, just as a sad mood can protect against initial misinformation belief 80 , it also seems to facilitate knowledge revision when a correction is encountered 138 . People who exhibit both subclinical depression and rumination tendencies have even been shown to exhibit particularly efficient correction of negative misinformation relative to control individuals, presumably because the salience of negative misinformation to this group facilitates revision 139 .

Finally, there is evidence that corrections can also benefit from emotional recalibration. For example, when misinformation downplays a risk or threat (for example, misinformation that a serious disease is relatively harmless), corrections that provide a more accurate risk evaluation operate partly through their impact on emotions such as hope, anger and fear. This emotional mechanism might help correction recipients realign their understanding of the situation with reality (for example, to realize they have underestimated the real threat) 113 , 140 . Likewise, countering disinformation that seeks to fuel fear or anger can benefit from a downward adjustment of emotional arousal; for example, refutations of vaccine misinformation can reduce anti-vaccination attitudes by mitigating misinformation-induced anger 141 .

Box 2 The elusive backfire effects

There have been concerns that corrective interventions might cause harm by inadvertently strengthening misconceptions and ironically enhancing reliance on the very misinformation that is being corrected. However, these concerns are largely overstated. Specifically, three types of ostensible ‘backfire effects’ have been discussed: the overkill backfire effect, the familiarity backfire effect, and the worldview backfire effect 89 .

Only one study has investigated the potential overkill backfire effect, thought to result from a correction using too many counterarguments. This study found that corrections of dubious claims were more (rather than less) potent when more counterarguments were used, so long as those counterarguments were relevant 286 . Thus, the overkill backfire effect does not have empirical support.

The familiarity backfire effect is thought to result from a correction that unintentionally boosts the familiarity of the misinformation being corrected. This effect is characterized as an increase in misinformation belief following a correction, relative to a pre-correction baseline or no-exposure control condition. There are some findings that repeating corrections might lead to a tendency to recall false claims as true, especially after a 3-day delay or in older adults (age 70+ years) 287 . Likewise, it has been argued that presenting ‘myths versus facts’ flyers that repeat to-be-debunked misinformation when correcting it could lead to familiarity backfire effects after a mere 30 min 288 . However, these findings have not been replicated 107 , 289 or remain unpublished. Other putative familiarity backfire effects did not compare the backfire condition with a proper baseline (for reviews see 92 , 256 ). Strong evidence against familiarity backfire comes from findings that explicit reminders of misinformation enhance the effect of corrections 104 , 290 . Although some researchers have argued that familiarity backfire might occur when a correction spreads novel misinformation to new audiences 185 , only one study has found support for this claim (and only in one of two experiments) 291 , with other studies finding no evidence 112 , 151 , 222 . Other demonstrations of familiarity backfire effects in the context of vaccine misinformation might be driven by worldview rather than familiarity 292 . In sum, misinformation familiarity contributes to the CIE but does not typically produce backfire effects.

The backfire effect of greatest concern is arguably the worldview backfire effect, thought to arise when people dismiss and counterargue against corrections of false beliefs that are central to their identity 126 , 293 . Early demonstrations of worldview backfire effects 124 , 294 , 295 drew much attention from the academy and beyond, but have proven difficult to replicate 81 , 128 , 130 , partially due to unreliable methods 256 , 296 . Although findings of worldview backfire effects continue to be reported occasionally 25 , 297 , overall the potential threat of worldview backfire effects seems limited and should not generally discourage debunking.

Interventions to combat misinformation

As discussed in the preceding section, interventions to combat misinformation must overcome various cognitive, social and affective barriers. The most common type of correction is a fact-based correction that directly addresses inaccuracies in the misinformation and provides accurate information 90 , 102 , 112 , 142 (Fig.  3 ). A second approach is to address the logical fallacies common in some types of disinformation — for example, corrections that highlight inherently contradictory claims such as ‘global temperature cannot be measured accurately’ and ‘temperature records show it has been cooling’ (Fig.  4 ). Such logic-based corrections might offer broader protection against different types of misinformation that use the same fallacies and misleading tactics 21 , 143 . A third approach is to undermine the plausibility of the misinformation or the credibility of its source 144 . Multiple approaches can be combined into a single correction — for example, highlighting both the factual and logical inaccuracies in the misinformation or undermining source credibility and underscoring factual errors 94 , 95 , 145 . However, most research to date has considered each approach separately and more research is required to test synergies between these strategies.

Figure 3. How various barriers to belief updating can be overcome by specific communication strategies applied during correction, using event and health misinformation as examples. Colour shading is used to show how specific strategies are applied in the example corrections.

Figure 4. How various barriers to belief updating can be overcome by specific communication strategies applied during correction, using climate change misinformation as an example. Colour shading is used to show how specific strategies are applied in the example corrections.

More generally, two strategies that can be distinguished are pre-emptive intervention (prebunking) and reactive intervention (debunking). Prebunking seeks to help people recognize and resist subsequently encountered misinformation, even if it is novel. Debunking emphasizes responding to specific misinformation after exposure to demonstrate why it is false. The effectiveness of these corrections is influenced by a range of factors, and there are mixed results regarding their relative efficacy. For example, in the case of anti-vaccination conspiracy theories, prebunking has been found to be more effective than debunking 146 . However, other studies have found debunking to outperform prebunking 87 , 95 , 142 . Reconciling these findings might require considering both the specific type of correction and its placement in time. For example, when refuting climate misinformation, one study found that fact-based debunking outperformed fact-based prebunking, whereas logic-based prebunking and debunking were equally effective 147 .

Some interventions, particularly those in online contexts, are hybrid or borderline cases. For example, if a misleading social media post is tagged with ‘false’ 148 and appears alongside a comment with a corrective explanation, this might count as both prebunking (owing to the tag, which is likely to have been processed before the post) and debunking (owing to the comment, which is likely to have been processed after the post).

Prebunking interventions

The simplest prebunking interventions involve presenting factually correct information 149 , 150 , a pre-emptive correction 142 , 151 or a generic misinformation warning 99 , 148 , 152 , 153 before the misinformation. More sophisticated interventions draw on inoculation theory, a framework for pre-emptive interventions 154 , 155 , 156 . This theory applies the principle of vaccination to knowledge, positing that ‘inoculating’ people with a weakened form of persuasion can build immunity against subsequent persuasive arguments by engaging people’s critical-thinking skills (Fig.  5 ).

Figure 5. ‘Inoculation’ treatment can help people prepare for subsequent misinformation exposure. Treatment typically highlights the risks of being misled, alongside a pre-emptive refutation. The refutation can be fact-based, logic-based or source-based. Inoculation has been shown to increase misinformation detection and facilitate counterarguing and dismissal of false claims, effectively neutralizing misinformation. Additionally, inoculation can build immunity across topics and increase the likelihood of people talking about the issue targeted by the refutation (post-inoculation talk).

An inoculation intervention combines two elements. The first element is warning recipients of the threat of misleading persuasion. For example, a person could be warned that many claims about climate change are false and intentionally misleading. The second element is identifying the techniques used to mislead or the fallacies that underlie the false arguments to refute forthcoming misinformation 157 , 158 . For example, a person might be taught that techniques used to mislead include selective use (‘cherry-picking’) of data (for example, only showing temperatures from outlier years to create the illusion that global temperatures have dropped) or the use of fake experts (for example, scientists with no expertise in climate science). Understanding how those misleading persuasive techniques are applied equips a person with the cognitive tools to ward off analogous persuasion attempts in the future.

Because one element of inoculation is highlighting misleading argumentation techniques, its effects can generalize across topics, providing an ‘umbrella’ of protection 159 , 160 . For example, an inoculation against a misleading persuasive technique used to cast doubt on science demonstrating harm from tobacco was found to convey resistance against the same technique when used to cast doubt on climate science 143 . Moreover, inoculated people are more likely to talk about the target issue than non-inoculated people, an outcome referred to as post-inoculation talk 161 . Post-inoculation talk is more likely to be negative than talk among non-inoculated people, which promotes misinformation resistance both within and between individuals because people’s evaluations tend to weight negative information more strongly than positive information 162 .

Inoculation theory has also been used to explain how strategies designed to increase information literacy and media literacy could reduce the effects of misinformation. Information literacy — the ability to effectively find, understand, evaluate and use information — has been linked to the ability to detect misleading news 163 and reduced sharing of misinformation 164 . Generally, information literacy and media literacy (which focuses on knowledge and skills for the reception and dissemination of information through the media) interventions are designed to improve critical thinking 165 and the application of such interventions to spaces containing many different types of information might help people identify misinformation 166 .

One successful intervention focused on lateral reading — consulting external sources to examine the origins and plausibility of a piece of information, or the credibility of an information source 115 , 167 , 168 . A separate non-peer-reviewed preprint suggests that focusing on telltale signs of online misinformation (including lexical cues, message simplicity and blatant use of emotion) can help people identify fake news 169 . However, research to date suggests that literacy interventions do not always mitigate the effects of misinformation 170 , 171 , 172 , 173 . Whereas most work has used relatively passive inoculation and literacy interventions, applications that engage people more actively have shown promise — specifically, app-based or web-based games 174 , 175 , 176 , 177 . More work is needed to consider what types of literacy interventions are most effective for conferring resistance to different types of misinformation in the contemporary media and information landscape 178 .

In sum, the prebunking approach provides a great tool to act pre-emptively and help people build resistance to misinformation in a relatively general manner. However, the advantage of generalizability can also be a weakness, because it is often specific pieces of misinformation that cause concern, which call for more specific responses.

Debunking interventions

Whereas pre-emptive interventions can equip people to recognize and resist misinformation, reactive interventions retrospectively target concrete instances of misinformation. For example, if a novel falsehood that a vaccine can lead to life-threatening side effects in pregnant women begins to spread, then this misinformation must be addressed using specific counter-evidence. Research broadly finds that direct corrections are effective in reducing — although frequently not eliminating — reliance on the misinformation in a person’s reasoning 86 , 87 . The beneficial effects of debunking can last several weeks 92 , 100 , 179 , although the effects can wear off quicker 145 . There is also evidence that corrections that reduce misinformation belief can have downstream effects on behaviours or intentions 94 , 95 , 180 , 181 — such as a person’s inclination to share a social media post or their voting intentions — but not always 91 , 96 , 182 .

Numerous best practices for debunking have emerged 90 , 145 , 183 . First, the most important element of a debunking correction is to provide a factual account that ideally includes an alternative explanation for why something happened 85 , 86 , 99 , 102 , 184 . For example, if a fire was thought to have been caused by negligence, then providing a causal alternative (‘there is evidence for arson’) is more effective than a retraction (‘there was no negligence’). In general, more detailed refutations work better than plain retractions that do not provide any detail on why the misinformation is incorrect 92 , 100 , 112 , 113 . It can be beneficial to lead with the correction rather than repeat the misinformation to prioritize the correct information and set a factual frame for the issue. However, a preprint that has not been peer-reviewed suggests that leading with the misinformation can be just as, or even more, effective if no pithy fact is available 150 .

Second, the misinformation should be repeated to demonstrate how it is incorrect and to make the correction salient. However, the misinformation should be prefaced with a warning 99 , 148 and repeated only once in order not to boost its familiarity unnecessarily 104 . It is also good to conclude by repeating and emphasizing the accurate information to reinforce the correction 185 .

Third, even though credibility matters less for correction sources compared with misinformation sources 88 , corrections are ideally delivered by or associated with high-credibility sources 116 , 117 , 118 , 119 , 120 , 186 . There is also emerging evidence that corrections are more impactful when they come from a socially connected source (for example, a connection on social media) rather than a stranger 187 .

Fourth, corrections should be paired with relevant social norms, including injunctive norms (‘protecting the vulnerable by getting vaccinated is the right thing to do’) and descriptive norms (‘over 90% of parents are vaccinating their children’) 188 , as well as expert consensus (‘doctors and medical societies around the world agree that vaccinations are important and safe’) 189 , 190 , 191 , 192 . One study found a benefit to knowledge revision if corrective evidence was endorsed by many others on social media, thus giving the impression of normative backing 193 .

Fifth, the language used in a correction is important. Simple language and informative graphics can facilitate knowledge revision, especially if fact comprehension might be otherwise difficult or if the person receiving the correction has a strong tendency to counterargue 194 , 195 , 196 , 197 . When speaking directly to misinformed individuals, empathic communication should be used rather than wielding expertise to argue directives 198 , 199 .

Finally, it has been suggested that worldview-threatening corrections can be made more palatable by concurrently providing an identity affirmation 145 , 200 , 201 . Identity affirmations involve a message or task (for example, writing a brief essay about one’s strengths and values) that highlights important sources of self-worth. These exercises are assumed to protect and strengthen the correction recipient’s self-esteem and the value of their identity, thereby reducing the threat associated with the correction and associated processing biases. However, evidence for the utility of identity affirmations in the context of misinformation corrections is mixed 194 , so firm recommendations cannot yet be made.

In sum, debunking is a valuable tool to address specific pieces of misinformation and largely reduces misinformation belief. However, debunking will not eliminate the influence of misinformation on people’s reasoning at a group level. Furthermore, even well-designed debunking interventions might not have long-lasting effects, thus requiring repeated intervention.

Corrections on social media

Misinformation corrections might be especially important in social media contexts because they can reduce false beliefs not just in the target of the correction but among everyone that sees the correction — a process termed observational correction 119 . Best practices for corrections on social media echo many best practices offline 112 , but also include linking to expert sources and correcting quickly and early 202 . There is emerging evidence that online corrections can work both pre-emptively and reactively, although this might depend on the type of correction 147 .

Notably, social media corrections are more effective when they are specific to an individual piece of content rather than a generalized warning 148 . Social media corrections are effective when they come from algorithmic sources 203 , from expert organizations such as a government health agency 119 , 204 , 205 or from multiple other users on social media 206 . However, particular care must be taken to avoid ostracizing people when correcting them online. To prevent potential adverse effects on people’s online behaviour, such as sharing of misleading content, gentle accuracy nudges that prompt people to consider the accuracy of the information they encounter or highlight the importance of sharing only true information might be preferable to public corrections that might be experienced as embarrassing or confrontational 181 , 207 .

In sum, social media users should be aware that corrections can be effective in this arena and have the potential to reduce false beliefs in people they are connected with as well as bystanders. By contrast, confronting strangers is less likely to be effective. Given the effectiveness of algorithmic corrections, social media companies and regulators should promote implementation and evaluation of technical solutions to misinformation on social media.

Practical implications

Even if optimal prebunking or debunking interventions are deployed, no intervention can be fully effective or reach everyone with the false belief. The contemporary information landscape brings particular challenges: the internet and social media have enabled an exponential increase in misinformation spread and targeting to precise audiences 14 , 16 , 208 , 209 . Against this backdrop, the psychological factors discussed in this Review have implications for practitioners in various fields — journalists, legislators, public health officials and healthcare workers — as well as information consumers.

Implications for practitioners

Combatting misinformation involves a range of decisions regarding the optimal approach (Fig.  6 ). When preparing to counter misinformation, it is important to identify likely sources. Although social media is an important misinformation vector 210 , traditional news organizations can promote misinformation via opinion pieces 211 , sponsored content 212 or uncritical repetition of politician statements 213 . Practitioners must anticipate the misinformation themes and ensure suitable fact-based alternative accounts are available for either prebunking or a quick debunking response. Organizations such as the International Fact-Checking Network or the World Health Organization often form coalitions in the pursuit of this endeavour 214 .

Figure 6. Different strategies for countering misinformation are available to practitioners at different time points. If no misinformation is circulating but there is potential for it to emerge in the future, practitioners can consider possible misinformation sources and anticipate misinformation themes. Based on this assessment, practitioners can prepare fact-based alternative accounts, and either continue monitoring the situation while preparing for a quick response, or deploy pre-emptive (prebunking) or reactive (debunking) interventions, depending on the traction of the misinformation. Prebunking can take various forms, from simple warnings to more involved literacy interventions. Debunking can start either with a pithy counterfact that recipients ought to remember or with dismissal of the core ‘myth’. Debunking should provide a plausible alternative cause for an event or factual details, preface the misinformation with a warning and explain any logical fallacies or persuasive techniques used to promote the misinformation. Debunking should end with a factual statement.

Practitioners must be aware that simple retractions will be insufficient to mitigate the impact of misinformation, and that the effects of interventions tend to wear off over time 92 , 145 , 152 . If possible, practitioners must therefore be prepared to act repeatedly 179 . Creating engaging, fact-based narratives can provide a foundation for effective correction 215 , 216 . However, a narrative format is not a necessary ingredient 140 , 217 , and anecdotes and stories can also be misleading 218 .

Practitioners can also help audiences discriminate between facts and opinion, which is a teachable skill 170 , 219 . Whereas most news consumers do not notice or understand content labels forewarning that an article is news, opinion or advertising 220 , 221 , more prominent labelling can nudge readers to adjust their comprehension and interpretation accordingly. For example, labelling can lead readers to be more sceptical of promoted content 220 . However, even when forewarnings are understood, they do not reliably eliminate the content’s influence 99 , 153 .

If pre-emptive correction is not possible or ineffective, practitioners should take a reactive approach. However, not every piece of misinformation needs to be a target for correction. Due to resource limitations and opportunity costs, corrections should focus on misinformation that circulates among a substantive portion of the population and carries potential for harm 183 . Corrections do not generally increase false beliefs among individuals who were previously unfamiliar with the misinformation 222 . However, if the risk of harm is minimal, there is no need to debunk misinformation that few people are aware of, which could potentially raise the profile of its source.

Implications for information consumers

Information consumers also have a role to play in combatting misinformation by avoiding contributing to its spread. For instance, people must be aware that they might encounter not only relatively harmless misinformation, such as reporting errors, outdated information and satire, but also disinformation campaigns designed to instil fear or doubt, discredit individuals, and sow division 2 , 26 , 223 , 224 . People must also recognize that disinformation can be psychologically targeted through profit-driven exploitation of personal data and social media algorithms 12 . Thoughtless sharing can amplify misinformation that might confuse and deceive others. Sharing misinformation can also contribute to the financial rewards sought by misinformation producers, and deepen ideological divides that disenfranchise voters, encourage violence and, ultimately, harm democratic processes 2 , 170 , 223 , 225 , 226 .

Thus, while engaged with content, individuals should slow down, think about why they are engaging and interrogate their visceral response. People who thoughtfully seek accurate information are more likely to successfully avoid misinformation compared with people who are motivated to find evidence to confirm their pre-existing beliefs 50 , 227 , 228 . Attending to the source and considering its credibility and motivation, along with lateral reading strategies, also increase the likelihood of identifying misinformation 115 , 167 , 171 . Given the benefits of persuading onlookers through observational correction, everyone should be encouraged to civilly, carefully and thoughtfully correct online misinformation where they encounter it (unless they deem it a harmless fringe view) 119 , 206 . All of these recommendations are also fundamental principles of media literacy 166 . Indeed, a theoretical underpinning of media literacy is that understanding the aims of media protects individuals from some adverse effects of being exposed to information through the media, including the pressure to adopt particular beliefs or behaviours 170 .

Implications for policymakers

Ultimately, even if practitioners and information consumers apply all of these strategies to reduce the impact of misinformation, their efforts will be stymied if media platforms continue to amplify misinformation 14 , 16 , 208 , 209 , 210 , 211 , 212 , 213 . These platforms include social media platforms such as YouTube, which are geared towards maximizing engagement even if this means promoting misinformation 229 , and traditional media outlets such as television news channels, where misinformation can negatively impact audiences. For example, two non-peer-reviewed preprints have found that COVID-19 misinformation on Fox News was causally associated with reduced adherence to public health measures and a larger number of COVID-19 cases and deaths 230 , 231 . It is, therefore, important to scrutinize whether the practices and algorithms of media platforms are optimized to promote misinformation or truth.

In this space, policymakers should consider enhanced regulation. These regulations might include penalties for creating and disseminating disinformation where intentionality and harm can be established, and mandates requiring platforms to be more proactive, transparent and effective in their dealings with misinformation. With regard to social media specifically, companies should be encouraged to ban repeat offenders from their platforms, and to generally make engagement with and sharing of low-quality content more difficult 12 , 232 , 233 , 234 , 235 . Regulation must not result in censorship, and proponents of freedom of speech might disagree with attempts to regulate content. However, freedom of speech does not include the right to amplification of that speech. Furthermore, being unknowingly subjected to disinformation can be seen as a manipulative attack on freedom of choice and the right to be well informed 236 . These concerns must be balanced. A detailed summary of potential regulatory interventions can be found elsewhere 237 , 238 .

Other strategies have the potential to reduce the impact of misinformation without regulation of media content. Undue concentration of ownership and control of both social and traditional media facilitates the dissemination of misinformation 239 . Thus, policymakers are advised to support a diverse media landscape and adequately fund independent public broadcasters. Perhaps the most important approach to slowing the spread of misinformation is substantial investment in education, particularly to build information literacy skills in schools and beyond 240 , 241 , 242 , 243 . Another tool in the policymaker’s arsenal is interventions targeted more directly at behaviour, such as nudging policies and public pledges to honour the truth (also known as self-nudging) for policymakers and consumers alike 12 , 244 , 245 .

Overall, solutions to misinformation spread must be multipronged and target both the supply (for example, more efficient fact-checking and changes to platform algorithms and policies) and the consumption (for example, accuracy nudges and enhanced media literacy) of misinformation. Individually, each intervention might only incrementally reduce the spread of misinformation, but one preprint that has not been peer-reviewed suggests that combinations of interventions can have a substantial impact 246 .

More broadly speaking, any intervention to strengthen public trust in science, journalism, and democratic institutions is an intervention against the impacts of misinformation 247 , 248 . Such interventions might include enhancing transparency in science 249 , 250 and journalism 251 , more rigorous fact-checking of political advertisements 252 , and reducing the social inequality that breeds distrust in experts and contributes to vulnerability to misinformation 253 , 254 .

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants 255 . Researchers should also avoid relying on one-item questions with relatively low reliability 256 . Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires 93 , 94 , 95 . Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences 7 , 82 , 205 , 257 , 258 , 259 , 260 , 261 , 262 , 263 .

Second, most existing work has focused on explicit misinformation and text-based materials. Thus, the cognitive impacts of other types of misinformation, including subtler types of misdirection such as paltering (misleading while technically saying the truth) 95 , 264 , 265 , 266 , doctored images 267 , deepfake videos 268 and extreme patterns of misinformation bombardment 223 , are currently not well understood. Non-text-based corrections, such as videos or cartoons, also deserve more exploration 269 , 270 .

Third, additional translational research is needed to explore questions about causality, including the causal impacts of misinformation and corrections on beliefs and behaviours. This research should also employ non-experimental methods 230 , 231 , 271 , such as observational causal inference (research aiming to establish causality in observed real-world data) 272 , and test the impact of interventions in the real world 145 , 174 , 181 , 207 . These studies are especially needed over the long term — weeks to months, or even years — and should test a range of outcome measures, for example those that relate to health and political behaviours, in a range of contexts. Ultimately, the success of psychological research into misinformation should be linked not only to theoretical progress but also to societal impact 273 .

Finally, even though the field has a reasonable understanding of the cognitive mechanisms and social determinants of misinformation processing, knowledge of the complex interplay between cognitive and social dynamics is still limited, as is insight into the role of emotion. Future empirical and theoretical work would benefit from development of an overarching theoretical model that aims to integrate cognitive, social and affective factors, for example by utilizing agent-based modelling approaches. This approach might also offer opportunities for more interdisciplinary work 257 at the intersection of psychology, political science 274 and social network analysis 275 , and the development of a more sophisticated psychology of misinformation.
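
To make the suggestion of agent-based modelling concrete, the following minimal Python sketch lets belief in a single piece of misinformation spread over random social ties while occasional corrections reduce, but do not erase, belief. Every agent, tie, parameter and update rule is a hypothetical placeholder chosen for illustration rather than an empirical claim.

# Minimal agent-based sketch (hypothetical parameters throughout): belief in one piece
# of misinformation spreads over random social ties; corrections arrive at random and
# reduce, but do not eliminate, belief (a stand-in for the continued influence effect).

import random

N_AGENTS = 200
N_STEPS = 50
SHARE_PROB = 0.3        # chance that a believing contact transmits the misinformation
CORRECTION_PROB = 0.05  # chance that an agent receives a correction each step
RESIDUAL = 0.2          # fraction of belief that survives a correction

def run() -> None:
    random.seed(1)
    belief = [0.0] * N_AGENTS            # 0 = disbelieves the myth, 1 = fully believes
    belief[0] = 1.0                      # seed the misinformation with a single agent
    ties = [random.sample(range(N_AGENTS), 5) for _ in range(N_AGENTS)]  # random contacts

    for step in range(N_STEPS):
        new_belief = belief[:]
        for i in range(N_AGENTS):
            for j in ties[i]:            # social exposure via believing contacts
                if belief[j] > 0.5 and random.random() < SHARE_PROB:
                    new_belief[i] = max(new_belief[i], belief[j] * 0.9)
            if random.random() < CORRECTION_PROB:
                new_belief[i] *= RESIDUAL
        belief = new_belief
        if step % 10 == 0:
            print(f"step {step:2d}: mean belief = {sum(belief) / N_AGENTS:.3f}")

if __name__ == "__main__":
    run()

Even a sketch this small shows why an integrative model is attractive: a cognitive quantity (the residual belief left after a correction) and a social quantity (the transmission rate across ties) jointly determine whether average belief keeps climbing or settles back down.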

DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M. & Epstein, J. A. Lying in everyday life. J. Personal. Soc. Psychol. 70 , 979–995 (1996).

Lewandowsky, S., Ecker, U. K. H. & Cook, J. Beyond misinformation: understanding and coping with the post-truth era. J. Appl. Res. Mem. Cogn. 6 , 353–369 (2017).

Zarocostas, J. How to fight an infodemic. Lancet 395 , 676 (2020).

Lazer, D. M. J. et al. The science of fake news. Science 359 , 1094–1096 (2018).

Bennett, W. L. & Livingston, S. The disinformation order: disruptive communication and the decline of democratic institutions. Eur. J. Commun. 33 , 122–139 (2018).

Whitten-Woodring, J., Kleinberg, M. S., Thawnghmung, A. & Thitsar, M. T. Poison if you don’t know how to use it: Facebook, democracy, and human rights in Myanmar. Int. J. Press Politics 25 , 407–425 (2020).

Roozenbeek, J. et al. Susceptibility to misinformation about COVID-19 around the world. R. Soc. Open Sci. 7 , 201199 (2020).

Rich, J. in Private and Public Lies. The Discourse of Despotism and Deceit in the Graeco-Roman World (Impact of Empire 11) (eds Turner, A. J., Kim On Chong-Cossard, J. H. & Vervaet, F. J.) Vol. 11 167–191 (Brill Academic, 2010).

Hekster, O. in The Representation and Perception of Roman Imperial Power (eds. de Blois, L., Erdkamp, P., Hekster, O., de Kleijn, G. & Mols, S.) 20–35 (J. C. Gieben, 2013).

Herf, J. The Jewish War: Goebbels and the antisemitic campaigns of the Nazi propaganda ministry. Holocaust Genocide Stud. 19 , 51–80 (2005).

Acerbi, A. Cognitive attraction and online misinformation. Palgrave Commun. 5 , 15 (2019).

Kozyreva, A., Lewandowsky, S. & Hertwig, R. Citizens versus the internet: confronting digital challenges with cognitive tools. Psychol. Sci. Public Interest. 21 , 103–156 (2020).

Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A. & Bonneau, R. Tweeting from left to right: is online political communication more than an echo chamber? Psychol. Sci. 26 , 1531–1542 (2015).

Del Vicario, M. et al. The spreading of misinformation online. Proc. Natl Acad. Sci. USA 113 , 554–559 (2016).

Garrett, R. K. The echo chamber distraction: disinformation campaigns are the problem not audience fragmentation. J. Appl. Res. Mem. Cogn. 6 , 370–376 (2017).

Vosoughi, S., Roy, D. & Aral, S. The spread of true and false news online. Science 359 , 1146–1151 (2018).

Simis, M. J., Madden, H., Cacciatore, M. A. & Yeo, S. K. The lure of rationality: why does the deficit model persist in science communication? Public Underst. Sci. 25 , 400–414 (2016).

Fazio, L. K., Brashier, N. M., Payne, B. K. & Marsh, E. J. Knowledge does not protect against illusory truth. J. Exp. Psychol. 144 , 993–1002 (2015).

Hornsey, M. J. & Fielding, K. S. Attitude roots and jiu jitsu persuasion: understanding and overcoming the motivated rejection of science. Am. Psychol. 72 , 459 (2017).

Nisbet, E. C., Cooper, K. E. & Garrett, R. K. The partisan brain: how dissonant science messages lead conservatives and liberals to (dis)trust science. Ann. Am. Acad. Political Soc. Sci. 658 , 36–66 (2015).

Schmid, P. & Betsch, C. Effective strategies for rebutting science denialism in public discussions. Nat. Hum. Behav. 3 , 931–939 (2019).

Hansson, S. O. Science denial as a form of pseudoscience. Stud. History Philos. Sci. A 63 , 39–47 (2017).

Amin, A. B. et al. Association of moral values with vaccine hesitancy. Nat. Hum. Behav. 1 , 873–880 (2017).

Lewandowsky, S. & Oberauer, K. Motivated rejection of science. Curr. Dir. Psychol. Sci. 25 , 217–222 (2016).

Trevors, G. & Duffy, M. C. Correcting COVID-19 misconceptions requires caution. Educ. Res. 49 , 538–542 (2020).

Lewandowsky, S. Conspiracist cognition: chaos, convenience, and cause for concern. J. Cult. Res. 25 , 12–35 (2021).

Lewandowsky, S., Stritzke, W. G. K., Freund, A. M., Oberauer, K. & Krueger, J. I. Misinformation, disinformation, and violent conflict: from Iraq and the war on terror to future threats to peace. Am. Psychol. 68 , 487–501 (2013).

Marsh, E. J., Cantor, A. D. & Brashier, N. M. Believing that humans swallow spiders in their sleep. Psychol. Learn. Motiv. 64 , 93–132 (2016).

Rapp, D. N. The consequences of reading inaccurate information. Curr. Dir. Psychol. Sci. 25 , 281–285 (2016).

Pantazi, M., Kissine, M. & Klein, O. The power of the truth bias: false information affects memory and judgment even in the absence of distraction. Soc. Cogn. 36 , 167–198 (2018).

Brashier, N. M. & Marsh, E. J. Judging truth. Annu. Rev. Psychol. 71 , 499–515 (2020).

Prike, T., Arnold, M. M. & Williamson, P. The relationship between anomalistic belief, misperception of chance and the base rate fallacy. Think. Reason. 26 , 447–477 (2020).

Uscinski, J. E. et al. Why do people believe COVID-19 conspiracy theories? Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-015 (2020).

Dechêne, A., Stahl, C., Hansen, J. & Wänke, M. The truth about the truth: a meta-analytic review of the truth effect. Personal. Soc. Psychol. Rev. 14 , 238–257 (2010).

Unkelbach, C., Koch, A., Silva, R. R. & Garcia-Marques, T. Truth by repetition: explanations and implications. Curr. Dir. Psychol. Sci. 28 , 247–253 (2019).

Begg, I. M., Anas, A. & Farinacci, S. Dissociation of processes in belief: source recollection, statement familiarity, and the illusion of truth. J. Exp. Psychol. Gen. 121 , 446–458 (1992).

Unkelbach, C. Reversing the truth effect: learning the interpretation of processing fluency in judgments of truth. J. Exp. Psychol. Learn. Memory Cogn. 33 , 219–230 (2007).

Wang, W. C., Brashier, N. M., Wing, E. A., Marsh, E. J. & Cabeza, R. On known unknowns: fluency and the neural mechanisms of illusory truth. J. Cognit. Neurosci. 28 , 739–746 (2016).

Unkelbach, C. & Rom, S. C. A referential theory of the repetition-induced truth effect. Cognition 160 , 110–126 (2017).

Pennycook, G., Cannon, T. D. & Rand, D. G. Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen. 147 , 1865–1880 (2018).

Unkelbach, C. & Speckmann, F. Mere repetition increases belief in factually true COVID-19-related information. J. Appl. Res. Mem. Cogn. 10 , 241–247 (2021).

Nadarevic, L., Reber, R., Helmecke, A. J. & Köse, D. Perceived truth of statements and simulated social media postings: an experimental investigation of source credibility, repeated exposure, and presentation format. Cognit. Res. Princ. Implic. 5 , 56 (2020).

Fazio, L. K., Rand, D. G. & Pennycook, G. Repetition increases perceived truth equally for plausible and implausible statements. Psychonomic Bull. Rev. 26 , 1705–1710 (2019).

Brown, A. S. & Nix, L. A. Turning lies into truths: referential validation of falsehoods. J. Exp. Psychol. Learn. Memory Cogn. 22 , 1088–1100 (1996).

De keersmaecker, J. et al. Investigating the robustness of the illusory truth effect across individual differences in cognitive ability, need for cognitive closure, and cognitive style. Pers. Soc. Psychol. Bull. 46 , 204–215 (2020).

Unkelbach, C. & Greifeneder, R. Experiential fluency and declarative advice jointly inform judgments of truth. J. Exp. Soc. Psychol. 79 , 78–86 (2018).

Fazio, L. K. Repetition increases perceived truth even for known falsehoods. Collabra Psychol. 6 , 38 (2020).

Pennycook, G. & Rand, D. G. The psychology of fake news. Trends Cognit. Sci. 25 , 388–402 (2021).

Murphy, G., Loftus, E. F., Grady, R. H., Levine, L. J. & Greene, C. M. False memories for fake news during Ireland’s abortion referendum. Psychol. Sci. 30 , 1449–1459 (2019).

Pennycook, G. & Rand, D. G. Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188 , 39–50 (2019).

Stanley, M. L., Barr, N., Peters, K. & Seli, P. Analytic-thinking predicts hoax beliefs and helping behaviors in response to the COVID-19 pandemic. Think. Reas. 27 , 464–477 (2020).

Bago, B., Rand, D. G. & Pennycook, G. Fake news, fast and slow: deliberation reduces belief in false (but not true) news headlines. J. Exp. Psychol. Gen. 149 , 1608–1613 (2020).

Brashier, N. M., Eliseev, E. D. & Marsh, E. J. An initial accuracy focus prevents illusory truth. Cognition 194 , 104054 (2020).

Briñol, P. & Petty, R. E. Source factors in persuasion: a self-validation approach. Eur. Rev. Soc. Psychol. 20 , 49–96 (2009).

Mackie, D. M., Worth, L. T. & Asuncion, A. G. Processing of persuasive in-group messages. J. Pers. Soc. Psychol. 58 , 812–822 (1990).

Mahmoodi, A. et al. Equality bias impairs collective decision-making across cultures. Proc. Natl Acad. Sci. USA 112 , 3835–3840 (2015).

Marks, G. & Miller, N. Ten years of research on the false-consensus effect: an empirical and theoretical review. Psychol. Bull. 102 , 72–90 (1987).

Brulle, R. J., Carmichael, J. & Jenkins, J. C. Shifting public opinion on climate change: an empirical assessment of factors influencing concern over climate change in the U.S. 2002–2010. Clim. Change 114 , 169–188 (2012).

Lachapelle, E., Montpetit, É. & Gauvin, J.-P. Public perceptions of expert credibility on policy issues: the role of expert framing and political worldviews. Policy Stud. J. 42 , 674–697 (2014).

Dada, S., Ashworth, H. C., Bewa, M. J. & Dhatt, R. Words matter: political and gender analysis of speeches made by heads of government during the COVID-19 pandemic. BMJ Glob. Health 6 , e003910 (2021).

Chung, M. & Jones-Jang, S. M. Red media, blue media, Trump briefings, and COVID-19: examining how information sources predict risk preventive behaviors via threat and efficacy. Health Commun. https://doi.org/10.1080/10410236.2021.1914386 (2021).

Mitchell, K. J. & Johnson, M. K. Source monitoring 15 years later: what have we learned from fMRI about the neural mechanisms of source memory? Psychol. Bull. 135 , 638–677 (2009).

Dias, N., Pennycook, G. & Rand, D. G. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-001 (2020).

Pennycook, G. & Rand, D. G. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc. Natl Acad. Sci. USA 116 , 2521–2526 (2019).

Altay, S., Hacquin, A.-S. & Mercier, H. Why do so few people share fake news? It hurts their reputation. N. Media Soc. https://doi.org/10.1177/1461444820969893 (2020).

Rahhal, T. A., May, C. P. & Hasher, L. Truth and character: sources that older adults can remember. Psychol. Sci. 13 , 101–105 (2002).

Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B. & Lazer, D. Fake news on Twitter during the 2016 U.S. presidential election. Science 363 , 374–378 (2019).

Stanford University Center for an Informed Public, Digital Forensic Research Lab, Graphika, & Stanford Internet Observatory. The long fuse: misinformation and the 2020 election. Stanford Digital Repository https://purl.stanford.edu/tr171zs0069 (2021).

Jones, M. O. Disinformation superspreaders: the weaponisation of COVID-19 fake news in the Persian Gulf and beyond. Glob. Discourse 10 , 431–437 (2020).

Tannenbaum, M. B. et al. Appealing to fear: a meta-analysis of fear appeal effectiveness and theories. Psychol. Bull. 141 , 1178–1204 (2015).

Altay, S. & Mercier, H. Happy thoughts: the role of communion in accepting and sharing epistemically suspect beliefs. psyarxiv https://psyarxiv.com/3s4nr/ (2020).

Rocklage, M. D., Rucker, D. D. & Nordgren, L. F. Persuasion, emotion, and language: the intent to persuade transforms language via emotionality. Psychol. Sci. 29 , 749–760 (2018).

Chou, W.-Y. S. & Budenz, A. Considering emotion in COVID-19 vaccine communication: addressing vaccine hesitancy and fostering vaccine confidence. Health Commun. 35 , 1718–1722 (2020).

Baum, J. & Abdel Rahman, R. Emotional news affects social judgments independent of perceived media credibility. Soc. Cognit. Affect. Neurosci. 16 , 280–291 (2021).

Kim, H., Park, K. & Schwarz, N. Will this trip really be exciting? The role of incidental emotions in product evaluation. J. Consum. Res. 36 , 983–991 (2010).

Forgas, J. P. Happy believers and sad skeptics? Affective influences on gullibility. Curr. Dir. Psychol. Sci. 28 , 306–313 (2019).

Martel, C., Pennycook, G. & Rand, D. G. Reliance on emotion promotes belief in fake news. Cognit. Res. Princ. Implic. 5 , 47 (2020).

Forgas, J. P. & East, R. On being happy and gullible: mood effects on skepticism and the detection of deception. J. Exp. Soc. Psychol. 44 , 1362–1367 (2008).

Koch, A. S. & Forgas, J. P. Feeling good and feeling truth: the interactive effects of mood and processing fluency on truth judgments. J. Exp. Soc. Psychol. 48 , 481–485 (2012).

Forgas, J. P. Don’t worry be sad! On the cognitive, motivational, and interpersonal benefits of negative mood. Curr. Dir. Psychol. Sci. 22 , 225–232 (2013).

Weeks, B. E. Emotions, partisanship, and misperceptions: how anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. J. Commun. 65 , 699–719 (2015).

Han, J., Cha, M. & Lee, W. Anger contributes to the spread of COVID-19 misinformation. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-39 (2020).

Graeupner, D. & Coman, A. The dark side of meaning-making: how social exclusion leads to superstitious thinking. J. Exp. Soc. Psychol. 69 , 218–222 (2017).

Poon, K.-T., Chen, Z. & Wong, W.-Y. Beliefs in conspiracy theories following ostracism. Pers. Soc. Psychol. Bull. 46 , 1234–1246 (2020).

Johnson, H. M. & Seifert, C. M. Sources of the continued influence effect: when misinformation in memory affects later inferences. J. Exp. Psychol. Lear. Memory Cogn. 20 , 1420–1436 (1994).

Chan, M.-P. S., Jones, C. R., Jamieson, K. H. & Albarracín, D. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. 28 , 1531–1546 (2017).

Walter, N. & Murphy, S. T. How to unring the bell: a meta-analytic approach to correction of misinformation. Commun. Monogr. 85 , 423–441 (2018).

Walter, N. & Tukachinsky, R. A meta-analytic examination of the continued influence of misinformation in the face of correction: how powerful is it, why does it happen, and how to stop it? Commun. Res. 47 , 155–177 (2020).

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N. & Cook, J. Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest. 13 , 106–131 (2012).

Barrera, O., Guriev, S., Henry, E. & Zhuravskaya, E. Facts, alternative facts, and fact checking in times of post-truth politics. J. Public. Econ. 182 , 104123 (2020).

Swire, B., Berinsky, A. J., Lewandowsky, S. & Ecker, U. K. H. Processing political misinformation: comprehending the Trump phenomenon. R. Soc. Open. Sci. 4 , 160802 (2017).

Swire, B., Ecker, U. K. H. & Lewandowsky, S. The role of familiarity in correcting inaccurate information. J. Exp. Psychol. Learn. Memory Cogn. 43 , 1948–1961 (2017).

Hamby, A., Ecker, U. K. H. & Brinberg, D. How stories in memory perpetuate the continued influence of false information. J. Consum. Psychol. 30 , 240–259 (2019).

MacFarlane, D., Tay, L. Q., Hurlstone, M. J. & Ecker, U. K. H. Refuting spurious COVID-19 treatment claims reduces demand and misinformation sharing. J. Appl. Res. Mem. Cogn. 10 , 248–258 (2021).

Tay, L. Q., Hurlstone, M. J., Kurz, T. & Ecker, U. K. H. A comparison of prebunking and debunking interventions for implied versus explicit misinformation. Br. J. Psychol. (in the press).

Nyhan, B., Reifler, J., Richey, S. & Freed, G. L. Effective messages in vaccine promotion: a randomized trial. Pediatrics 133 , e835–e842 (2014).

Poland, G. A. & Spier, R. Fear, misinformation, and innumerates: how the Wakefield paper, the press, and advocacy groups damaged the public health. Vaccine 28 , 2361–2362 (2010).

Lewandowsky, S., Stritzke, W. G. K., Oberauer, K. & Morales, M. Memory for fact, fiction, and misinformation. Psychol. Sci. 16 , 190–195 (2005).

Ecker, U. K. H., Lewandowsky, S. & Tang, D. T. W. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem. Cogn. 38 , 1087–1100 (2010).

Kendeou, P., Walsh, E. K., Smith, E. R. & O’Brien, E. J. Knowledge revision processes in refutation texts. Discourse Process. 51 , 374–397 (2014).

Shtulman, A. & Valcarcel, J. Scientific knowledge suppresses but does not supplant earlier intuitions. Cognition 124 , 209–215 (2012).

Kendeou, P., Butterfuss, R., Kim, J. & Boekel, M. V. Knowledge revision through the lenses of the three-pronged approach. Mem. Cogn. 47 , 33–46 (2019).

Ithisuphalap, J., Rich, P. R. & Zaragoza, M. S. Does evaluating belief prior to its retraction influence the efficacy of later corrections? Memory 28 , 617–631 (2020).

Ecker, U. K. H., Hogan, J. L. & Lewandowsky, S. Reminders and repetition of misinformation: helping or hindering its retraction? J. Appl. Res. Mem. Cogn. 6 , 185–192 (2017).

Brydges, C. R., Gignac, G. E. & Ecker, U. K. H. Working memory capacity, short-term memory capacity, and the continued influence effect: a latent-variable analysis. Intelligence 69 , 117–122 (2018).

Sanderson, J. A., Gignac, G. E. & Ecker, U. K. H. Working memory capacity, removal efficiency and event specific memory as predictors of misinformation reliance. J. Cognit. Psychol. 33 , 518–532 (2021).

Ecker, U. K. H., Lewandowsky, S., Swire, B. & Chang, D. Correcting false information in memory: manipulating the strength of misinformation encoding and its retraction. Psychon. Bull. Rev. 18 , 570–578 (2011).

Yonelinas, A. P. The nature of recollection and familiarity: a review of 30 years of research. J. Mem. Lang. 46 , 441–517 (2002).

Butterfuss, R. & Kendeou, P. Reducing interference from misconceptions: the role of inhibition in knowledge revision. J. Educ. Psychol. 112 , 782–794 (2020).

Brydges, C. R., Gordon, A. & Ecker, U. K. H. Electrophysiological correlates of the continued influence effect of misinformation: an exploratory study. J. Cognit. Psychol. 32 , 771–784 (2020).

Gordon, A., Quadflieg, S., Brooks, J. C. W., Ecker, U. K. H. & Lewandowsky, S. Keeping track of ‘alternative facts’: the neural correlates of processing misinformation corrections. NeuroImage 193 , 46–56 (2019).

Ecker, U. K. H., O’Reilly, Z., Reid, J. S. & Chang, E. P. The effectiveness of short-format refutational fact-checks. Br. J. Psychol. 111 , 36–54 (2020).

van der Meer, T. G. L. A. & Jin, Y. Seeking formula for misinformation treatment in public health crises: the effects of corrective information type and source. Health Commun. 35 , 560–575 (2020).

Wintersieck, A., Fridkin, K. & Kenney, P. The message matters: the influence of fact-checking on evaluations of political messages. J. Political Mark. 20 , 93–120 (2021).

Amazeen, M. & Krishna, A. Correcting vaccine misinformation: recognition and effects of source type on misinformation via perceived motivations and credibility. SSRN https://doi.org/10.2139/ssrn.3698102 (2020).

Vraga, E. K. & Bode, L. I do not believe you: how providing a source corrects health misperceptions across social media platforms. Inf. Commun. Soc. 21 , 1337–1353 (2018).

Ecker, U. K. H. & Antonio, L. M. Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. Mem. Cogn. 49 , 631–644 (2021).

Guillory, J. J. & Geraci, L. Correcting erroneous inferences in memory: the role of source credibility. J. Appl. Res. Mem. Cogn. 2 , 201–209 (2013).

Vraga, E. K. & Bode, L. Using expert sources to correct health misinformation in social media. Sci. Commun. 39 , 621–645 (2017).

Zhang, J., Featherstone, J. D., Calabrese, C. & Wojcieszak, M. Effects of fact-checking social media vaccine misinformation on attitudes toward vaccines. Prev. Med. 145 , 106408 (2021).

Connor Desai, S. A., Pilditch, T. D. & Madsen, J. K. The rational continued influence of misinformation. Cognition 205 , 104453 (2020).

O’Rear, A. E. & Radvansky, G. A. Failure to accept retractions: a contribution to the continued influence effect. Mem. Cogn. 48 , 127–144 (2020).

Ecker, U. K. H. & Ang, L. C. Political attitudes and the processing of misinformation corrections. Political Psychol. 40 , 241–260 (2019).

Nyhan, B. & Reifler, J. When corrections fail: the persistence of political misperceptions. Political Behav. 32 , 303–330 (2010).

Trevors, G. The roles of identity conflict, emotion, and threat in learning from refutation texts on vaccination and immigration. Discourse Process. https://doi.org/10.1080/0163853X.2021.1917950 (2021).

Prasad, M. et al. There must be a reason: Osama, Saddam, and inferred justification. Sociol. Inq. 79 , 142–162 (2009).

Amazeen, M. A., Thorson, E., Muddiman, A. & Graves, L. Correcting political and consumer misperceptions: the effectiveness and effects of rating scale versus contextual correction formats. J. Mass. Commun. Q. 95 , 28–48 (2016).

Ecker, U. K. H., Sze, B. K. N. & Andreotta, M. Corrections of political misinformation: no evidence for an effect of partisan worldview in a US convenience sample. Philos. Trans. R. Soc. B: Biol. Sci. 376 , 20200145 (2021).

Nyhan, B., Porter, E., Reifler, J. & Wood, T. J. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behav. 42 , 939–960 (2019).

Wood, T. & Porter, E. The elusive backfire effect: mass attitudes’ steadfast factual adherence. Political Behav. 41 , 135–163 (2018).

Yang, Q., Qureshi, K. & Zaman, T. Mitigating the backfire effect using pacing and leading. arxiv https://arxiv.org/abs/2008.00049 (2020).

Susmann, M. W. & Wegener, D. T. The role of discomfort in the continued influence effect of misinformation. Memory Cogn. https://doi.org/10.3758/s13421-021-01232-8 (2021).

Cobb, M. D., Nyhan, B. & Reifler, J. Beliefs don’t always persevere: how political figures are punished when positive information about them is discredited. Political Psychol. 34 , 307–326 (2013).

Thorson, E. Belief echoes: the persistent effects of corrected misinformation. Political Commun. 33 , 460–480 (2016).

Jaffé, M. E. & Greifeneder, R. Negative is true here and now but not so much there and then. Exp. Psychol. 67 , 314–326 (2020).

Ecker, U. K. H. & Rodricks, A. E. Do false allegations persist? Retracted misinformation does not continue to influence explicit person impressions. J. Appl. Res. Mem. Cogn. 9 , 587–601 (2020).

Ecker, U. K. H., Lewandowsky, S. & Apai, J. Terrorists brought down the plane! No, actually it was a technical fault: processing corrections of emotive information. Q. J. Exp. Psychol. 64 , 283–310 (2011).

Trevors, G., Bohn-Gettler, C. & Kendeou, P. The effects of experimentally induced emotions on revising common vaccine misconceptions. Q. J. Exp. Psychol. https://doi.org/10.1177/17470218211017840 (2021).

Chang, E. P., Ecker, U. K. H. & Page, A. C. Not wallowing in misery — retractions of negative misinformation are effective in depressive rumination. Cogn. Emot. 33 , 991–1005 (2019).

Sangalang, A., Ophir, Y. & Cappella, J. N. The potential for narrative correctives to combat misinformation. J. Commun. 69 , 298–319 (2019).

Featherstone, J. D. & Zhang, J. Feeling angry: the effects of vaccine misinformation and refutational messages on negative emotions and vaccination attitude. J. Health Commun. 25 , 692–702 (2020).

Brashier, N. M., Pennycook, G., Berinsky, A. J. & Rand, D. G. Timing matters when correcting fake news. Proc. Natl Acad. Sci. USA 118 , e2020043118 (2021).

Cook, J., Lewandowsky, S. & Ecker, U. K. H. Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLoS ONE 12 , e0175799 (2017).

Hughes, M. G. et al. Discrediting in a message board forum: the effects of social support and attacks on expertise and trustworthiness. J. Comput. Mediat. Commun. 19 , 325–341 (2014).

Paynter, J. et al. Evaluation of a template for countering misinformation — real-world autism treatment myth debunking. PLoS ONE 14 , e0210746 (2019).

Jolley, D. & Douglas, K. M. Prevention is better than cure: addressing anti-vaccine conspiracy theories. J. Appl. Soc. Psychol. 47 , 459–469 (2017).

Vraga, E. K., Kim, S. C., Cook, J. & Bode, L. Testing the effectiveness of correction placement and type on Instagram. Int. J. Press Politics 25 , 632–652 (2020).

Clayton, K. et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Political Behav. 42 , 1073–1095 (2019).

Dai, Y., Yu, W. & Shen, F. The effects of message order and debiasing information in misinformation correction. Int. J. Commun. 15 , 21 (2021).

Swire-Thompson, B. et al. Evidence for a limited role of correction format when debunking misinformation. OSF https://osf.io/udny9/ (2021).

Gordon, A., Ecker, U. K. H. & Lewandowsky, S. Polarity and attitude effects in the continued-influence paradigm. J. Mem. Lang. 108 , 104028 (2019).

Grady, R. H., Ditto, P. H. & Loftus, E. F. Nevertheless partisanship persisted: fake news warnings help briefly, but bias returns with time. Cogn. Res. Princ. Implic. 6 , 52 (2021).

Schmid, P., Schwarzer, M. & Betsch, C. Weight-of-evidence strategies to mitigate the influence of messages of science denialism in public discussions. J. Cogn. 3 , 36 (2020).

Compton, J., van der Linden, S., Cook, J. & Basol, M. Inoculation theory in the post-truth era: extant findings and new frontiers for contested science, misinformation, and conspiracy theories. Soc. Personal. Psychol. Compass 15 , e12602 (2021).

Lewandowsky, S. & van der Linden, S. Countering misinformation and fake news through inoculation and prebunking. Eur. Rev. Soc. Psychol. https://doi.org/10.1080/10463283.2021.1876983 (2021).

Roozenbeek, J., van der Linden, S. & Nygren, T. Prebunking interventions based on the psychological theory of inoculation can reduce susceptibility to misinformation across cultures. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016//mr-2020-008 (2020).

Maertens, R., Roozenbeek, J., Basol, M. & van der Linden, S. Long-term effectiveness of inoculation against misinformation: three longitudinal experiments. J. Exp. Psychol. Appl. 27 , 1–16 (2020).

van der Linden, S., Leiserowitz, A., Rosenthal, S. & Maibach, E. Inoculating the public against misinformation about climate change. Glob. Chall. 1 , 1600008 (2017).

Parker, K. A., Ivanov, B. & Compton, J. Inoculation’s efficacy with young adults’ risky behaviors: can inoculation confer cross-protection over related but untreated issues? Health Commun. 27 , 223–233 (2012).

Lewandowsky, S. & Yesilada, M. Inoculating against the spread of Islamophobic and radical-Islamist disinformation. Cognit. Res. Princ. Implic. 6 , 57 (2021).

Ivanov, B. et al. The general content of postinoculation talk: recalled issue-specific conversations following inoculation treatments. West. J. Commun. 79 , 218–238 (2015).

Amazeen, M. A. & Vargo, C. J. Sharing native advertising on Twitter: content analyses examining disclosure practices and their inoculating influence. Journal. Stud. 22 , 916–933 (2021).

Jones-Jang, S. M., Mortensen, T. & Liu, J. Does media literacy help identification of fake news? Information literacy helps but other literacies don’t. Am. Behav. Sci. 65 , 371–388 (2019).

Khan, M. L. & Idris, I. K. Recognise misinformation and verify before sharing: a reasoned action and information literacy perspective. Behav. Inf. Technol. 38 , 1194–1212 (2019).

Machete, P. & Turpin, M. The use of critical thinking to identify fake news: a systematic literature review. Lecture Notes Comput. Sci. 12067 , 235–246 (2020).

Vraga, E. K., Tully, M., Maksl, A., Craft, S. & Ashley, S. Theorizing news literacy behaviors. Commun. Theory 31 , 1–21 (2020).

Wineburg, S., McGrew, S., Breakstone, J. & Ortega, T. Evaluating information: the cornerstone of civic online reasoning. SDR https://purl.stanford.edu/fv751yt5934 (2016).

Breakstone, J. et al. Lateral reading: college students learn to critically evaluate internet sources in an online course. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-56 (2021).

Choy, M. & Chong, M. Seeing through misinformation: a framework for identifying fake online news. arxiv https://arxiv.org/abs/1804.03508 (2018).

Amazeen, M. A. & Bucy, E. P. Conferring resistance to digital disinformation: the inoculating influence of procedural news knowledge. J. Broadcasting Electron. Media 63 , 415–432 (2019).

Guess, A. M. et al. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proc. Natl Acad. Sci. USA 117 , 15536–15545 (2020).

Hameleers, M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Inf. Commun. Soc. https://doi.org/10.1080/1369118x.2020.1764603 (2020).

Tully, M., Vraga, E. K. & Bode, L. Designing and testing news literacy messages for social media. Mass. Commun. Soc. 23 , 22–46 (2019).

Roozenbeek, J. & van der Linden, S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 5 , 65 (2019).

Roozenbeek, J. & van der Linden, S. Breaking Harmony Square: a game that inoculates against political misinformation. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-47 (2020).

Micallef, N., Avram, M., Menczer, F. & Patil, S. Fakey. Proc. ACM Human Comput. Interact. 5 , 1–27 (2021).

Katsaounidou, A., Vrysis, L., Kotsakis, R., Dimoulas, C. & Veglis, A. MAthE the game: a serious game for education and training in news verification. Educ. Sci. 9 , 155 (2019).

Mihailidis, P. & Viotty, S. Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in post-fact society. Am. Behav. Sci. 61 , 441–454 (2017).

Carnahan, D., Bergan, D. E. & Lee, S. Do corrective effects last? Results from a longitudinal experiment on beliefs toward immigration in the U.S. Political Behav. 43 , 1227–1246 (2021).

Wintersieck, A. L. Debating the truth. Am. Politics Res. 45 , 304–331 (2017).

Mosleh, M., Martel, C., Eckles, D. & Rand, D. in Proc. 2021 CHI Conf. Human Factors Computing Systems 2688–2700 (ACM, 2021).

Swire-Thompson, B., Ecker, U. K. H., Lewandowsky, S. & Berinsky, A. J. They might be a liar but they’re my liar: source evaluation and the prevalence of misinformation. Political Psychol. 41 , 21–34 (2019).

Lewandowsky, S. et al. The Debunking Handbook 2020 (George Mason Univ., 2020)

Kendeou, P., Smith, E. R. & O’Brien, E. J. Updating during reading comprehension: why causality matters. J. Exp. Psychol. Learn. Memory Cogn. 39 , 854–865 (2013).

Schwarz, N., Newman, E. & Leach, W. Making the truth stick & the myths fade: lessons from cognitive psychology. Behav. Sci. Policy 2 , 85–95 (2016).

Van Boekel, M., Lassonde, K. A., O’Brien, E. J. & Kendeou, P. Source credibility and the processing of refutation texts. Mem. Cogn. 45 , 168–181 (2017).

Margolin, D. B., Hannak, A. & Weber, I. Political fact-checking on Twitter: when do corrections have an effect? Political Commun. 35 , 196–219 (2017).

Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N. J. & Griskevicius, V. The constructive, destructive, and reconstructive power of social norms. Psychol. Sci. 18 , 429–434 (2007).

Chinn, S., Lane, D. S. & Hart, P. S. In consensus we trust? Persuasive effects of scientific consensus communication. Public Underst. Sci. 27 , 807–823 (2018).

Lewandowsky, S., Gignac, G. E. & Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nat. Clim. Change 3 , 399–404 (2013).

van der Linden, S. L., Clarke, C. E. & Maibach, E. W. Highlighting consensus among medical scientists increases public support for vaccines: evidence from a randomized experiment. BMC Public Health 15 , 1207 (2015).

van der Linden, S., Leiserowitz, A. & Maibach, E. Scientific agreement can neutralize politicization of facts. Nat. Hum. Behav. 2 , 2–3 (2017).

Vlasceanu, M. & Coman, A. The impact of social norms on health-related belief update. Appl. Psychol. Health Well-Being https://doi.org/10.1111/aphw.12313 (2021).

Nyhan, B. & Reifler, J. The roles of information deficits and identity threat in the prevalence of misperceptions. J. Elect. Public Opin. Parties 29 , 222–244 (2018).

Danielson, R. W., Sinatra, G. M. & Kendeou, P. Augmenting the refutation text effect with analogies and graphics. Discourse Process. 53 , 392–414 (2016).

Dixon, G. N., McKeever, B. W., Holton, A. E., Clarke, C. & Eosco, G. The power of a picture: overcoming scientific misinformation by communicating weight-of-evidence information with visual exemplars. J. Commun. 65 , 639–659 (2015).

van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D. & Maibach, E. W. How to communicate the scientific consensus on climate change: plain facts, pie charts or metaphors? Clim. Change 126 , 255–262 (2014).

Steffens, M. S., Dunn, A. G., Wiley, K. E. & Leask, J. How organisations promoting vaccination respond to misinformation on social media: a qualitative investigation. BMC Public Health 19 , 1348 (2019).

Hyland-Wood, B., Gardner, J., Leask, J. & Ecker, U. K. H. Toward effective government communication strategies in the era of COVID-19. Humanit. Soc. Sci. Commun. 8 , 30 (2021).

Sherman, D. K. & Cohen, G. L. Accepting threatening information: self-affirmation and the reduction of defensive biases. Curr. Direct. Psychol. Sci. 11 , 119–123 (2002).

Carnahan, D., Hao, Q., Jiang, X. & Lee, H. Feeling fine about being wrong: the influence of self-affirmation on the effectiveness of corrective information. Hum. Commun. Res. 44 , 274–298 (2018).

Vraga, E. K. & Bode, L. Correction as a solution for health misinformation on social media. Am. J. Public Health 110 , S278–S280 (2020).

Bode, L. & Vraga, E. K. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J. Commun. 65 , 619–638 (2015).

Vraga, E. K. & Bode, L. Addressing COVID-19 misinformation on social media preemptively and responsively. Emerg. Infect. Dis. 27 , 396–403 (2021).

Vijaykumar, S. et al. How shades of truth and age affect responses to COVID-19 (mis)information: randomized survey experiment among WhatsApp users in UK and Brazil. Humanit. Soc. Sci.Commun. 8 , 88 (2021).

Bode, L. & Vraga, E. K. See something say something: correction of global health misinformation on social media. Health Commun. 33 , 1131–1140 (2017).

Pennycook, G. et al. Shifting attention to accuracy can reduce misinformation online. Nature 592 , 590–595 (2021).

Matz, S. C., Kosinski, M., Nave, G. & Stillwell, D. J. Psychological targeting as an effective approach to digital mass persuasion. Proc. Natl Acad. Sci. USA 114 , 12714–12719 (2017).

Vargo, C. J., Guo, L. & Amazeen, M. A. The agenda-setting power of fake news: a big data analysis of the online media landscape from 2014 to 2016. N. Media Soc. 20 , 2028–2049 (2018).

Allington, D., Duffy, B., Wessely, S., Dhavan, N. & Rubin, J. Health-protective behavior, social media usage and conspiracy belief during the COVID-19 public health emergency. Psychol. Med. 51 , 1763–1769 (2020).

Cook, J., Bedford, D. & Mandia, S. Raising climate literacy through addressing misinformation: case studies in agnotology-based learning. J. Geosci. Educ. 62 , 296–306 (2014).

Amazeen, M. A. News in an era of content confusion: effects of news use motivations and context on native advertising and digital news perceptions. Journal. Mass. Commun. Q. 97 , 161–187 (2020).

Lawrence, R. G. & Boydstun, A. E. What we should really be asking about media attention to Trump. Political Commun. 34 , 150–153 (2016).

Schmid, P., MacDonald, N. E., Habersaat, K. & Butler, R. Commentary to: How to respond to vocal vaccine deniers in public. Vaccine 36 , 196–198 (2018).

Shelby, A. & Ernst, K. Story and science. Hum. Vaccines Immunother. 9 , 1795–1801 (2013).

Lazić, A. & Žeželj, I. A systematic review of narrative interventions: lessons for countering anti-vaccination conspiracy theories and misinformation. Public Underst. Sci. 30 , 644–670 (2021).

Ecker, U. K. H., Butler, L. H. & Hamby, A. You don’t have to tell a story! A registered report testing the effectiveness of narrative versus non-narrative misinformation corrections. Cognit. Res. Princ. Implic. 5 , 64 (2020).

Van Bavel, J. J., Reinero, D. A., Spring, V., Harris, E. A. & Duke, A. Speaking my truth: why personal experiences can bridge divides but mislead. Proc. Natl Acad. Sci. USA 118 , e2100280118 (2021).

Merpert, A., Furman, M., Anauati, M. V., Zommer, L. & Taylor, I. Is that even checkable? An experimental study in identifying checkable statements in political discourse. Commun. Res. Rep. 35 , 48–57 (2017).

Amazeen, M. A. & Wojdynski, B. W. Reducing native advertising deception: revisiting the antecedents and consequences of persuasion knowledge in digital news contexts. Mass. Commun. Soc. 22 , 222–247 (2019).

Peacock, C., Masullo, G. M. & Stroud, N. J. What’s in a label? The effect of news labels on perceived credibility. Journalism https://doi.org/10.1177/1464884920971522 (2020).

Ecker, U. K. H., Lewandowsky, S. & Chadwick, M. Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect. Cognit. Res. Princ. Implic. 5 , 41 (2020).

McCright, A. M. & Dunlap, R. E. Combatting misinformation requires recognizing its types and the factors that facilitate its spread and resonance. J. Appl. Res. Mem. Cogn. 6 , 389–396 (2017).

Oreskes, N. & Conway, E. M. Defeating the merchants of doubt. Nature 465 , 686–687 (2010).

Golovchenko, Y., Hartmann, M. & Adler-Nissen, R. State media and civil society in the information warfare over Ukraine: citizen curators of digital disinformation. Int. Aff. 94 , 975–994 (2018).

Tandoc, E. C., Lim, Z. W. & Ling, R. Defining fake news. Digit. Journal. 6 , 137–153 (2017).

Mosleh, M., Pennycook, G., Arechar, A. A. & Rand, D. G. Cognitive reflection correlates with behavior on Twitter. Nat. Commun. 12 , 921 (2021).

Scheufele, D. A. & Krause, N. M. Science audiences, misinformation, and fake news. Proc. Natl Acad. Sci. USA 116 , 7662–7669 (2019).

Yesilada, M. & Lewandowsky, S. A systematic review: the YouTube recommender system and pathways to problematic content. psyarxiv https://psyarxiv.com/6pv5c/ (2021).

Bursztyn, L., Rao, A., Roth, C. & Yanagizawa-Drott, D. Misinformation during a pandemic. NBER https://www.nber.org/papers/w27417 (2020).

Simonov, A., Sacher, S., Dubé, J.-P. & Biswas, S. The persuasive effect of Fox News: non-compliance with social distancing during the COVID-19 pandemic. NBER https://www.nber.org/papers/w27237 (2020).

Bechmann, A. Tackling disinformation and infodemics demands media policy changes. Digit. Journal. 8 , 855–863 (2020).

Marsden, C., Meyer, T. & Brown, I. Platform values and democratic elections: how can the law regulate digital disinformation? Comput. Law Security Rev. 36 , 105373 (2020).

Saurwein, F. & Spencer-Smith, C. Combating disinformation on social media: multilevel governance and distributed accountability in Europe. Digit. Journal. 8 , 820–841 (2020).

Tenove, C. Protecting democracy from disinformation: normative threats and policy responses. Int. J. Press Politics 25 , 517–537 (2020).

Reisach, U. The responsibility of social media in times of societal and political manipulation. Eur. J. Oper. Res. 291 , 906–917 (2021).

Lewandowsky, S. et al. Technology and democracy: understanding the influence of online technologies on political behaviour and decision-making. Publ. Office Eur. Union https://doi.org/10.2760/593478 (2020).

Blasio, E. D. & Selva, D. Who is responsible for disinformation? European approaches to social platforms’ accountability in the post-truth era. Am. Behav. Scientist 65 , 825–846 (2021).

Pickard, V. Restructuring democratic infrastructures: a policy approach to the journalism crisis. Digit. J. 8 , 704–719 (2020).

Barzilai, S. & Chinn, C. A. A review of educational responses to the post-truth condition: four lenses on post-truth problems. Educ. Psychol. 55 , 107–119 (2020).

Lee, N. M. Fake news, phishing, and fraud: a call for research on digital media literacy education beyond the classroom. Commun. Educ. 67 , 460–466 (2018).

Sinatra, G. M. & Lombardi, D. Evaluating sources of scientific evidence and claims in the post-truth era may require reappraising plausibility judgments. Educ. Psychol. 55 , 120–131 (2020).

Vraga, E. K. & Bode, L. Leveraging institutions, educators, and networks to correct misinformation: a commentary on Lewandowsky, Ecker, and Cook. J. Appl. Res. Mem. Cogn. 6 , 382–388 (2017).

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R. & Hertwig, R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nat. Hum. Behav. 4 , 1102–1109 (2020).

Tsipursky, G., Votta, F. & Mulick, J. A. A psychological approach to promoting truth in politics: the pro-truth pledge. J. Soc. Political Psychol. 6 , 271–290 (2018).

Bak-Coleman, J. B. et al. Combining interventions to reduce the spread of viral misinformation. OSF https://osf.io/preprints/socarxiv/4jtvm/ (2021).

Ognyanova, K., Lazer, D., Robertson, R. E. & Wilson, C. Misinformation in action: fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-024 (2020).

Swire-Thompson, B. & Lazer, D. Public health and online misinformation: challenges and recommendations. Annu. Rev. Public Health 41 , 433–451 (2020).

Boele-Woelki, K., Francisco, J. S., Hahn, U. & Herz, J. How we can rebuild trust in science and why we must. Angew. Chem. Int. Ed. 57 , 13696–13697 (2018).

Klein, O. et al. A practical guide for transparency in psychological science. Collabra Psychol. 4 , 20 (2018).

Masullo, G. M., Curry, A. L., Whipple, K. N. & Murray, C. The story behind the story: examining transparency about the journalistic process and news outlet credibility. Journal. Pract. https://doi.org/10.1080/17512786.2020.1870529 (2021).

Amazeen, M. A. Checking the fact-checkers in 2008: predicting political ad scrutiny and assessing consistency. J. Political Mark. 15 , 433–464 (2014).

Hahl, O., Kim, M. & Sivan, E. W. Z. The authentic appeal of the lying demagogue: proclaiming the deeper truth about political illegitimacy. Am. Sociol. Rev. 83 , 1–33 (2018).

Jaiswal, J., LoSchiavo, C. & Perlman, D. C. Disinformation, misinformation and inequality-driven mistrust in the time of COVID-19: lessons unlearned from AIDS denialism. AIDS Behav. 24 , 2776–2780 (2020).

Cheon, B. K., Melani, I. & Hong, Y. How USA-centric is psychology? An archival study of implicit assumptions of generalizability of findings to human nature based on origins of study samples. Soc. Psychol. Personal. Sci. 11 , 928–937 (2020).

Swire-Thompson, B., DeGutis, J. & Lazer, D. Searching for the backfire effect: measurement and design considerations. J. Appl. Res. Mem. Cogn. 9 , 286–299 (2020).

Wang, Y., McKee, M., Torbica, A. & Stuckler, D. Systematic literature review on the spread of health-related misinformation on social media. Soc. Sci. Med. 240 , 112552 (2019).

Bastani, P. & Bahrami, M. A. COVID-19 related misinformation on social media: a qualitative study from Iran. J. Med. Internet Res. https://doi.org/10.2196/18932 (2020).

Arata, N. B., Torneo, A. R. & Contreras, A. P. Partisanship, political support, and information processing among President Rodrigo Duterte’s supporters and non-supporters. Philippine Political Sci. J. 41 , 73–105 (2020).

Islam, A. K. M. N., Laato, S., Talukder, S. & Sutinen, E. Misinformation sharing and social media fatigue during COVID-19: an affordance and cognitive load perspective. Technol. Forecast. Soc. Change 159 , 120201 (2020).

Xu, Y., Wong, R., He, S., Veldre, A. & Andrews, S. Is it smart to read on your phone? The impact of reading format and culture on the continued influence of misinformation. Mem. Cogn. 48 , 1112–1127 (2020).

Lyons, B., Mérola, V., Reifler, J. & Stoeckel, F. How politics shape views toward fact-checking: evidence from six European countries. Int. J. Press Politics 25 , 469–492 (2020).

Porter, E. & Wood, T. J. The global effectiveness of fact-checking: evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proc. Natl Acad. Sci. USA 118 , e2104235118 (2021).

Ecker, U. K. H., Lewandowsky, S., Chang, E. P. & Pillai, R. The effects of subtle misinformation in news headlines. J. Exp. Psychol. Appl. 20 , 323–335 (2014).

Powell, D., Bian, L. & Markman, E. M. When intents to educate can misinform: inadvertent paltering through violations of communicative norms. PLoS ONE 15 , e0230360 (2020).

Rich, P. R. & Zaragoza, M. S. The continued influence of implied and explicitly stated misinformation in news reports. J. Exp. Psychol. Learn. Memory Cogn. 42 , 62–74 (2016).

Shen, C. et al. Fake images: the effects of source intermediary and digital media literacy on contextual assessment of image credibility online. N. Media Soc. 21 , 438–463 (2018).

Barari, S., Lucas, C. & Munger, K. Political deepfakes are as credible as other fake media and (sometimes) real media. OSF https://osf.io/cdfh3/ (2021).

Young, D. G., Jamieson, K. H., Poulsen, S. & Goldring, A. Fact-checking effectiveness as a function of format and tone: evaluating FactCheck.org and FlackCheck.org. Journal. Mass. Commun. Q. 95 , 49–75 (2017).

Vraga, E. K., Kim, S. C. & Cook, J. Testing logic-based and humor-based corrections for science, health, and political misinformation on social media. J. Broadcasting Electron. Media 63 , 393–414 (2019).

Dunn, A. G. et al. Mapping information exposure on social media to explain differences in HPV vaccine coverage in the United States. Vaccine 35 , 3033–3040 (2017).

Marinescu, I. E., Lawlor, P. N. & Kording, K. P. Quasi-experimental causality in neuroscience and behavioural research. Nat. Hum. Behav. 2 , 891–898 (2018).

Van Bavel, J. J. et al. Political psychology in the digital (mis)information age: a model of news belief and sharing. Soc. Issues Policy Rev. 15 , 84–113 (2021).

Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D. & Rich, R. F. Misinformation and the currency of democratic citizenship. J. Politics 62 , 790–816 (2000).

Shelke, S. & Attar, V. Source detection of rumor in social network: a review. Online Soc. Netw. Media 9 , 30–42 (2019).

Brady, W. J., Gantman, A. P. & Van Bavel, J. J. Attentional capture helps explain why moral and emotional content go viral. J. Exp. Psychol. Gen. 149 , 746–756 (2020).

Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A. & Van Bavel, J. J. Emotion shapes the diffusion of moralized content in social networks. Proc. Natl Acad. Sci. USA 114 , 7313–7318 (2017).

Fazio, L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-009 (2020).

Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G. & Rand, D. G. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol. Sci. 31 , 770–780 (2020).

Pew Research Center. Many Americans Say Made-up News is a Critical Problem That Needs to be Fixed https://www.journalism.org/wp-content/uploads/sites/8/2019/06/PJ_2019.06.05_Misinformation_FINAL-1.pdf (2019).

Pew Research Center. Many Americans Believe Fake News is Sowing Confusion https://www.journalism.org/wp-content/uploads/sites/8/2016/12/PJ_2016.12.15_fake-news_FINAL.pdf (2016).

Altay, S., de Araujo, E. & Mercier, H. If this account is true, it is most enormously wonderful: interestingness-if-true and the sharing of true and false news. Digit. Journal. https://doi.org/10.1080/21670811.2021.1941163 (2021).

Brady, W. J., Crockett, M. J. & Van Bavel, J. J. The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. Perspect. Psychol. Sci. 15 , 978–1010 (2020).

Crockett, M. J. Moral outrage in the digital age. Nat. Hum. Behav. 1 , 769–771 (2017).

Petersen, M. B., Osmundsen, M. & Arceneaux, K. The “need for chaos” and motivations to share hostile political rumors. psyarxiv https://psyarxiv.com/6m4ts/ (2020).

Ecker, U. K. H., Lewandowsky, S., Jayawardana, K. & Mladenovic, A. Refutations of equivocal claims: no evidence for an ironic effect of counterargument number. J. Appl. Res. Mem. Cogn. 8 , 98–107 (2019).

Skurnik, I., Yoon, C., Park, D. C. & Schwarz, N. How warnings about false claims become recommendations. J. Consum. Res. 31 , 713–724 (2005).

Schwarz, N., Sanna, L. J., Skurnik, I. & Yoon, C. Metacognitive experiences and the intricacies of setting people straight: implications for debiasing and public information campaigns. Adv. Exp. Soc. Psychol. 39 , 127–161 (2007).

Cameron, K. A. et al. Patient knowledge and recall of health information following exposure to facts and myths message format variations. Patient Educ. Counsel. 92 , 381–387 (2013).

Wahlheim, C. N., Alexander, T. R. & Peske, C. D. Reminders of everyday misinformation statements can enhance memory for and belief in corrections of those statements in the short term. Psychol. Sci. 31 , 1325–1339 (2020).

Autry, K. S. & Duarte, S. E. Correcting the unknown: negated corrections may increase belief in misinformation. Appl. Cognit. Psychol. 35 , 960–975 (2021).

Pluviano, S., Watt, C. & Della Sala, S. Misinformation lingers in memory: failure of three pro-vaccination strategies. PLoS ONE 12 , e0181640 (2017).

Taber, C. S. & Lodge, M. Motivated skepticism in the evaluation of political beliefs. Am. J. Political. Sci. 50 , 755–769 (2006).

Nyhan, B., Reifler, J. & Ubel, P. A. The hazards of correcting myths about health care reform. Med. Care 51 , 127–132 (2013).

Hart, P. S. & Nisbet, E. C. Boomerang effects in science communication. Commun. Res. 39 , 701–723 (2011).

Swire-Thompson, B., Miklaucic, N., Wihbey, J., Lazer, D. & DeGutis, J. Backfire effects after correcting misinformation are strongly associated with reliability. J. Exp. Psychol. Gen. (in the press).

Zhou, J. Boomerangs versus javelins: how polarization constrains communication on climate change. Environ. Politics 25 , 788–811 (2016).

Acknowledgements

U.K.H.E. acknowledges support from the Australian Research Council (Future Fellowship FT190100708). S.L. acknowledges support from the Alexander von Humboldt Foundation, the Volkswagen Foundation (large grant ‘Reclaiming individual autonomy and democratic discourse online’) and the Economic and Social Research Council (ESRC) through a Knowledge Exchange Fellowship. S.L. and P.S. acknowledge support from the European Commission (Horizon 2020 grant agreement No. 964728 JITSUVAX).

Author information

Authors and Affiliations

School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
  • Ullrich K. H. Ecker

School of Psychological Science, University of Bristol, Bristol, UK
  • Stephan Lewandowsky

Climate Change Communication Research Hub, Monash University, Melbourne, Victoria, Australia

Media and Communication Science, University of Erfurt, Erfurt, Germany
  • Philipp Schmid

Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA
  • Lisa K. Fazio

Department of Psychology, Harvard University, Cambridge, MA, USA
  • Nadia Brashier

Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA

Department of Educational Psychology, University of Minnesota, Minneapolis, MN, USA
  • Panayiota Kendeou

Hubbard School of Journalism and Mass Communication, University of Minnesota, Minneapolis, MN, USA
  • Emily K. Vraga

College of Communication, Boston University, Boston, MA, USA
  • Michelle A. Amazeen

Contributions

U.K.H.E., S.L. and J.C. were lead authors. Section leads worked on individual sections with the lead authors: P.S. on ‘Introduction’; L.K.F. (with N.B.) on ‘Drivers of false beliefs’; P.K. on ‘Barriers to belief revision’; E.K.V. on ‘Interventions to combat misinformation’; M.A.A. on ‘Practical implications’. Authors are ordered in this manner. All authors commented on and revised the entire manuscript before submission. J.C. developed the figures.

Corresponding author

Correspondence to Ullrich K. H. Ecker.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information

Nature Reviews Psychology thanks M. Hornsey, M. Zaragoza and J. Zhang for their contribution to the peer review of this work.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Related links

International Fact-Checking Network: https://www.poynter.org/ifcn/

World Health Organization: https://www.who.int/news-room/fact-sheets

About this article

Cite this article

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. The psychological drivers of misinformation belief and its resistance to correction. Nat Rev Psychol 1 , 13–29 (2022). https://doi.org/10.1038/s44159-021-00006-y

Accepted: 30 September 2021

Published: 12 January 2022

Issue Date: January 2022

DOI: https://doi.org/10.1038/s44159-021-00006-y

3.7: Logical Fallacies

  • Terri Pantuso, Emilie Zickel, & Melanie Gagich
  • Texas A&M University

As previously noted, using ethos, pathos, and logos in an argument does not mean that the argument made is necessarily a good one. In academia, especially, we care a lot about making our arguments logically sound; we care about logos. We seek to create work that is rooted in rational discourse. We seek to produce our own rational discourse. We value carefully researched, methodically crafted work. Thus, to be a strong academic writer, one should seek to avoid logical fallacies, which are flaws in reasoning.

To refer to something as a fallacy means to say that it is false. Think of the concept of a logical fallacy as something that makes an argument problematic, open to attack, or weak. In academic discourse, logical fallacies are seen as failures – as things we want to avoid.

Thinking about fallacies can be confusing because we see them all the time: in advertising, in conversation, in political discourse. Fallacies are everywhere. But as students of rhetoric, part of our job is to spend time identifying these fallacies in both our own writing and in others’ as a way to avoid them.

When reading or listening to an argument, be cognizant of when the reasoning relies upon one of these fallacies of logic. If it does, question the source and the information presented carefully.

As you draft ideas for your own arguments, test each of your reasons/claims against these definitions. If you find that you have used any of these fallacies to build your argument, revise for clarity.

Select five (5) of the logical fallacies presented above and write an example for each. Then, in a brief statement, explain the nature of each fallacy you have written.

This section contains material from:

Gagich, Melanie and Emilie Zickel. “Logical Fallacies.” In A Guide to Rhetoric, Genre, and Success in First-Year Writing, by Melanie Gagich and Emilie Zickel. Cleveland: MSL Academic Endeavors. Accessed July 2019. https://pressbooks.ulib.csuohio.edu/csu-fyw-rhetoric/chapter/logical-fallacies/. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Chapter 2 Answer Key to Select Chapter Exercises

Exercise 2.1

2. The essential ingredients of critical thinking are a systematic approach, involving the evaluation or formulation of claims, based on rational standards.
3. Peer pressure is a kind of group pressure to accept a statement or act in a certain way. Specifically, such group pressure is called ‘peer pressure’ when the pressure to conform comes from one’s peers.
4. Fake news is false information masquerading as news.
9. A kind of biased thinking in which we notice certain things and ignore others, even though we should be noticing both. The remedy is to make a conscious effort to look for opposing evidence.
10. We may ignore facts that contradict our beliefs and search out facts that support them.
13. The fallacy of arguing that a claim must be true merely because a substantial number of people believe it.
19. The view that we know much less than we think we do or nothing at all.

Exercise 2.2

2. Self-interested
5. Group pressure
8. Face-saving
9. Group pressure

Exercise 2.3

2. B, C
5. D

Exercise 2.4

2. Peer pressure. Possible negative consequence: Harm to Prathamesh’s self-esteem; risk of alienating a valuable player.
6. Appeal to popularity. Possible negative consequence: Damage to the politician’s reputation as a decisive decision-maker; establishment of a precedent in which politicians cave in to pressure.

Exercise 2.5

3. Face-saving. Possible negative consequence: Public opinion of Antonio may sink even lower, and political allies may choose to distance themselves from him.
5. Face-saving. Possible negative consequences: Dishonesty, which misleads people and sets Justin up for self-deception or future dishonesty.


Open Education Sociology Dictionary

Biases, Fallacies, & Critical Thinking Information Resources

To improve your understanding of any topic, you need to identify your own biases and fallacies in arguments.

Doing so will improve your overall critical thinking skills and increase your ability to apply sociological concepts effectively.


Everyone is biased. Biases affect decisions we make every day.

We are socialized and enculturated to think in certain ways. Our place (locality) and time (temporality) determine our beliefs, norms, and values, which in turn create and sustain our biases.

Our biases are regularly manipulated by the media and governments to sway opinions.

Whilst bias cannot be eliminated, its impact can be limited. By identifying common biases, we can change ourselves and improve our understanding of the world.

There are many identified and researched cognitive biases. It is not necessary to become an expert in identifying them all.

YourBias.is has free PDF posters and flashcards to help you learn to recognize biases. Start with these biases:

  • Confirmation Bias: perhaps the most common and nefarious, but also the easiest to recognize and thus avoid
  • Dunning-Kruger Effect: important for students (and all teachers) to recognize in themselves
  • Framing Effect
  • Fundamental Attribution Error
  • In-group Bias

Additional Information

  • Mental Floss – 20 Cognitive Biases That Affect Your Decisions: mentalfloss.com
  • Psychology Today – 12 Common Biases that Affect How We Make Everyday Decisions: psychologytoday.com
  • Practical Psychology – 12 Cognitive Biases Explained: How to Think Better and More Logically: youtube.com

Removing Bias

  • The Associated Press
  • Christian Science Monitor

A fallacy is (basically) a flaw in reasoning.

Fallacies can be intentional or unintentional. Politicians and the media can use fallacies to manipulate the public. Manipulation through intentional fallacies (and biases) is easy to do and is a long-established form of propaganda and social control. Students often commit unintentional fallacies in their writing or in class discussion. It takes time and effort to learn how to identify fallacies.

Fallacies are easy to find on most bumper stickers around election time, t-shirts at political rallies, or from a relative at any family gathering.

There is an immense number of identified fallacies, but you can begin with these common ones.

YourLogicalFallacyIs.com has free PDF posters and flashcards to help you learn to recognize fallacies. Start with these fallacies:

  • Appeal to Authority
  • Appeal to Emotion
  • Black or White (also called False Dilemma)
  • Strawman: a media personality's or pundit's best friend
  • Slippery Slope: a politician's best friend
Additional Information

  • Internet Encyclopedia of Philosophy – Fallacies: iep.utm.edu
  • Medium – 12 Common Fallacies Used in Social Research: medium.com
  • Stanford Encyclopedia of Philosophy – Fallacies: plato.stanford.edu
  • Wireless Philosophy – Fallacies: Formal and Informal: youtube.com
  • The Writing Center – University of North Carolina Chapel Hill – Fallacies: writingcenter.unc.edu

Critical Thinking

  • Khan Academy – Fundamentals: Introduction to Critical Thinking: khanacademy.org
  • Macat – What is Critical Thinking?: youtube.com
  • TED-Ed – 5 Tips to Improve your Critical Thinking: youtube.com
  • University of Oxford – Critical Reasoning for Beginners: podcasts.ox.ac.uk

Citing the OESD: Please see the front page for general citation information or any definition for specific citation information.
