Develop Good Habits

5 Post Hoc Fallacy Examples (and How to Respond to This Argument)


When I was 28, I got laid off from my favorite full-time job. I left work that day in tears, thinking my professional life was over.

I felt betrayed by my colleagues, with whom I had become extremely close. My boss, a big sister of sorts, all of a sudden felt like a stranger.

I went home…and I threw away my outfit.

I think I assumed the outfit was bad luck. Everything at work had been going fine up until that point, so it must have been something I had done differently that day that brought on this fate. That morning, I had chosen to put on that outfit, and not long after, I lost my job.

As it turns out, the company had lost its largest account just days prior, and I was one of many who suffered the consequences. But in that moment, as I was changing out of my work clothes, I had committed a post hoc ergo propter hoc fallacy, more commonly known simply as a post hoc fallacy.

While this logical fallacy may sound complicated, it’s actually simple enough that we all fall victim to it at one point or another.

So in this article, we will define the post hoc fallacy and look at five examples of what it looks like in everyday life. Then we will review some strategies that you can use to respond to this type of fallacious reasoning if you come across it in your daily life.

Let’s get to it.


What Is the Post Hoc Fallacy?

The literal translation of the Latin phrase “post hoc ergo propter hoc” is, “after this, therefore because of this.” The argument attempts to assign the causality of a situation to a previously occurring event, i.e. X occurred before Y, so X caused Y. The issue here is that it makes the grave mistake of assuming correlation equals causation, and I’m sure you remember reciting in school that this is not the case.

The post hoc fallacy concludes that because one factor happened before another, the first must have caused the second. Now, there are a lot of cases in which this is true: you exercise, therefore you sweat; it’s nighttime, therefore you feel sleepy; you come to a red light, therefore you stop. This basically defines the habit loop that we know so well. But when two events only have time as a common factor, the word “therefore” becomes incorrect.

When two things are correlated, there is some sort of link between them. However, causation means that one event caused the next to happen. As humans, we feel a strong motive to make sense of our surroundings and experiences, and assigning cause to things that happen in sequence feels logical. Doing so allows us to link facts with each other, which makes it easier for our brains to understand and accept them.

However, we live in a complex world, and what is easy to understand and accept isn’t always accurate.
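The difference between correlation and causation is easiest to see with numbers. Here is a minimal sketch in Python (the “heat,” ice cream, and drowning variables are hypothetical illustrations, not data from this article) showing how a hidden common cause produces a strong correlation between two things that do not cause each other:

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical common cause: summer "heat" drives both ice cream sales
# and drowning incidents; neither one causes the other.
n = 10_000
heat = [random.gauss(0, 1) for _ in range(n)]
ice_cream_sales = [h + random.gauss(0, 0.5) for h in heat]
drownings = [h + random.gauss(0, 0.5) for h in heat]

print(f"correlation: {pearson(ice_cream_sales, drownings):.2f}")
```

The two series come out strongly correlated, yet banning ice cream would do nothing to drownings; only the shared cause links them. That gap between “moves together” and “causes” is exactly what the post hoc fallacy ignores.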

Let’s look at how this might play out.

5 Post Hoc Fallacy Examples

1. In Medicine

Many post hoc fallacies have cropped up in the search for the causes of and cures for diseases. For example, when searching for the cause of malaria, people once observed that those who went out at night tended to develop the disease. This led them to believe that the night air caused malaria, and many went to great lengths to shut the “bad air” out of their homes at night to avoid becoming infected.

It was later determined that mosquitoes carried this disease, which made sense because the mosquitoes came out at night, so the people they were infecting were also those who were out at night. But prior to making this connection, the cause of malaria was attributed to a factor that was based solely on a temporal ordering of events.

2. Sports Superstitions

There are a lot of rituals and superstitions in sports that people believe will help them win. Take baseball for example. Many players perform elaborate rituals when they step up to play because they believe they’re good luck, and as long as they perform their ritual, they will succeed at making their play.

Some sports players also carry a lucky charm of sorts with them while they’re playing. It has been said that Michael Jordan wore his UNC basketball shorts under his uniform during every NBA game. But would his basketball skills disappear one day if he didn’t have his lucky shorts? In a game with so many variables at play, holding onto a sense of control through a ritual can make players feel empowered. So while these things may not actually impact how they play, sports players hold onto the belief that they give them some sort of leg up in the competition.

3. COVID-19 and Strokes

Some have claimed that having the coronavirus can also cause people to have a stroke. While contracting COVID-19 and having a stroke may share some common factors, when you look at the bigger picture, you can see that one does not necessarily cause the other. Most people who recover from COVID-19 do not suffer a stroke, and they return to their normal health within a few weeks.

However, because more people, and more younger people, reported having strokes over the past year, a causal relationship was assigned. But just because these two health complications happen under the same conditions doesn’t mean that one causes the other. Other variables link the coronavirus with having a stroke, which is why the relationship is better described as correlation than as cause and effect.

4. Crime Rates

In 2005 and 2006, iPods were blamed for rising crime rates in New York City and nationwide. While these little devices may have been a hot commodity back then, there is no telling every factor that led to this increase in crime. Some point to the fact that people with iPods are often distracted, looking down and fiddling with their devices while their hearing is impaired by whatever is playing through their earbuds, which makes them easier targets for theft.


The economy has also been named the sole factor in determining crime rates. Many believe this makes sense: crime rates should drop when the economy is doing well and rise when jobs are few and far between. But there’s actually not a lot of evidence to prove that the economy has an impact on crime, and a lot of anecdotal evidence points against this claim.

5. GMOs 

With an increasing population and a decreasing amount of fertile land, scientists have turned to genetically modifying various plants and animals to yield more usable food that can resist disease, withstand droughts, and generally feed more people. And while most people have been eating foods with GMOs in them on a regular basis for years now, some point to this as the reason for rising rates of cancer.

Now, it’s true that there are some correlations between rising cancer rates and the use of GMOs; however, it cannot be claimed that GMOs are the sole reason that people are developing cancer at higher rates.

I will say, as in some of these other cases, the observation of a correlation may be a starting point for further investigation. But you can’t assume one event leads to the next because of the time sequence.

Let’s look at how you can respond to this argument if you come across it in everyday life.

How to Respond to the Post Hoc Fallacy

As with other logical fallacies, the best way to respond to the post hoc fallacy is with evidence or facts. If you’re the one making the argument, back up whatever you have to say with evidence aside from the temporal order of events. Observing that one event frequently precedes another can suggest a relationship, but you have to dig deeper to explore the relationship between the two events and any other factors surrounding them if you want to claim that one causes the other.

So, if you are claiming that A causes B, you need more information about how A caused B than just a timeline.

If you’re responding to this fallacy, point out that the two events share only timing, and ask for further proof that there is a relationship between them. To counter the argument, identify its flaw in reasoning and then explain why the logic is flawed. If you don’t have an opposing argument, present the speaker with facts from this website, which shows chance correlations that are obviously not due to cause and effect. This can help prove that correlation does not equal causation, and therefore that the post hoc conclusion doesn’t hold.

Most importantly, pay attention to any tendency you may have to assume one event caused the next. Look at the relationship between the two events and try to decipher what caused each of them. Question your assumptions, be open to accepting other explanations, and be willing to change your mind.

Final Thoughts on the Post Hoc Fallacy

Logical fallacies such as the post hoc fallacy pollute sound reasoning all the time. Because of this, it’s important to know about logical fallacies and how you can respond to one, should it come up.

In this article, we learned that you can’t look at a sequence of events alone to determine a cause. After reading the examples, you may have been able to relate to one from your personal life. Next time you come across this form of reasoning, address it head on and challenge the speaker to present solid evidence to back up their case.


Finally, if you want a simple process to counter the logical fallacies and cognitive biases you encounter in life, then follow this 7-step process to develop the critical thinking skills habit.


Connie Mathers is a professional editor and freelance writer. She holds a Bachelor's Degree in Marketing and a Master’s Degree in Social Work. When she is not writing, Connie is either spending time with her daughter and two dogs, running, or working at her full-time job as a social worker in Richmond, VA.



Social Sci LibreTexts

7.4: Fallacies


  • Jim Marteney
  • Los Angeles Valley College via ASCCC Open Educational Resources Initiative (OERI)

A fallacy is an error in reasoning. A fallacy indicates there is a problem with the logic of deductive or inductive reasoning. This differs from a factual error, which is simply being wrong about the facts. To be more specific, a fallacy is an “argument” in which the premises given for the conclusion do not provide the needed degree of support.

A fallacy is a mistake in the way that the final conclusion of the argument, or any intermediate conclusion, is logically related to its supporting premises. When there is a fallacy in an argument, the argument is said to be unsound or invalid.

The presence of a logical fallacy in an argument does not necessarily imply anything about the argument’s premises or its conclusion. Both may actually be correct, but the argument is still invalid because the conclusion does not follow from the premises using the inference principles of the argument.

Recognizing fallacies is often difficult, and indeed fallacious arguments often persuade their intended audience. Detecting and avoiding fallacious reasoning will at least prevent adoption of some erroneous conclusions.

Types of Fallacies

Fallacies are usually recognized in isolation, but woven into the context of an argument they may pass unnoticed unless the critical thinker is on guard against them. Some advocates openly use fallacies in order to exploit an unknowing audience, but many times we use fallacies unintentionally. Many fallacies exist. Here are a few of the most common ones used in everyday argumentation.

False Dilemma The False Dilemma fallacy occurs when an argument offers a false range of choices and requires that you pick one of them. Usually, the False Dilemma fallacy takes this form: Either A or B is true. If A is not true, then B is true. “Either you love me or hate me.” The range is false because there may be other, unstated choices which would only serve to undermine the original argument. If you agree to pick one of those choices, you accept the premise that those choices are indeed the only ones possible. Seeing something as “black and white” is an example of a false dilemma.

Appeal to Emotion This fallacy is committed when someone manipulates peoples’ emotions in order to get them to accept a claim. More formally, this sort of “reasoning” involves the substitution of various means of producing strong emotions in place of evidence for a claim. Here the attempt is to transfer a positive emotion you have on one thing to the object or belief that is being argued.

This sort of “reasoning” is very common in politics and it serves as the basis for a large portion of modern advertising. Most political speeches are aimed at generating feelings in people, so that these feelings will get them to vote or act a certain way. How many times will you see pictures of American flags in a political commercial? The flag and other traditional images are aimed at getting the audience emotionally involved. In the case of advertising, the commercials are aimed at evoking emotions that will influence people to buy certain products. Beer commercials frequently include people at parties to get the potential consumers excited about the product. In many cases, such speeches and commercials are notoriously free of real evidence.

Non-sequitur The phrase “non-sequitur” is Latin for “it does not follow.” If an inference is made that does not logically follow from the premises of the preceding argument, then the inference is a non-sequitur. For example, “I am wearing my lucky hat today, nothing can go wrong.” Though the term “non-sequitur” can be used broadly as an informal fallacy to describe any unwarranted conclusion, it is most often used when a statement openly contradicts itself and just makes no sense.

Slippery Slope This fallacy reduces an argument to absurdity by extending it beyond its reasonable limits. This is an abuse of causal reasoning by trying to link events that normally have very little to do with each other. For example: legalizing marijuana will lead to the legalization of cocaine. If you legalize cocaine, you’ll be able to buy crack and every other drug at your local 7-11. In this argument, it is asserted that the legalization of marijuana will eventually lead to purchasing crack at local 7-11’s. Once one accepts the legalization of marijuana, then one is assumed to be on the slippery slope towards the legalization and availability of every other drug. In a Slippery Slope argument, you suggest that a series of events will occur leading to an undesirable conclusion instead of just one step as in Causal Reasoning.

Ad Hominem Translated from Latin to English, “Ad Hominem” means “against the man” or “against the person.” An ad hominem fallacy consists of saying that someone’s argument is wrong purely because of something about the person rather than about the argument itself. You will hear people on the radio and television dismiss comments by people they label as a conservative or a liberal, just because of how they label that person. Merely insulting another person or questioning the credibility of someone does not necessarily constitute an ad hominem fallacy. For this fallacy to exist it must be clear that the purpose of the characterization is to discredit the person offering the argument, in an attempt to invite others to then discount his or her arguments.

The Ad Hominem fallacy was employed by those who wanted to silence 16-year-old Climate Change activist Greta Thunberg. Those who disagreed with her argued that she should be ignored as she is just a child.

Hasty Generalization This fallacy occurs when an arguer bases a conclusion on too few examples that are not necessarily typical of the conclusion being made. For instance, “My two boyfriends have never shown any concern for my feelings. Therefore, all men are insensitive, selfish, and emotionally uncaring.” Or, “I read about this man who got worms from eating sushi. I always knew that sushi was not good to eat.” Without more examples, these arguments can be considered fallacies.

Circular Reasoning The fallacy of circular reasoning is the assertion or repeated assertion of a conclusion, without giving reasons in its support. In other words, supporting a premise with a premise, instead of a conclusion. It may imply that the conclusion is self-evident or rephrase the conclusion to sound like a reason. Circular reasoning creates an illusion of support by simply asserting its conclusion as though it were a reason, or by reasserting the same claim in different words. For example, “Kerosene is combustible; therefore, it burns.” Or, “George Clooney is the best actor we have ever had, because he is the greatest actor of all time.”

Appeal to Ignorance In this fallacy, the arguer claims that something is true only because it hasn’t been proven false. This fallacy errs by trying to make this argument in a context in which the burden of proof falls on the arguer to show that his or her position is actually accurate, not just that it has not yet been shown false. The argument mistakes lack of evidence for evidence to the contrary. In effect, the argument says, “No one has proven it false. Therefore, it is true” (or, just as fallaciously, “No one has proven it true. Therefore, it is false”). For example, “There is no proof that hand gun legislation will reduce crime. Therefore, outlawing handguns would be a futile gesture.” Or, “We have no evidence that God doesn’t exist; therefore, God must exist.” Ignorance about something says nothing about its existence or non-existence.

Plato and a Platypus Walk into a Bar


In their book authors Thomas Cathcart and Daniel Klein illustrate logical principles and fallacies using classic jokes. For example, to illustrate the fallacy of post hoc ergo propter hoc, they use the following:

“In general, we’re deceived by post hoc ergo propter hoc because we fail to notice that there’s another cause at work.

A New York boy is being led through the swamps of Louisiana by his cousin. ‘Is it true that an alligator won’t attack you if you carry a flashlight?’ asks the city boy.

His cousin replies, ‘Depends on how fast you carry the flashlight.’

The city boy saw the flashlight as a propter when it was only a prop.” 1

Bandwagon The name “bandwagon fallacy” comes from the phrase “jump on the bandwagon” or “climb on the bandwagon” a bandwagon being a wagon big enough to hold a band of musicians. In past political campaigns, candidates would ride a bandwagon through town, and people would show support for the candidate by climbing aboard the wagon. The phrase has come to refer to joining a cause because of its popularity. For example, trying to convince you that you should do something because everyone else is doing it, is a bandwagon fallacy. "Everybody is buying a Tesla car, so should you."

Post hoc ergo propter hoc The post hoc ergo propter hoc, “after this, therefore because of this,” fallacy is based upon the mistaken notion that simply because one thing happens after another, the first event was a cause of the second event. Post hoc reasoning is the basis for many superstitions and erroneous beliefs.

For example, California earthquakes always happen after unusual weather patterns. Or, Allison always scores a goal when she wears her red and white soccer shoes. Or, I wore my Packers shirt and my Packers team won, so I now wear my Packers shirt for every game. These are all post hoc ergo propter hoc fallacies.

Appeal to Pity With this fallacy, the arguer tries to get people to agree with his or her conclusion by evoking pity and sympathy either with the situation or with the situation of some third party. By appealing to people's ability to sympathize with others, a powerful emotive force can be created. Unfortunately, however serious another person's problems are, that does not automatically make their claims any more logical. My sympathy for that situation does not create a reasonable basis for believing his or her claims. For example, "I really need this job since my grandmother is sick" or "I should receive an 'A' in this class. After all, if I don't get an 'A' I won't get the scholarship that I need." These appeals evoke emotions, but are not necessarily logical.

Straw-Man Fallacy The arguer attacks an argument that is different from, and usually weaker than, the opposition’s best argument. To distort or misrepresent an argument one is trying to refute is called the straw man fallacy. In a straw man fallacy, the opponent’s argument is distorted, misquoted, exaggerated, misrepresented, or simply made up. This makes the argument easier to defeat, and can also be used to make opponents look like ignorant extremists. The refutation may appear to be a good one to someone unfamiliar with the original argument.

Logical fallacies are errors of reasoning, errors which may be recognized and corrected by critical thinkers. Fallacies may be created unintentionally, or they may be created intentionally in order to deceive other people. The vast majority of the commonly identified fallacies involve arguments, although some involve explanations, or definitions, or other products of reasoning. Sometimes the term fallacy is used even more broadly to indicate any false belief or cause of a false belief. A fallacy is an argument that sometimes fools human reasoning, but is not logically valid.

In his book, Persuasion: Theory and Practice, Kenneth Anderson writes,

“Logical appeals are powerful forces in persuasion. However, logic alone is rarely sufficient to yield persuasion. Desires and needs of receivers affect and determine what they will accept as logical demonstration. Thus, it is possible for one person to report that he or she is convinced by the logic used while another person remains horrified at the lack of logic presented.” 2

You can have high-quality evidence but arrive at incorrect conclusions because your argument has poor reasoning. You always want to create the “soundest,” or most logical, argument possible. And you also want to examine the logic of others’ presentations to determine what fallacies might be present.

  • Cathcart, Thomas, and Daniel Klein. Plato and a Platypus Walk into a Bar. New York: Penguin Books, 2007.
  • Anderson, Kenneth. Persuasion: Theory and Practice. Boston: American Press, 1983.

[T13] The post hoc fallacy

Module: Basic statistics


Could there be a common cause explanation for the fact that the rate of television ownership in a country is correlated with its average life expectancy?

Read the following excerpts from the South China Morning Post ("True believers' luckless stars", August 7, 2000):

No matter whether you are Pisces, Virgo or Cancer, the warning for this week is the same for every astrological sign--poring over your horoscope can damage your mental health. If you take star signs too seriously or if you worry unduly about walking under a ladder, you are stunting your intelligence and making yourself depressed. Psychologists have discovered a strong link between belief in superstition and poor exam performance. They also found that those students who took account of black cats and Friday the 13th were more likely to be neurotic, depressed and have a lower IQ than their more sceptical counterparts.

Is this an example of the post hoc fallacy? If so, what might explain the observed correlations?


© 2004-2024 Joe Lau & Jonathan Chan

Post Hoc Ergo Propter Hoc

We learned in Lesson 14 that to make a strong causal argument you need the cause to precede the effect. In other words, if cause A produces result B, A had to occur before B. However, precedence is not the only factor in determining cause. Just because one event precedes another does not mean that it caused it. When you wrongly make that assumption, you commit the fallacy known as post hoc, ergo propter hoc.

This fallacy, like the chicken and egg, has to do with cause and effect. Often called post hoc, it means in Latin, "after this, therefore because of this," and occurs when an assumption is made that, because one event precedes another, the first event must have caused the later one. The fallacy, sometimes referred to as false cause, looks like this:

1. Event A precedes event B.

2. Therefore, event A caused event B.

To make a strong causal argument, you must account for all relevant details. For example, every time Ahmed tries to open a video program on his computer, it crashes. He concludes that the program is causing the computer to crash. However, computers are complex machines, and there could be many other causes for the crashes. The fact that opening this one program always precedes a crash makes the program a good candidate for the cause, but it cannot be maintained as the one and only cause until a stronger link is made. To avoid the post hoc fallacy, he would need to show that all of the many other possible causes of the crashing have been evaluated and proven to be irrelevant.

Superstitions are another example of post hoc fallacies. Some superstitions are widely held, such as "if you break a mirror, you will have seven years of bad luck." Others are more personal, such as the wearing of a lucky article of clothing. However, all of them are post hoc fallacies because they do not account for the many other possible causes of the effect. Bad luck could happen to someone who breaks a mirror, but bad things also happen to those who do not. The superstition does not account for why the breaking of the mirror causes something bad to happen to the person who broke it. In these cases of superstitions, the real cause is usually coincidence.

How can you strengthen an argument and keep it from becoming an example of the post hoc fallacy? First, show that the effect would not occur if the cause did not occur. For example, if I don't strike the match, it will not catch on fire. Second, be certain there is no other cause that could result in the effect. Are there any sources of flame near the match? Do matches spontaneously catch fire? Is there anything else that could cause it to catch fire? If the answer is no, then there is no post hoc fallacy. Each of the following statements, by contrast, commits the fallacy:

■ I took three Echinacea tablets every day when my cold started. Within a week, my cold was gone, thanks to the Echinacea.

■ I wanted to do well on the test, so I used my lucky pen. It worked again! I got an A.

■ Last night I had a dream that there was a car accident in my town. When I read the paper this morning, I found out a car accident did happen last night. My dreams predict the future.
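The counterfactual check described above (would the effect still happen without the suspected cause?) can be sketched as a toy simulation. This is a hypothetical illustration, not code from the lesson: crashes here are driven by low memory, so the crash rate is the same whether or not the suspect program is open.

```python
import random

random.seed(1)

def session(program_open: bool) -> bool:
    """One simulated computer session; returns True if it crashed."""
    low_memory = random.random() < 0.3   # the real cause, independent of the program
    # program_open is deliberately ignored: the program has no causal effect.
    return low_memory

trials = 10_000
rate_with = sum(session(True) for _ in range(trials)) / trials
rate_without = sum(session(False) for _ in range(trials)) / trials

print(f"crash rate with the program open:   {rate_with:.2f}")
print(f"crash rate with the program closed: {rate_without:.2f}")
```

Because the two rates come out essentially equal, the observation “the program opens, then the machine crashes” fails the counterfactual test; post hoc reasoning would have blamed the wrong cause.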

Readers' Questions

What happens if one event or period of time precedes another problem?
If one event precedes a problem, the problem may be influenced by or connected to the preceding event in some way, and that connection can offer insight into a possible cause-and-effect relationship. However, the mere fact that one event precedes a problem does not mean that the event caused it. Further investigation and analysis are required to determine the actual relationship and causation.

What Is a Post Hoc Logical Fallacy?


Post hoc (a shortened form of post hoc, ergo propter hoc) is a logical fallacy in which one event is said to be the cause of a later event simply because it occurred earlier. “Although two events might be consecutive,” says Madsen Pirie in “How to Win Every Argument,” “we cannot simply assume that the one would not have occurred without the other.”

Why Post Hoc Is a Fallacy

Post hoc is a fallacy because correlation does not equal causation. You cannot blame your friends for a rain delay just because every time they go with you to a ballgame it storms and play is delayed. Likewise, the fact that a pitcher bought new socks before he pitched a winning game does not mean that new socks cause a pitcher to throw faster.

The Latin expression post hoc, ergo propter hoc can be translated literally as "after this, therefore because of this." The concept can also be called faulty causation, the fallacy of false cause, arguing from succession alone, or assumed causation.

Post Hoc Examples: Medicine

The search for causes of diseases is rife with post hoc examples. Not only are medical researchers constantly seeking causes of or cures for medical maladies, but patients are also on the lookout for anything—no matter how unlikely—that might help to alleviate their symptoms. In some cases, there is also a desire to find a cause outside of genetics or luck that can be blamed for health or developmental challenges.

The long search for the cause of malaria was fraught with post hoc fallacies. "It was observed that persons who went out at night often developed the malady. So, on the best post hoc reasoning, night air was assumed to be the cause of malaria, and elaborate precautions were taken to shut it out of sleeping quarters," explained author Stuart Chase in "Guides to Straight Thinking." "Some scientists, however, were skeptical of this theory. A long series of experiments eventually proved that malaria was caused by the bite of the anopheles mosquito. Night air entered the picture only because mosquitoes preferred to attack in the dark."

During the early 2000s, the search for a cause of autism led to vaccines, though no scientific link has been found between the administration of vaccines and the onset of autism. The time that children are vaccinated and the time they're diagnosed do closely correlate, however, leading upset parents to assign blame to the immunizations, for lack of a better explanation.  

Post Hoc Variation: Inflated Causality

In the inflated causality version of post hoc, the proposed idea tries to boil down a happening to one singular cause, when in actuality, the event is more complex than that. However, the idea is not completely untrue, which is why it's called inflated rather than just completely faulty. For example, each of these explanations is incomplete:

  • Attributing the cause of World War II to only Adolf Hitler's hatred of the Jews
  • Suggesting that John F. Kennedy won the presidency over Richard Nixon exclusively because of the debate on TV
  • Believing that the cause of the Reformation was simply Martin Luther posting his theses
  • Explaining that the U.S. Civil War was fought only because of the institution of slavery

Economics is similarly complex, so it is fallacious to attribute any particular outcome, whether the latest unemployment figures or a stretch of economic growth, to just one cause or one policy.

Post Hoc Examples: Crime

In a search for reasons for increased crime, a "New York Times" article by Sewell Chan entitled "Are iPods to Blame for Rising Crime?" (September 27, 2007) looked at a report that appeared to blame iPods:

"The report suggests that 'the rise in violent offending and the explosion in the sales of iPods and other portable media devices is more than coincidental,' and asks, rather provocatively, 'Is There an iCrime Wave?' The report notes that nationally, violent crime fell every year from 1993 to 2004, before rising in 2005 and 2006, just as 'America’s streets filled with millions of people visibly wearing, and being distracted by, expensive electronic gear.' Of course, as any social scientist will tell you, correlation and causation are not the same thing."
  • Chan, Sewell. "Are iPods to Blame for Rising Crime?" The New York Times, 27 Sept. 2007, cityroom.blogs.nytimes.com/2007/09/27/are-ipods-to-blame-for-rising-crime/.
  • Chase, Stuart. Guides to Straight Thinking. Phoenix House, 1959.
  • Pirie, Madsen. How to Win Every Argument: The Use and Abuse of Logic. Continuum, 2016.

Post Hoc Reasoning in Arguments

Definition of Post Hoc Reasoning

Post hoc reasoning is a type of incorrect thinking that happens when someone believes that if one event happens after another, the first event must be the cause of the second one. For example, you might conclude that washing your car causes rain because it rains every time you wash your car. This is a sneaky mistake in logic because it seems to make sense but can lead you to the wrong conclusion.

The full Latin term, “post hoc ergo propter hoc,” helps us understand this error better. It means “after this, therefore because of this.” This kind of reasoning trips us up because humans like to find simple patterns. So, if “A” comes before “B,” it’s tempting to say “A” made “B” happen, even though that might not be true. It’s like blaming a soccer team’s loss on the new shoes someone wore that day, even if the loss had nothing to do with them.

How-To Guide: Spotting Post Hoc Reasoning

  • Look at the sequence: Did one event really cause the other just because it came first?
  • Ask for evidence: Is there real proof the first thing caused the second?
  • Search for alternatives: Could there be another reason why the second thing happened?
  • Think critically: Just because two things happen together doesn't mean one caused the other.

Types of Post Hoc Reasoning

While post hoc reasoning is already a specific kind of logical error, it shows up in various contexts and can look different in each. It also shades into related mistakes, such as believing something is lucky or unlucky simply because of what happened afterward.

Examples of Post Hoc Reasoning In Arguments

  • If a soccer team wins a match after their coach buys them new uniforms, they might think the uniforms brought them good luck and caused the win. This is an example of post hoc reasoning because the team is connecting the win to the new uniforms without considering their skills or the other team’s performance.
  • Imagine you eat a cookie and then you get a headache. You might think the cookie caused the headache, but it could just be a coincidence or maybe you were already starting to get a headache for another reason. This shows post hoc reasoning as you’re linking the cookie to your headache without enough evidence.
  • When someone takes vitamin C and their cold gets better, they might believe the vitamin C cured them. This is post hoc reasoning because colds often improve over time, and without clear evidence, you can’t be sure the vitamin C was the real cure.

Why is it Important?

Understanding post hoc reasoning stops us from making quick judgments without the facts. If we believe something caused something else just because it came first, we might make decisions based on false beliefs. This kind of thinking could lead to incorrect medical choices, wasteful spending on products that don’t work, or supporting the wrong causes. By learning to look for strong evidence and considering other possibilities, we protect ourselves from these mistakes and develop a habit of thinking properly about cause and effect in our everyday lives.

The roots of post hoc reasoning dig deep into our history of trying to make sense of the world around us. Our brains naturally look for patterns and connections, which is why this error in logic has stuck around for such a long time. Philosophers have long explored this concept when discussing the right ways to understand cause and effect.

Controversies

While post hoc reasoning itself isn’t a hot-button issue, where people get tripped up is in how much evidence is needed to support a cause-and-effect relationship. Some argue that just because an event doesn’t always lead to the same outcome, it doesn’t mean it never can. But without clear evidence, it’s just assumption, not conclusion.

Understanding Post Hoc Reasoning

  • Correlation does not imply causation: Just because two things occur together doesn’t mean one is the result of the other.
  • Common in superstitions: Superstitions, like knocking on wood for good luck, often come from seeing a pattern and wrongly thinking it’s a cause.
  • Scientists try to avoid it: Reliable scientific studies and experiments are designed specifically to prevent this type of error and find out true causes.
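The first point, that correlation does not imply causation, can even be demonstrated numerically. The sketch below (an illustration added here, not part of the original text) generates pairs of completely independent random walks and counts how often they still show a strong correlation purely by chance:

```python
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def random_walk(steps, rng):
    """A simple +/-1 random walk; returns the list of positions."""
    pos, path = 0, []
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

rng = random.Random(42)
trials = 1000
strongly_correlated = 0
for _ in range(trials):
    a = random_walk(100, rng)
    b = random_walk(100, rng)  # generated independently of a
    if abs(pearson_r(a, b)) > 0.5:
        strongly_correlated += 1

print(f"{strongly_correlated} of {trials} independent pairs had |r| > 0.5")
```

Even though neither walk has any influence on the other, a sizable fraction of pairs correlate strongly. That is exactly why "A tracks B" is never, by itself, enough to conclude "A causes B."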

Remember, when someone suggests that A caused B simply because A came first, consider other possibilities and ask for proof. This way of thinking helps you navigate the world more wisely, whether you’re making a big life decision or just figuring out what to have for lunch.

Related Topics with Explanations

Post hoc reasoning connects to other logical fallacies and important concepts that can help us sharpen our critical thinking skills. Here’s a look:

  • Correlation vs. Causation: Understanding that events occurring together don’t prove one caused the other.
  • Confirmation Bias : The tendency to only accept information that supports what you already believe, which can make post hoc reasoning more convincing.
  • Critical Thinking: The skill of evaluating arguments and evidence with an open mind, which helps to avoid falling into the post hoc reasoning trap.

Post hoc reasoning is a common mistake, but knowing about it can help us avoid jumping to false conclusions. It challenges us to ask questions and look for real causes instead of simply linking events by their order. By being aware of this fallacy and practicing good thinking habits, we can make smarter decisions and better understand the world around us—and that’s a valuable skill for anyone, at any age.

The Skeptic's Dictionary


Post Hoc Fallacy

The post hoc ergo propter hoc (after this therefore because of this) fallacy is based upon the mistaken notion that simply because one thing happens after another, the first event was a cause of the second event. Post hoc reasoning is the basis for many superstitions and erroneous beliefs.

Many events follow sequential patterns without being causally related. For example, you have a cold, so you drink fluids and two weeks later your cold goes away. You have a headache so you stand on your head and six hours later your headache goes away. You put acne medication on a pimple and three weeks later the pimple goes away. You perform some task exceptionally well after forgetting to bathe, so the next time you have to perform the same task you don't bathe. A solar eclipse occurs so you beat your drums to make the gods spit back the sun. The sun returns, proving to you the efficacy of your action.

You use your dowsing stick and then you find water. You imagine heads coming up on a coin toss and heads comes up. You rub your lucky charm and what you wish for comes true. You lose your lucky charm and you strike out six times. You have a "vision" that a body is going to be found near water or in a field and later a body is found near water or in a field. You have a dream that an airplane crashes and an airplane crashes the next day or crashed the night before.

However, sequences don't establish a probability of causality any more than correlations do. Coincidences happen. Occurring after an event is not sufficient to establish that the prior event caused the later one. To establish the probability of a causal connection between two events, controls must be established to rule out other factors such as chance or some unknown causal factor. Anecdotes aren't sufficient because they rely on intuition and subjective interpretation. A controlled study is necessary to reduce the chance of error from self-deception.
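The need for controls can be made concrete with a toy simulation. In this sketch (an illustration with made-up numbers, not part of the original entry), a "remedy" has no effect at all on how long a cold lasts, yet every treated patient still recovers after taking it; only a comparison against an untreated group reveals that the remedy does nothing:

```python
import random

rng = random.Random(7)
n = 10_000

# The "remedy" has zero effect: cold duration is drawn from the same
# distribution (5-10 days) whether or not a patient takes it.
treated = [rng.randint(5, 10) for _ in range(n)]
untreated = [rng.randint(5, 10) for _ in range(n)]

# Post hoc view: every treated patient recovered *after* taking the
# remedy, so the anecdotal "success rate" is a perfect 100%.
recovered_after_remedy = sum(1 for days in treated if days > 0) / n

# Controlled view: compare the two groups directly.
mean_treated = sum(treated) / n
mean_untreated = sum(untreated) / n

print(f"'Recovered after taking remedy': {recovered_after_remedy:.0%}")
print(f"Mean cold duration, treated:   {mean_treated:.2f} days")
print(f"Mean cold duration, untreated: {mean_untreated:.2f} days")
```

The anecdote ("I took it, and then I got better") is true for every single patient, yet the controlled comparison shows the two groups recover at the same rate. This is the gap between sequence and causation that a controlled study is designed to close.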

Post Hoc Parables

Andy's story : For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to an acupuncturist and got some relief but it didn't last. A friend recommended the alkaline diet. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried aromatherapy, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from a chiropractor. You are an idiot for criticizing chiropractors. Chiropractic was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Betty's story : For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended the alkaline diet. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried aromatherapy, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from an acupuncturist. You are an idiot for criticizing acupuncture. Acupuncture was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Chuck's story : For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended the alkaline diet. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried acupuncture, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from an aromatherapist. You are an idiot for criticizing aromatherapy. Aromatherapy was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain. 

Dora's story : For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended aromatherapy. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried acupuncture, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from the alkaline diet. You are an idiot for criticizing the alkaline diet. The alkaline diet was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Edgar's story :  For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended aromatherapy. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried acupuncture, dolphin therapy, and the alkaline diet. Still, I suffered. I finally got relief after six years from therapeutic touch. You are an idiot for criticizing therapeutic touch. Therapeutic touch was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Fiona's story :  For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended aromatherapy. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried acupuncture, therapeutic touch, and the alkaline diet. Still, I suffered. I finally got relief after six years from dolphin therapy. You are an idiot for criticizing dolphin therapy. Dolphin therapy was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Gary's story :  For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended aromatherapy. At first I thought this was the answer, but again it didn't last.  I tried acupuncture, therapeutic touch, dolphin therapy, and the alkaline diet. Still, I suffered. I finally got relief after six years from healing prayer. A friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me. Finally, I was cured. You are an idiot for criticizing healing prayer. Healing prayer was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

See also ad hoc hypothesis , confirmation bias , control study , communal reinforcement, Occam's razor , placebo effect , regressive fallacy , selective thinking , self-deception , subjective validation , testimonials, and wishful thinking.

Further Reading

Browne, M. Neil, and Stuart M. Keeley. Asking the Right Questions: A Guide to Critical Thinking (Prentice Hall, 1997).

Carroll, Robert Todd. Becoming a Critical Thinker: A Guide for the New Millennium (Boston: Pearson Custom Publishing, 2000).

Damer, T. Edward. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments, 4th edition (Wadsworth Pub Co, 2001).

Giere, Ronald. Understanding Scientific Reasoning, 4th ed. (New York: Holt, Rinehart and Winston, 1998).


Last updated 18-Nov-2015


Frequently asked questions

What is an example of the post hoc fallacy?

An example of post hoc fallacy is the following line of reasoning:

“Yesterday I had ice cream, and today I have a terrible stomachache. I’m sure the ice cream caused this.”

Although it is possible that the ice cream had something to do with the stomachache, there is no proof to justify the conclusion other than the order of events. Therefore, this line of reasoning is fallacious.

Frequently asked questions: Fallacies

The appeal to purity or no true Scotsman fallacy is an attempt to defend a generalisation about a group from a counterexample by shifting the definition of the group in the middle of the argument. In this way, one can exclude the counterexample as not being “true”, “genuine”, or “pure” enough to be considered as part of the group in question.

Suppose there is a population consisting of 90% psychologists and 10% engineers. Given that you know someone enjoyed physics at school, you may conclude that they are an engineer rather than a psychologist, even though you know that this person comes from a population consisting of far more psychologists than engineers.

When we ignore the rate of occurrence of some trait in a population (the base-rate information), we commit the base rate fallacy .
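The psychologists-and-engineers example above can be worked through with Bayes' rule. The likelihoods below are hypothetical numbers chosen purely for illustration (they do not appear in the original text); only the 90/10 base rates come from the example:

```python
# Base rates from the example: 90% psychologists, 10% engineers.
p_engineer = 0.10
p_psychologist = 0.90

# Hypothetical likelihoods (illustrative assumptions, not from the text):
p_physics_given_engineer = 0.90      # assume most engineers enjoyed physics
p_physics_given_psychologist = 0.20  # assume some psychologists did too

# Bayes' rule: P(engineer | enjoyed physics)
numerator = p_engineer * p_physics_given_engineer
evidence = numerator + p_psychologist * p_physics_given_psychologist
posterior = numerator / evidence

print(f"P(engineer | enjoyed physics) = {posterior:.3f}")
```

Even with a strong physics signal under these assumptions, the posterior probability of "engineer" comes out to only about one in three: the 90% base rate of psychologists dominates, which is exactly the information the base rate fallacy throws away.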

The cost-benefit fallacy is a common error that occurs when allocating resources in project management. It is the fallacy of assuming that cost-benefit estimates are more or less accurate, when in fact they are highly inaccurate and biased. This means that cost-benefit analyses can be useful, but only after the cost-benefit fallacy has been acknowledged and corrected for. The cost-benefit fallacy is a type of base rate fallacy .

In advertising, the fallacy of equivocation is often used to create a pun. For example, a billboard company might advertise their billboards using a line like: “Looking for a sign? This is it!” The word sign has a literal meaning as billboard and a figurative one as a sign from God, the universe, etc.

Equivocation is a fallacy because it is a form of argumentation that is both misleading and logically unsound. When the meaning of a word or phrase shifts in the course of an argument, it causes confusion and means that the conclusion (which may itself be true) does not follow from the premises.

The fallacy of equivocation is an informal logical fallacy, meaning that the error lies in the content of the argument instead of the structure.

Fallacies of relevance are a group of fallacies that occur in arguments when the premises are logically irrelevant to the conclusion. Although at first there seems to be a connection between the premise and the conclusion, in reality fallacies of relevance use unrelated forms of appeal.

For example, the genetic fallacy makes an appeal to the source or origin of the claim in an attempt to assert or refute something.

The ad hominem fallacy and the genetic fallacy are closely related in that they are both fallacies of relevance. In other words, they both involve arguments that use evidence or examples that are not logically related to the argument at hand. However, there is a difference between the two:

  • In the ad hominem fallacy , the goal is to discredit the argument by discrediting the person currently making the argument.
  • In the genetic fallacy , the goal is to discredit the argument by discrediting the history or origin (i.e., genesis) of an argument.

False dilemma fallacy is also known as false dichotomy, false binary, and “either-or” fallacy. It is the fallacy of presenting only two choices, outcomes, or sides to an argument as the only possibilities, when more are available.

The false dilemma fallacy works in two ways:

  • By presenting only two options as if these were the only ones available
  • By presenting two options as mutually exclusive (i.e., only one option can be selected or can be true at a time)

In both cases, by using the false dilemma fallacy, one conceals alternative choices and doesn’t allow others to consider the full range of options. This is usually achieved through an “either-or” construction and polarised, divisive language (“you are either a friend or an enemy”).

The best way to avoid a false dilemma fallacy is to pause and reflect on two points:

  • Are the options presented truly the only ones available ? It could be that another option has been deliberately omitted.
  • Are the options mentioned mutually exclusive ? Perhaps all of the available options can be selected (or be true) at the same time, which shows that they aren’t mutually exclusive. Proving this is called “escaping between the horns of the dilemma.”

Begging the question fallacy is an argument in which you assume what you are trying to prove. In other words, your position and the justification of that position are the same, only slightly rephrased.

For example: “All freshmen should attend college orientation, because all college students should go to such an orientation.”

The complex question fallacy and begging the question fallacy are similar in that they are both based on assumptions. However, there is a difference between them:

  • A complex question fallacy occurs when someone asks a question that presupposes the answer to another question that has not been established or accepted by the other person. For example, asking someone “Have you stopped cheating on tests?”, unless it has previously been established that the person is indeed cheating on tests, is a fallacy.
  • Begging the question fallacy occurs when we assume the very thing as a premise that we’re trying to prove in our conclusion. In other words, the conclusion is used to support the premises, and the premises prove the validity of the conclusion. For example: “God exists because the Bible says so, and the Bible is true because it is the word of God.”

In other words, begging the question is about drawing a conclusion based on an assumption, while a complex question involves asking a question that presupposes the answer to a prior question.

“ No true Scotsman ” arguments aren’t always fallacious. When there is a generally accepted definition of who or what constitutes a group, it’s reasonable to use statements in the form of “no true Scotsman”.

For example, the statement that “no true pacifist would volunteer for military service” is not fallacious, since a pacifist is, by definition, someone who opposes war or violence as a means of settling disputes.

No true Scotsman arguments are fallacious because instead of logically refuting the counterexample, they simply assert that it doesn’t count. In other words, the counterexample is rejected for psychological, but not logical, reasons.

Base rate fallacy can be avoided by following these steps:

  • Avoid making an important decision in haste. When we are under pressure, we are more likely to resort to cognitive shortcuts like the availability heuristic and the representativeness heuristic . Due to this, we are more likely to factor in only current and vivid information, and ignore the actual probability of something happening (i.e., base rate).
  • Take a long-term view on the decision or question at hand. Look for relevant statistical data, which can reveal long-term trends and give you the full picture.
  • Talk to experts like professionals. They are more aware of probabilities related to specific decisions.

To identify an appeal to authority fallacy , you can ask yourself the following questions:

  • Is the authority cited really a qualified expert in this particular area under discussion? For example, someone who has formal education or years of experience can be an expert.
  • Do experts disagree on this particular subject? If that is the case, then for almost any claim supported by one expert there will be a counterclaim that is supported by another expert. If there is no consensus, an appeal to authority is fallacious.
  • Is the authority in question biased? If you suspect that an expert’s prejudice or bias could have influenced their views, then the expert is not reliable, and an argument citing this expert will be fallacious.

Appeal to authority is a fallacy when those who use it do not provide any justification to support their argument. Instead they cite someone famous who agrees with their viewpoint, but is not qualified to make reliable claims on the subject.

The appeal to authority fallacy is often convincing because of the effect authority figures have on us. When someone cites a famous person, a well-known scientist, a politician, etc., people tend to be distracted and often fail to critically examine whether the authority figure is indeed an expert in the area under discussion.

The ad populum fallacy is common in politics. One example is the following viewpoint: “The majority of our countrymen think we should have military operations overseas; therefore, it’s the right thing to do.”

This line of reasoning is fallacious, because popular acceptance of a belief or position does not amount to a justification of that belief. In other words, following the prevailing opinion without examining the underlying reasons is irrational.

The ad populum fallacy plays on our innate desire to fit in (known as “bandwagon effect”). If many people believe something, our common sense tells us that it must be true and we tend to accept it. However, in logic, the popularity of a proposition cannot serve as evidence of its truthfulness.

Ad populum (or appeal to popularity) fallacy and appeal to authority fallacy are similar in that they both conflate the validity of a belief with its popular acceptance among a specific group. However, there is a key difference between the two:

  • An ad populum fallacy tries to persuade others by claiming that something is true or right because a lot of people think so.
  • An appeal to authority fallacy tries to persuade by claiming a group of experts believe something is true or right, therefore it must be so.

To identify a false cause fallacy , you need to carefully analyse the argument:

  • When someone claims that one event directly causes another, ask if there is sufficient evidence to establish a cause-and-effect relationship. 
  • Ask if the claim is based merely on the chronological order or co-occurrence of the two events. 
  • Consider alternative possible explanations (are there other factors at play that could influence the outcome?).

By carefully analysing the reasoning, considering alternative explanations, and examining the evidence provided, you can identify a false cause fallacy and discern whether a causal claim is valid or flawed.

False cause fallacy examples include: 

  • Believing that wearing your lucky jersey will help your team win 
  • Thinking that every time you wash your car, it rains
  • Claiming that playing video games causes violent behavior 

In each of these examples, we falsely assume that one event causes another without any proof.

The planning fallacy and procrastination are not the same thing. Although they both relate to time and task management, they describe different challenges:

  • The planning fallacy describes our inability to correctly estimate how long a future task will take, mainly due to optimism bias and a strong focus on the best-case scenario.
  • Procrastination refers to postponing a task, usually by focusing on less urgent or more enjoyable activities. This is due to psychological reasons, like fear of failure.

In other words, the planning fallacy refers to inaccurate predictions about the time we need to finish a task, while procrastination is a deliberate delay due to psychological factors.

A real-life example of the planning fallacy is the construction of the Sydney Opera House in Australia. When construction began in the late 1950s, it was initially estimated that it would be completed in four years at a cost of around $7 million.

Because the government wanted construction to start before political opposition could stop it, and while public opinion was still favorable, a number of design issues had not been carefully studied in advance. As a result, several problems appeared soon after the project commenced.

The construction process eventually stretched over 14 years, with the Opera House being completed in 1973 at a cost of over $100 million, significantly exceeding the initial estimates.

An example of appeal to pity fallacy is the following appeal by a student to their professor:

“Professor, please consider raising my grade. I had a terrible semester: my car broke down, my laptop got stolen, and my cat got sick.”

While these circumstances may be unfortunate, they are not directly related to the student’s academic performance.

While both the appeal to pity fallacy and the red herring fallacy can serve as a distraction from the original discussion topic, they are distinct fallacies. More specifically:

  • Appeal to pity fallacy attempts to evoke feelings of sympathy, pity, or guilt in an audience, so that they accept the speaker’s conclusion as truthful.
  • Red herring fallacy attempts to introduce an irrelevant piece of information that diverts the audience’s attention to a different topic.

Both fallacies can be used as a tool of deception. However, they operate differently and serve distinct purposes in arguments.

Argumentum ad misericordiam (Latin for “argument from pity or misery”) is another name for the appeal to pity fallacy. It occurs when someone evokes sympathy or guilt in an attempt to gain support for their claim, without providing any logical reasons to support the claim itself. Appeal to pity is a deceptive tactic of argumentation, playing on people’s emotions to sway their opinion.

Ad hominem tu quoque (“you too”) is an attempt to rebut a claim by attacking its proponent on the grounds that they uphold a double standard or don’t practice what they preach. For example, someone tells you that you should drive slowly or you’ll get a speeding ticket one of these days, and you reply, “But you used to get them all the time!”

The planning fallacy refers to people’s tendency to underestimate the resources needed to complete a future task, despite knowing that previous tasks have also taken longer than planned.

For example, people generally tend to underestimate the cost and time needed for construction projects. The planning fallacy occurs due to people’s tendency to overestimate the chances that positive events, such as a shortened timeline, will happen to them. This phenomenon is called optimism bias or positivity bias.

Although both red herring fallacy and straw man fallacy are logical fallacies or reasoning errors, they denote different attempts to “win” an argument. More specifically:

  • A red herring fallacy refers to an attempt to change the subject and divert attention from the original issue. In other words, a seemingly solid but ultimately irrelevant argument is introduced into the discussion, either on purpose or by mistake.
  • A straw man argument involves the deliberate distortion of another person’s argument. By oversimplifying or exaggerating it, the other party creates an easy-to-refute argument and then attacks it.

The red herring fallacy is a problem because it is flawed reasoning. It is a distraction device that causes people to become sidetracked from the main issue and draw wrong conclusions.

Although a red herring may have some kernel of truth, it is used as a distraction to keep our eyes on a different matter. As a result, it can cause us to accept and spread misleading information.

The sunk cost fallacy and escalation of commitment (or commitment bias) are two closely related terms. However, there is a slight difference between them:

  • Escalation of commitment (aka commitment bias) is the tendency to be consistent with what we have already done or said we will do in the past, especially if we did so in public. In other words, it is an attempt to save face and appear consistent.
  • Sunk cost fallacy is the tendency to stick with a decision or a plan even when it’s failing. Because we have already invested valuable time, money, or energy, quitting feels like these resources were wasted.

In other words, escalating commitment is a manifestation of the sunk cost fallacy: an irrational escalation of commitment frequently occurs when people refuse to accept that the resources they’ve already invested cannot be recovered. Instead, they insist on more spending to justify the initial investment (and the incurred losses).

When you are faced with a straw man argument , the best way to respond is to draw attention to the fallacy and ask your discussion partner to show how your original statement and their distorted version are the same. Since these are different, your partner will either have to admit that their argument is invalid or try to justify it by using more flawed reasoning, which you can then attack.

The straw man argument is a problem because it occurs when we fail to take an opposing point of view seriously. Instead, we intentionally misrepresent our opponent’s ideas and avoid genuinely engaging with them. Due to this, resorting to the straw man fallacy lowers the standard of constructive debate.

A straw man argument is a distorted (and weaker) version of another person’s argument that can easily be refuted (e.g., when a teacher proposes that the class spend more time on math exercises, a parent complains that the teacher doesn’t care about reading and writing).

This is a straw man argument because it misrepresents the teacher’s position, which didn’t mention anything about cutting down on reading and writing. The straw man argument is also known as the straw man fallacy.

A slippery slope argument is not always a fallacy.

  • When someone claims adopting a certain policy or taking a certain action will automatically lead to a series of other policies or actions also being taken, this is a slippery slope argument.
  • If they don’t show a causal connection between the advocated policy and the consequent policies, then they commit a slippery slope fallacy .

There are a number of ways you can deal with slippery slope arguments, especially when you suspect they are fallacious:

  • Slippery slope arguments take advantage of the gray area between an initial action or decision and the possible next steps that might lead to the undesirable outcome. You can point out these missing steps and ask your partner to indicate what evidence exists to support the claimed relationship between two or more events.
  • Ask yourself if each link in the chain of events or action is valid. Every proposition has to be true for the overall argument to work, so even if one link is irrational or not supported by evidence, then the argument collapses.
  • Sometimes people commit a slippery slope fallacy unintentionally. In these instances, use an example that demonstrates the problem with slippery slope arguments in general (e.g., by using statements to reach a conclusion that is not necessarily relevant to the initial statement). By attacking the concept of slippery slope arguments you can show that they are often fallacious.

People sometimes confuse cognitive bias and logical fallacies because they both relate to flawed thinking. However, they are not the same:

  • Cognitive bias is the tendency to make decisions or take action in an illogical way because of our values, memory, socialization, and other personal attributes. In other words, it refers to a fixed pattern of thinking rooted in the way our brain works.
  • Logical fallacies relate to how we make claims and construct our arguments in the moment. They are statements that sound convincing at first but can be disproven through logical reasoning.

In other words, cognitive bias refers to an ongoing predisposition, while logical fallacy refers to mistakes of reasoning that occur in the moment.

An appeal to ignorance (ignorance here meaning lack of evidence) is a type of informal logical fallacy.

It asserts that something must be true because it hasn’t been proven false—or that something must be false because it has not yet been proven true.

For example, “unicorns exist because there is no evidence that they don’t.” The appeal to ignorance is also called the burden of proof fallacy.

An ad hominem (Latin for “to the person”) is a type of informal logical fallacy. Instead of arguing against a person’s position, an ad hominem argument attacks the person’s character or actions in an effort to discredit them.

This rhetorical strategy is fallacious because a person’s character, motive, education, or other personal trait is logically irrelevant to whether their argument is true or false.

Name-calling is common in ad hominem fallacy (e.g., “environmental activists are ineffective because they’re all lazy tree-huggers”).

Ad hominem is a persuasive technique where someone tries to undermine the opponent’s argument by personally attacking them.

In this way, one can redirect the discussion away from the main topic and toward the opponent’s personality without engaging with their viewpoint. When the opponent’s personality is irrelevant to the discussion, we call it an ad hominem fallacy.

A fallacy is a mistaken belief, particularly one based on unsound arguments or one that lacks the evidence to support it. Common types of fallacy that may compromise the quality of your research are:

  • Correlation/causation fallacy: Claiming that two events that occur together have a cause-and-effect relationship even though this can’t be proven
  • Ecological fallacy: Making inferences about the nature of individuals based on aggregate data for the group
  • The sunk cost fallacy: Following through on a project or decision because we have already invested time, effort, or money into it, even if the current costs outweigh the benefits
  • The base-rate fallacy: Ignoring base-rate or statistically significant information, such as sample size or the relative frequency of an event, in favor of less relevant information (e.g., information pertaining to a single case or a small number of cases)
  • The planning fallacy: Underestimating the time needed to complete a future task, even when we know that similar tasks in the past have taken longer than planned

Argumentum ad hominem means “argument to the person” in Latin and it is commonly referred to as ad hominem argument or personal attack. Ad hominem arguments are used in debates to refute an argument by attacking the character of the person making it, instead of the logic or premise of the argument itself.

The opposite of the hasty generalization fallacy is called the slothful induction fallacy or appeal to coincidence.

It is the tendency to deny a conclusion even though there is sufficient evidence that supports it. Slothful induction occurs due to our natural tendency to dismiss events or facts that do not align with our personal biases and expectations. For example, a researcher may try to explain away unexpected results by claiming they are just a coincidence.

To avoid a hasty generalization fallacy, we need to ensure that the conclusions drawn are well-supported by the appropriate evidence. More specifically:

  • In statistics, if we want to draw inferences about an entire population, we need to make sure that the sample is random and representative of the population. We can achieve that by using a probability sampling method, like simple random sampling or stratified sampling.
  • In academic writing, use precise language and measured phrases. Avoid making absolute claims, and cite specific instances and examples without applying the findings to a larger group.
  • As readers, we need to ask ourselves, “Does the writer demonstrate sufficient knowledge of the situation or phenomenon that would allow them to make a generalization?”

The hasty generalization fallacy and the anecdotal evidence fallacy are similar in that they both result in conclusions drawn from insufficient evidence. However, there is a difference between the two:

  • The hasty generalization fallacy involves genuinely considering an example or case (i.e., the evidence comes first and then an incorrect conclusion is drawn from this).
  • The anecdotal evidence fallacy (also known as “cherry-picking”) is knowing in advance what conclusion we want to support, and then selecting the story (or a few stories) that support it. By overemphasizing anecdotal evidence that fits well with the point we are trying to make, we overlook evidence that would undermine our argument.

Although many sources use circular reasoning fallacy and begging the question interchangeably, others point out that there is a subtle difference between the two:

  • Begging the question fallacy occurs when you assume that an argument is true in order to justify a conclusion. If something begs the question, what you are actually asking is, “Is the premise of that argument actually true?” For example, the statement “Snakes make great pets. That’s why we should get a snake” begs the question “are snakes really great pets?”
  • Circular reasoning fallacy, on the other hand, occurs when the evidence used to support a claim is just a repetition of the claim itself. For example, “People have free will because they can choose what to do.”

In other words, we could say begging the question is a form of circular reasoning.

Circular reasoning fallacy uses circular reasoning to support an argument. More specifically, the evidence used to support a claim is just a repetition of the claim itself. For example: “The President of the United States is a good leader (claim), because they are the leader of this country (supporting evidence)”.

An example of a non sequitur is the following statement:

“Giving up nuclear weapons weakened the United States’ military. Giving up nuclear weapons also weakened China. For this reason, it is wrong to try to outlaw firearms in the United States today.”

Clearly there is a step missing in this line of reasoning, and the conclusion does not follow from the premise, resulting in a non sequitur fallacy.

The difference between the post hoc fallacy and the non sequitur fallacy is that post hoc fallacy infers a causal connection between two events where none exists, whereas the non sequitur fallacy infers a conclusion that lacks a logical connection to the premise.

In other words, a post hoc fallacy occurs when there is a lack of a cause-and-effect relationship, while a non sequitur fallacy occurs when there is a lack of logical connection.

The post hoc fallacy and the hasty generalization fallacy are similar in that they both involve jumping to conclusions. However, there is a difference between the two:

  • Post hoc fallacy is assuming a cause-and-effect relationship between two events, simply because one happened after the other.
  • Hasty generalization fallacy is drawing a general conclusion from a small sample or little evidence.

In other words, the post hoc fallacy involves a leap to a causal claim; the hasty generalization fallacy involves a leap to a general proposition.

The fallacy of composition is similar to and can be confused with the hasty generalization fallacy . However, there is a difference between the two:

  • The fallacy of composition involves drawing an inference about the characteristics of a whole or group based on the characteristics of its individual members.
  • The hasty generalization fallacy involves drawing an inference about a population or class of things on the basis of a few atypical instances or a small sample of that population or thing.

In other words, the fallacy of composition is using an unwarranted assumption that we can infer something about a whole based on the characteristics of its parts, while the hasty generalization fallacy is using insufficient evidence to draw a conclusion.

The opposite of the fallacy of composition is the fallacy of division . In the fallacy of division, the assumption is that a characteristic which applies to a whole or a group must necessarily apply to the parts or individual members. For example, “Australians travel a lot. Gary is Australian, so he must travel a lot.”


41+ Critical Thinking Examples (Definition + Practices)


Critical thinking is an essential skill in our information-overloaded world, where figuring out what is fact and fiction has become increasingly challenging.

But why is critical thinking essential? Put simply, critical thinking empowers us to make better decisions, challenge and validate our beliefs and assumptions, and understand and interact with the world more effectively and meaningfully.

Critical thinking is like using your brain's "superpowers" to make smart choices. Whether it's picking the right insurance, deciding what to do in a job, or discussing topics in school, thinking deeply helps a lot. In the next parts, we'll share real-life examples of when this superpower comes in handy and give you some fun exercises to practice it.

Critical Thinking Process Outline


Critical thinking means thinking clearly and fairly without letting personal feelings get in the way. It's like being a detective, trying to solve a mystery by using clues and thinking hard about them.

It isn't always easy to think critically; spotting the questions that aren't being answered in a given situation takes practice. But we can train our brains to think more like puzzle solvers, which helps develop our critical thinking skills.

Here's what it looks like step by step:

Spotting the Problem: It's like discovering a puzzle to solve. You see that there's something you need to figure out or decide.

Collecting Clues: Now, you need to gather information. Maybe you read about it, watch a video, talk to people, or do some research. It's like getting all the pieces to solve your puzzle.

Breaking It Down: This is where you look at all your clues and try to see how they fit together. You're asking questions like: Why did this happen? What could happen next?

Checking Your Clues: You want to make sure your information is good. This means seeing if what you found out is true and if you can trust where it came from.

Making a Guess: After looking at all your clues, you think about what they mean and come up with an answer. This answer is like your best guess based on what you know.

Explaining Your Thoughts: Now, you tell others how you solved the puzzle. You explain how you thought about it and how you answered. 

Checking Your Work: This is like looking back and seeing if you missed anything. Did you make any mistakes? Did you let any personal feelings get in the way? This step helps make sure your thinking is clear and fair.

And remember, you might sometimes need to go back and redo some steps if you discover something new. If you realize you missed an important clue, you might have to go back and collect more information.

Critical Thinking Methods

Just like doing push-ups or running helps our bodies get stronger, there are special exercises that help our brains think better. These brain workouts push us to think harder, look at things closely, and ask many questions.

It's not always about finding the "right" answer. Instead, it's about the journey of thinking and asking "why" or "how." Doing these exercises often helps us become better thinkers and makes us curious to know more about the world.

Now, let's look at some brain workouts to help us think better:

1. "What If" Scenarios

Imagine crazy things happening, like, "What if there was no internet for a month? What would we do?" These games help us think of new and different ideas.

2. Argue Both Sides

Pick a hot topic. Argue one side of it and then try arguing the opposite. This makes us see different viewpoints and think deeply about a topic.

3. Analyze Visual Data

Check out charts or pictures with lots of numbers and info but no explanations. What story are they telling? This helps us get better at understanding information just by looking at it.

4. Mind Mapping

Write an idea in the center and then draw lines to related ideas. It's like making a map of your thoughts. This helps us see how everything is connected.

There's lots of mind-mapping software , but it's also nice to do this by hand.

5. Weekly Diary

Every week, write about what happened, the choices you made, and what you learned. Writing helps us think about our actions and how we can do better.

6. Evaluating Information Sources

Collect stories or articles about one topic from newspapers or blogs. Which ones are trustworthy? Which ones might be a little biased? This teaches us to be smart about where we get our info.

There are many resources to help you determine if information sources are factual or not.

7. Socratic Questioning

This way of thinking is called the Socratic Method, named after Socrates, a famous thinker from ancient Greece. It's about asking lots of questions to understand a topic. You can do this by yourself or chat with a friend.

Start with a Big Question:

"What does 'success' mean?"

Dive Deeper with More Questions:

"Why do you think of success that way?" "Do TV shows, friends, or family make you think that?" "Does everyone think about success the same way?"

Challenge the Idea:

"Can someone be a winner even if they aren't rich or famous?" "Can someone feel like they didn't succeed, even if everyone else thinks they did?"

Look for Real-life Examples:

"Who is someone you think is successful? Why?" "Was there a time you felt like a winner? What happened?"

Think About Other People's Views:

"How might a person from another country think about success?" "Does the idea of success change as we grow up or as our life changes?"

Think About What It Means:

"How does your idea of success shape what you want in life?" "Are there problems with only wanting to be rich or famous?"

Look Back and Think:

"After talking about this, did your idea of success change? How?" "Did you learn something new about what success means?"


8. Six Thinking Hats 

Edward de Bono came up with a cool way to solve problems by thinking in six different ways, like wearing different colored hats. You can do this independently, but it might be more effective in a group so everyone can wear a different colored hat. Each color has its own way of thinking:

White Hat (Facts): Just the facts! Ask, "What do we know? What do we need to find out?"

Red Hat (Feelings): Talk about feelings. Ask, "How do I feel about this?"

Black Hat (Careful Thinking): Be cautious. Ask, "What could go wrong?"

Yellow Hat (Positive Thinking): Look on the bright side. Ask, "What's good about this?"

Green Hat (Creative Thinking): Think of new ideas. Ask, "What's another way to look at this?"

Blue Hat (Planning): Organize the talk. Ask, "What should we do next?"

When using this method with a group:

  • Explain all the hats.
  • Decide which hat to wear first.
  • Make sure everyone switches hats at the same time.
  • Finish with the Blue Hat to plan the next steps.

9. SWOT Analysis

SWOT Analysis is like a game plan for businesses to know where they stand and where they should go. "SWOT" stands for Strengths, Weaknesses, Opportunities, and Threats.

There are a lot of SWOT templates out there for doing this visually, but you can also just think it through. It doesn't only apply to businesses; it can also be a good way to assess whether a project you're working on is on track.

Strengths: What's working well? Ask, "What are we good at?"

Weaknesses: Where can we do better? Ask, "Where can we improve?"

Opportunities: What good things might come our way? Ask, "What chances can we grab?"

Threats: What challenges might we face? Ask, "What might make things tough for us?"

Steps to do a SWOT Analysis:

  • Goal: Decide what you want to find out.
  • Research: Learn about your business and the world around it.
  • Brainstorm: Get a group and think together. Talk about strengths, weaknesses, opportunities, and threats.
  • Pick the Most Important Points: Some things might be more urgent or important than others.
  • Make a Plan: Decide what to do based on your SWOT list.
  • Check Again Later: Things change, so look at your SWOT again after a while to update it.

Now that you have a few tools for thinking critically, let’s get into some specific examples.

Everyday Examples

Life is a series of decisions. From the moment we wake up, we're faced with choices – some trivial, like choosing a breakfast cereal, and some more significant, like buying a home or confronting an ethical dilemma at work. While it might seem that these decisions are disparate, they all benefit from the application of critical thinking.

10. Deciding to buy something

Imagine you want a new phone. Don't just buy it because the ad looks cool. Think about what you need in a phone. Look up different phones and see what people say about them. Choose the one that's the best deal for what you want.

11. Deciding what is true

There's a lot of news everywhere. Don't believe everything right away. Think about why someone might be telling you this. Check if what you're reading or watching is true. Make up your mind after you've looked into it.

12. Deciding when you’re wrong

Sometimes, friends can have disagreements. Don't just get mad right away. Try to see where they're coming from. Talk about what's going on. Find a way to fix the problem that's fair for everyone.

13. Deciding what to eat

There's always a new diet or exercise that's popular. Don't just follow it because it's trendy. Find out if it's good for you. Ask someone who knows, like a doctor. Make choices that make you feel good and stay healthy.

14. Deciding what to do today

Everyone is busy with school, chores, and hobbies. Make a list of things you need to do. Decide which ones are most important. Plan your day so you can get things done and still have fun.

15. Making Tough Choices

Sometimes, it's hard to know what's right. Think about how each choice will affect you and others. Talk to people you trust about it. Choose what feels right in your heart and is fair to others.

16. Planning for the Future

Big decisions, like where to go to school, can be tricky. Think about what you want in the future. Look at the good and bad of each choice. Talk to people who know about it. Pick what feels best for your dreams and goals.


Job Examples

17. Solving Problems

When a machine breaks at a factory, workers brainstorm ways to fix it quickly without making things worse.

18. Decision Making

A store manager decides which products to order more of based on what's selling best.

19. Setting Goals

A team leader helps their team decide what tasks are most important to finish this month and which can wait.

20. Evaluating Ideas

At a team meeting, everyone shares ideas for a new project. The group discusses each idea's pros and cons before picking one.

21. Handling Conflict

Two workers disagree on how to do a job. Instead of arguing, they talk calmly, listen to each other, and find a solution they both like.

22. Improving Processes

A cashier thinks of a faster way to ring up items so customers don't have to wait as long.

23. Asking Questions

Before starting a big task, an employee asks for clear instructions and checks if they have the necessary tools.

24. Checking Facts

Before presenting a report, someone double-checks all their information to make sure there are no mistakes.

25. Planning for the Future

A business owner thinks about what might happen in the next few years, like new competitors or changes in what customers want, and makes plans based on those thoughts.

26. Understanding Perspectives

A team is designing a new toy. They think about what kids and parents would both like instead of just what they think is fun.

School Examples

27. Researching a Topic

For a history project, a student looks up different sources to understand an event from multiple viewpoints.

28. Debating an Issue

In a class discussion, students pick sides on a topic, like school uniforms, and share reasons to support their views.

29. Evaluating Sources

While writing an essay, a student checks if the information from a website is trustworthy or might be biased.

30. Problem Solving in Math

When stuck on a tricky math problem, a student tries different methods to find the answer instead of giving up.

31. Analyzing Literature

In English class, students discuss why a character in a book made certain choices and what those decisions reveal about them.

32. Testing a Hypothesis

For a science experiment, students guess what will happen and then conduct tests to see if they're right or wrong.

33. Giving Peer Feedback

After reading a classmate's essay, a student offers suggestions for improving it.

34. Questioning Assumptions

In a geography lesson, students consider why certain countries are called "developed" and what that label means.

35. Designing a Study

For a psychology project, students plan an experiment to understand how people's memories work and think of ways to ensure accurate results.

36. Interpreting Data

In a science class, students look at charts and graphs from a study, then discuss what the information tells them and if there are any patterns.

Critical Thinking Puzzles


Not all scenarios will have a single correct answer that can be figured out by thinking critically. Sometimes we have to think critically about ethical choices or moral behaviors. 

Here are some mind games and scenarios you can solve using critical thinking. You can see the solution(s) at the end of the post.

37. The Farmer, Fox, Chicken, and Grain Problem

A farmer is at a riverbank with a fox, a chicken, and a grain bag. He needs to get all three items across the river. However, his boat can only carry himself and one of the three items at a time. 

Here's the challenge:

  • If the fox is left alone with the chicken, the fox will eat the chicken.
  • If the chicken is left alone with the grain, the chicken will eat the grain.

How can the farmer get all three items across the river without any item being eaten? 

38. The Rope, Jar, and Pebbles Problem

You are in a room with two long ropes hanging from the ceiling. Each rope is just out of arm's reach from the other, so you can't hold onto one rope and reach the other simultaneously. 

Your task is to tie the two rope ends together, but you can't move the position where they hang from the ceiling.

You are given a jar full of pebbles. How do you complete the task?

39. The Two Guards Problem

Imagine there are two doors. One door leads to certain doom, and the other leads to freedom. You don't know which is which.

In front of each door stands a guard. One guard always tells the truth. The other guard always lies. You don't know which guard is which.

You can ask only one question to one of the guards. What question should you ask to find the door that leads to freedom?

40. The Hourglass Problem

You have two hourglasses. One measures 7 minutes when turned over, and the other measures 4 minutes. Using just these hourglasses, how can you time exactly 9 minutes?

41. The Lifeboat Dilemma

Imagine you're on a ship that's sinking. You get on a lifeboat, but it's already too full and might flip over. 

Nearby in the water, five people are struggling: a scientist close to finding a cure for a sickness, an old couple who've been together for a long time, a mom with three kids waiting at home, and a tired teenager who helped save others but is now in danger. 

You can only save one person without making the boat flip. Who would you choose?

42. The Tech Dilemma

You work at a tech company and help make a computer program to help small businesses. You're almost ready to share it with everyone, but you find out there might be a small chance it has a problem that could show users' private info. 

If you decide to fix it, you must wait two more months before sharing it. But your bosses want you to share it now. What would you do?

43. The History Mystery

Dr. Amelia is a history expert. She's studying where a group of people traveled long ago. She reads old letters and documents to learn about it. But she finds some letters that tell a different story than what most people believe. 

If she says this new story is true, it could change what people learn in school and what they think about history. What should she do?

The Role of Bias in Critical Thinking

Have you ever decided you don’t like someone before you even know them? Or maybe someone shared an idea with you that you immediately loved without even knowing all the details. 

This experience is called bias, which occurs when you like or dislike something or someone without a good reason or knowing why. It can also take shape in certain reactions to situations, like a habit or instinct. 

Bias comes from our own experiences, what friends or family tell us, or even things we are born believing. Sometimes, bias can help us stay safe, but other times it stops us from seeing the truth.

Not all bias is bad. Bias can be a mechanism for assessing our potential safety in a new situation. If we are biased to think that anything long, thin, and curled up is a snake, we might assume a coiled rope is something to be afraid of before we realize it is just a rope.

While bias might serve us in some situations (like jumping out of the way of an actual snake before we have time to process that we need to be jumping out of the way), it often harms our ability to think critically.

How Bias Gets in the Way of Good Thinking

Selective Perception: We only notice things that match our ideas and ignore the rest. 

It's like only picking red candies from a mixed bowl because you think they taste the best, but they taste the same as every other candy in the bowl. It could also be when we see all the signs that our partner is cheating on us but choose to ignore them because we are happy the way we are (or at least, we think we are).

Agreeing with Yourself: This is called “confirmation bias”: we only listen to ideas that match our own, and we seek, interpret, and remember information in a way that confirms what we already think we know or believe.

An example is when someone wants to know if it is safe to vaccinate their children but already believes that vaccines are not safe, so they only look for information supporting the idea that vaccines are bad.

Thinking We Know It All: Similar to confirmation bias, this is called “overconfidence bias.” Sometimes we think our ideas are the best and don't listen to others. This can stop us from learning.

Have you ever met someone you'd consider a “know-it-all”? They probably have a lot of overconfidence bias: they may know many things accurately, but they can't know everything, and acting as though they do is overconfidence bias.

A related bias is the Dunning-Kruger effect: someone who is bad at what they do nevertheless believes, and acts like, they are the best.

Following the Crowd: This is formally called “groupthink”. It's hard to speak up with a different idea if everyone agrees. But this can lead to mistakes.

An example we've all likely seen is the cool clique in primary school. There is usually one person who is the head of the group, the “coolest kid in school”, and everyone listens to them and does what they want, even if they don't think it's a good idea.

How to Overcome Biases

Here are a few ways to learn to think better, free from our biases (or at least aware of them!).

Know Your Biases: Realize that everyone has biases. If we know about them, we can think better.

Listen to Different People: Talking to different kinds of people can give us new ideas.

Ask Why: Always ask yourself why you believe something. Is it true, or is it just a bias?

Understand Others: Try to think about how others feel. It helps you see things in new ways.

Keep Learning: Always be curious and open to new information.


In today's world, everything changes fast, and there's so much information everywhere. This makes critical thinking super important. It helps us distinguish between what's real and what's made up. It also helps us make good choices. But thinking this way can be tough sometimes because of biases. These are like sneaky thoughts that can trick us. The good news is we can learn to see them and think better.

There are cool tools and ways we've talked about, like the "Socratic Questioning" method and the "Six Thinking Hats." These tools help us get better at thinking. These thinking skills can also help us in school, work, and everyday life.

We’ve also looked at specific scenarios where critical thinking would be helpful, such as deciding what diet to follow and checking facts.

Thinking isn't just a skill—it's a special talent we improve over time. Working on it lets us see things more clearly and understand the world better. So, keep practicing and asking questions! It'll make you a smarter thinker and help you see the world differently.

Critical Thinking Puzzles (Solutions)

The Farmer, Fox, Chicken, and Grain Problem

  • The farmer first takes the chicken across the river and leaves it on the other side.
  • He returns to the original side and takes the fox across the river.
  • After leaving the fox on the other side, he returns the chicken to the starting side.
  • He leaves the chicken on the starting side and takes the grain bag across the river.
  • He leaves the grain with the fox on the other side and returns to get the chicken.
  • The farmer takes the chicken across, and now all three items (the fox, the chicken, and the grain) are safely on the other side of the river.
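A solution like this can also be found mechanically by searching the puzzle's state space. Here is a minimal sketch in Python (the function and variable names are my own, not from the article) that breadth-first-searches all safe river crossings:

```python
from collections import deque

ITEMS = {"fox", "chicken", "grain"}

def safe(bank, farmer_here):
    # A bank is safe if the farmer is there, or no eater is left with its food.
    if farmer_here:
        return True
    return not ({"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank)

def solve():
    # State: (items still on the starting bank, is the farmer on the starting bank?)
    start = (frozenset(ITEMS), True)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if not left and not farmer_left:
            return path  # everything is across the river
        here = left if farmer_left else ITEMS - left
        for cargo in [None] + sorted(here):  # cross alone, or with one item
            new_left = set(left)
            if cargo is not None:
                (new_left.remove if farmer_left else new_left.add)(cargo)
            state = (frozenset(new_left), not farmer_left)
            left_ok = safe(state[0], state[1])
            right_ok = safe(ITEMS - state[0], not state[1])
            if left_ok and right_ok and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "nothing"]))
    return None

print(solve())  # a shortest plan: 7 crossings, starting and ending with the chicken
```

Because breadth-first search explores shorter plans first, it confirms that 7 crossings is the minimum and that the first and last trips must carry the chicken, exactly as in the step-by-step solution.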

The Rope, Jar, and Pebbles Problem

  • Take one rope and tie the jar of pebbles to its end.
  • Swing the rope with the jar in a pendulum motion.
  • While the rope is swinging, grab the other rope and wait.
  • As the swinging rope comes back within reach due to its pendulum motion, grab it.
  • With both ropes within reach, untie the jar and tie the rope ends together.

The Two Guards Problem

Ask either guard, "Which door would the other guard say leads to freedom?" Both guards will point to the door that leads to doom, so choose the opposite door.
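This kind of question can be verified exhaustively. A small Python sketch (my own naming, assuming the question "Which door would the other guard say leads to freedom?") shows that whichever guard you ask, a "yes" always marks the doom door, so you take the opposite one:

```python
def says(guard_lies, truth):
    # What a guard reports when the honest answer is `truth`.
    return not truth if guard_lies else truth

def reply(asked_lies, door_is_freedom):
    # "Would the OTHER guard say this door leads to freedom?"
    other_would_say = says(not asked_lies, door_is_freedom)
    return says(asked_lies, other_would_say)

# Check every case: the reply is "yes" exactly when the door leads to doom.
for asked_lies in (False, True):
    for door_is_freedom in (False, True):
        assert reply(asked_lies, door_is_freedom) == (not door_is_freedom)
print("In every case the indicated door is doom, so take the opposite one.")
```

The trick is that the question routes your answer through both guards, so exactly one lie is always applied, no matter whom you ask.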

The Hourglass Problem

  • Start both hourglasses. 
  • When the 4-minute hourglass runs out, turn it over.
  • When the 7-minute hourglass runs out (7 minutes have passed), the 4-minute hourglass has been running for 3 minutes since its flip, leaving 1 minute of sand. Turn the 7-minute hourglass over. 
  • When the 4-minute hourglass runs out for the second time (8 minutes have passed), the 7-minute hourglass has been running for 1 minute since its flip. Turn it over once more; that 1 minute of sand runs back down, and when it empties, exactly 9 minutes have passed.
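The schedule above is just bookkeeping over flip times, so it is easy to check with a few lines of Python (the variable names are my own):

```python
# t = 0: start both hourglasses.
flip_4 = 4                  # the 4-minute glass empties at t = 4; flip it immediately
flip_7 = 7                  # the 7-minute glass empties at t = 7; flip it immediately
second_4_done = flip_4 + 4  # the 4-minute glass empties again at t = 8

# At t = 8 the 7-minute glass has run for (8 - 7) = 1 minute since its flip.
# Flipping it back sends that 1 minute of sand through again.
sand_in_bottom = second_4_done - flip_7
finish = second_4_done + sand_in_bottom

print(finish)  # 9
```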


The Lifeboat Dilemma

There isn’t one correct answer to this problem. Here are some elements to consider:

  • Moral Principles: What values guide your decision? Is it the potential greater good for humanity (the scientist)? The value of long-standing love and commitment (the elderly couple)? The future of young children who depend on their mother? Or the selfless bravery of the teenager?
  • Future Implications: Consider the future consequences of each choice. Saving the scientist might benefit millions in the future, but what moral message does it send about the value of individual lives?
  • Emotional vs. Logical Thinking: While it's essential to engage empathy, it's also crucial not to let emotions cloud judgment entirely. For instance, while the teenager's bravery is commendable, does it make him more deserving of a spot on the boat than the others?
  • Acknowledging Uncertainty: The scientist claims to be close to a significant breakthrough, but there's no certainty. How does this uncertainty factor into your decision?
  • Personal Bias: Recognize and challenge any personal biases, such as biases towards age, profession, or familial status.

The Tech Dilemma

Again, there isn’t one correct answer to this problem. Here are some elements to consider:

  • Evaluate the Risk: How severe is the potential vulnerability? Can it be easily exploited, or would it require significant expertise? Even if the circumstances are rare, what would be the consequences if the vulnerability were exploited?
  • Stakeholder Considerations: Different stakeholders will have different priorities. Upper management might prioritize financial projections, the marketing team might be concerned about the product's reputation, and customers might prioritize the security of their data. How do you balance these competing interests?
  • Short-Term vs. Long-Term Implications: While launching on time could meet immediate financial goals, consider the potential long-term damage to the company's reputation if the vulnerability is exploited. Would the short-term gains be worth the potential long-term costs?
  • Ethical Implications : Beyond the financial and reputational aspects, there's an ethical dimension to consider. Is it right to release a product with a known vulnerability, even if the chances of it being exploited are low?
  • Seek External Input: Consulting with cybersecurity experts outside your company might be beneficial. They could provide a more objective risk assessment and potential mitigation strategies.
  • Communication: How will you communicate the decision, whatever it may be, both internally to your team and upper management and externally to your customers and potential users?

The History Mystery

Dr. Amelia should take the following steps:

  • Verify the Letters: Before making any claims, she should check that the letters are authentic and not forged. She can do this by examining when and where they were written and whether they are consistent with other sources from that time.
  • Get a Second Opinion: It's always good to have someone else look at what you've found. Dr. Amelia could show the letters to other history experts and see their thoughts.
  • Research More: Maybe there are more documents or letters out there that support this new story. Dr. Amelia should keep looking to see if she can find more evidence.
  • Share the Findings: If Dr. Amelia believes the letters are true after all her checks, she should tell others. This can be through books, talks, or articles.
  • Stay Open to Feedback: Some people might agree with Dr. Amelia, and others might not. She should listen to everyone and be ready to learn more or change her mind if new information arises.

Ultimately, Dr. Amelia's job is to find out the truth about history and share it. It's okay if this new truth differs from what people used to believe. History is about learning from the past, no matter the story.

© PracticalPsychology. All rights reserved

Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

Logical Fallacies


This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

This resource covers using logic within writing—logical vocabulary, logical fallacies, and other types of logos-based reasoning.

Fallacies are common errors in reasoning that will undermine the logic of your argument. Fallacies can be either illegitimate arguments or irrelevant points, and are often identified because they lack evidence that supports their claim. Avoid these common fallacies in your own arguments and watch for them in the arguments of others.

Slippery Slope: This is a conclusion based on the premise that if A happens, then eventually through a series of small steps, through B, C,..., X, Y, Z will happen, too, basically equating A and Z. So, if we don't want Z to occur, A must not be allowed to occur either. Example:

If we ban Hummers because they are bad for the environment, eventually the government will ban all cars, so we should not ban Hummers.

In this example, the author is equating banning Hummers with banning all cars, which is not the same thing.

Hasty Generalization: This is a conclusion based on insufficient or biased evidence. In other words, you are rushing to a conclusion before you have all the relevant facts. Example:

Even though it's only the first day, I can tell this is going to be a boring course.

In this example, the author is basing his evaluation of the entire course on only the first day, which is notoriously boring and full of housekeeping tasks for most courses. To make a fair and reasonable evaluation the author must attend not one but several classes, and possibly even examine the textbook, talk to the professor, or talk to others who have previously finished the course in order to have sufficient evidence to base a conclusion on.

Post hoc ergo propter hoc: This is a conclusion that assumes that if 'A' occurred after 'B' then 'B' must have caused 'A.' Example:

I drank bottled water and now I am sick, so the water must have made me sick.

In this example, the author assumes that if one event chronologically follows another the first event must have caused the second. But the illness could have been caused by the burrito the night before, a flu bug that had been working on the body for days, or a chemical spill across campus. There is no reason, without more evidence, to assume the water caused the person to be sick.

Genetic Fallacy: This conclusion is based on an argument that the origins of a person, idea, institute, or theory determine its character, nature, or worth. Example:

The Volkswagen Beetle is an evil car because it was originally designed by Hitler's army.

In this example the author is equating the character of a car with the character of the people who built the car. However, the two are not inherently related.

Begging the Claim: The conclusion that the writer should prove is validated within the claim. Example:

Filthy and polluting coal should be banned.

Arguing that coal pollutes the earth and thus should be banned would be logical. But the very conclusion that should be proved, that coal causes enough pollution to warrant banning its use, is already assumed in the claim by referring to it as "filthy and polluting."

Circular Argument: This restates the argument rather than actually proving it. Example:

George Bush is a good communicator because he speaks effectively.

In this example, the conclusion that Bush is a "good communicator" and the evidence used to prove it "he speaks effectively" are basically the same idea. Specific evidence such as using everyday language, breaking down complex problems, or illustrating his points with humorous stories would be needed to prove either half of the sentence.

Either/or: This is a conclusion that oversimplifies the argument by reducing it to only two sides or choices. Example:

We can either stop using cars or destroy the earth.

In this example, the two choices are presented as the only options, yet the author ignores a range of choices in between such as developing cleaner technology, car-sharing systems for necessities and emergencies, or better community planning to discourage daily driving.

Ad hominem: This is an attack on the character of a person rather than his or her opinions or arguments. Example:

Greenpeace's strategies aren't effective because they are all dirty, lazy hippies.

In this example, the author doesn't even name particular strategies Greenpeace has suggested, much less evaluate those strategies on their merits. Instead, the author attacks the characters of the individuals in the group.

Ad populum/Bandwagon Appeal: This is an appeal that presents what most people, or a group of people think, in order to persuade one to think the same way. Getting on the bandwagon is one such instance of an ad populum appeal.

If you were a true American you would support the rights of people to choose whatever vehicle they want.

In this example, the author equates being a "true American," a concept that people want to be associated with, particularly in a time of war, with allowing people to buy any vehicle they want even though there is no inherent connection between the two.

Red Herring: This is a diversionary tactic that avoids the key issues, often by avoiding opposing arguments rather than addressing them. Example:

The level of mercury in seafood may be unsafe, but what will fishers do to support their families?

In this example, the author switches the discussion away from the safety of the food and talks instead about an economic issue, the livelihood of those catching fish. While one issue may affect the other it does not mean we should ignore possible safety issues because of possible economic consequences to a few individuals.

Straw Man: This move oversimplifies an opponent's viewpoint and then attacks that hollow argument.

People who don't support the proposed state minimum wage increase hate the poor.

In this example, the author attributes the worst possible motive to an opponent's position. In reality, however, the opposition probably has more complex and sympathetic arguments to support their point. By not addressing those arguments, the author is not treating the opposition with respect or refuting their position.

Moral Equivalence: This fallacy compares minor misdeeds with major atrocities, suggesting that both are equally immoral.

That parking attendant who gave me a ticket is as bad as Hitler.

In this example, the author is comparing the relatively harmless actions of a person doing their job with the horrific actions of Hitler. This comparison is unfair and inaccurate.


Logical Fallacies | Definition, Types, List & Examples

Published on April 20, 2023 by Kassiani Nikolopoulou. Revised on October 9, 2023.

A logical fallacy is an argument that may sound convincing or true but is actually flawed. Logical fallacies are leaps of logic that lead us to an unsupported conclusion. People may commit a logical fallacy unintentionally, due to poor reasoning, or intentionally, in order to manipulate others.

Because logical fallacies can be deceptive, it is important to be able to spot them in your own argumentation and that of others.

Table of contents

  • What is a logical fallacy?
  • Types of logical fallacies
  • What are common logical fallacies?
  • Logical fallacy examples
  • Frequently asked questions about logical fallacies

There are many logical fallacies.

A logical fallacy is an error in reasoning that occurs when invalid arguments or irrelevant points are introduced without any evidence to support them. People often resort to logical fallacies when their goal is to persuade others. Because fallacies appear to be correct even though they are not, people can be tricked into accepting them.

The majority of logical fallacies involve arguments—in other words, one or more statements (called the premise ) and a conclusion . The premise is offered in support of the claim being made, which is the conclusion.

There are two types of mistakes that can occur in arguments:

  • A factual error in the premises. Here, the mistake is not one of logic: a premise can be proven or disproven with facts. For example, if you counted 13 people in the room when there were 14, you made a factual mistake.
  • The premises fail to logically support the conclusion. A logical fallacy is usually a mistake of this type. For example, students who claim that English 101 is a useless course without ever proving it have merely “begged the question” and moved on to the next part of their argument, skipping the most important part.

In other words, a logical fallacy violates the principles of critical thinking because the premises do not sufficiently support the conclusion, while a factual error involves being wrong about the facts.

There are several ways to label and classify fallacies, such as according to the psychological reasons that lead people to use them or according to similarity in their form. Broadly speaking, there are two main types of logical fallacy, depending on what kind of reasoning error the argument contains:

Informal logical fallacies

Formal logical fallacies.

An informal logical fallacy occurs when there is an error in the content of an argument (i.e., it is based on irrelevant or false premises).

Informal fallacies can be further subdivided into groups according to similarity, such as relevance (informal fallacies that raise an irrelevant point) or ambiguity (informal fallacies that use ambiguous words or phrases, the meanings of which change in the course of discussion).

“Some philosophers argue that all acts are selfish. Even if you strive to serve others, you are still acting selfishly, because your act is just to satisfy your desire to serve others.”

A formal logical fallacy occurs when there is an error in the logical structure of an argument.

Premise 1: Spider-Man is Peter Parker.

Premise 2: The citizens of New York know that Spider-Man saved their city.

Conclusion: The citizens of New York know that Peter Parker saved their city.  

This argument is invalid, because even though Spider-Man is in fact Peter Parker, the citizens of New York don’t necessarily know Spider-Man’s true identity and therefore don’t necessarily know that Peter Parker saved their city.

A logical fallacy may arise in any form of communication, ranging from debates to writing, but it may also crop up in our own internal reasoning. Here are some examples of common fallacies that you may encounter in the media, in essays, and in everyday discussions.


Red herring logical fallacy

The red herring fallacy is the deliberate attempt to mislead and distract an audience by bringing up an unrelated issue to falsely oppose the issue at hand. Essentially, it is an attempt to change the subject and divert attention elsewhere.

Bandwagon logical fallacy

The bandwagon logical fallacy (or ad populum fallacy ) occurs when we base the validity of our argument on how many people believe or do the same thing as we do. In other words, we claim that something must be true simply because it is popular.

This fallacy can easily go unnoticed in everyday conversations because the argument may sound reasonable at first. However, it doesn’t factor in whether or not “everyone” who claims x is in fact qualified to do so.

Straw man logical fallacy

The straw man logical fallacy is the distortion of an opponent’s argument to make it easier to refute. By exaggerating or simplifying someone’s position, one can easily attack a weak version of it and ignore their real argument.

Person 2: “So you are fine with children taking ecstasy and LSD?”

Slippery slope logical fallacy

The slippery slope logical fallacy occurs when someone asserts that a relatively small step or initial action will lead to a chain of events resulting in a drastic change or undesirable outcome. However, no evidence is offered to prove that this chain reaction will indeed happen.

Hasty generalization logical fallacy

The hasty generalization fallacy (or jumping to conclusions ) occurs when we use a small sample or exceptional cases to draw a conclusion or generalize a rule.

False dilemma logical fallacy

A false dilemma (or either/or fallacy) is a common persuasion technique in advertising. It presents us with only two possible options without considering the broad range of possible alternatives.

In other words, the campaign suggests that animal testing and child mortality are the only two options available. One has to save either animal lives or children’s lives.

Correlation/causation fallacy

People often confuse correlation (i.e., the fact that two things happen one after the other or at the same time) with causation (the fact that one thing causes the other to happen).

It’s possible, for example, that people with MS have lower vitamin D levels because of their decreased mobility and sun exposure, rather than the other way around.

It’s important to carefully account for other factors that may be involved in any observed relationship. The fact that two events or variables are associated in some way does not necessarily imply that there is a cause-and-effect relationship between them and cannot tell us the direction of any cause-and-effect relationship that does exist.


An ad hominem (Latin for “to the person”) is a type of informal logical fallacy . Instead of arguing against a person’s position, an ad hominem argument attacks the person’s character or actions in an effort to discredit them.

This rhetorical strategy is fallacious because a person’s character, motive, education, or other personal trait is logically irrelevant to whether their argument is true or false.

Name-calling is common in the ad hominem fallacy (e.g., “environmental activists are ineffective because they’re all lazy tree-huggers”).

An appeal to ignorance (ignorance here meaning lack of evidence) is a type of informal logical fallacy .

It asserts that something must be true because it hasn’t been proven false—or that something must be false because it has not yet been proven true.

For example: “unicorns exist because there is no evidence that they don’t.” The appeal to ignorance is also called the burden of proof fallacy.

People sometimes confuse cognitive bias and logical fallacies because they both relate to flawed thinking. However, they are not the same:

  • Cognitive bias is the tendency to make decisions or take action in an illogical way because of our values, memory, socialization, and other personal attributes. In other words, it refers to a fixed pattern of thinking rooted in the way our brain works.
  • Logical fallacies relate to how we make claims and construct our arguments in the moment. They are statements that sound convincing at first but can be disproven through logical reasoning.

In other words, cognitive bias refers to an ongoing predisposition, while logical fallacy refers to mistakes of reasoning that occur in the moment.



The Post Hoc Pitfall: Rethinking Sensitivity and Specificity in Clinical Practice

  • Published: 27 February 2024

  • José Nunes de Alencar Neto, MD (ORCID: orcid.org/0000-0002-3835-6067) 1
  • Leopoldo Santos-Neto, MD, PhD 2

INTRODUCTION

In medical education, sensitivity and specificity are often emphasized as essential criteria for evaluating the efficacy of diagnostic tests. 1 While these measures are pivotal, their application in isolation, as encouraged by the SpPin and SnNout mnemonics, 2 is not without limitations in the clinical environment. This article examines the shortcomings of that reliance, highlighting the post hoc nature of these metrics and the disconnect this creates in the context of pre hoc, or forward-looking, clinical diagnostics.

Subsequently, we will delve into the subject through hypothetical illustrative scenarios, postulating that likelihood ratios (LRs) present compelling alternatives. We will examine how, cognitively, LRs necessitate probabilistic thinking from clinicians by their very definition—a critical aspect often underappreciated in medical diagnostics.

THE POST HOC NATURE OF SENSITIVITY AND SPECIFICITY

Sensitivity and specificity are metrics calculated from studies where participants’ health status is already known. In contrast, clinical practice often requires “pre hoc” or “forward-looking” diagnostic tests to determine an unknown health outcome. 3 This creates a significant disconnect between the retrospective nature of these metrics and the prospective needs of clinical practice. 4

Sensitivity measures how well a test identifies true positives among those with the disease. Specificity gauges the test’s ability to correctly identify true negatives among healthy individuals. Mathematically:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)

where TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives, respectively.

As an example, suppose a doctor prescribes a diagnostic test that possesses a sensitivity and specificity of precisely 90% in order to identify a specific disease. It is tempting to assume that the patient has the disease with a 90% probability when the test is positive. This reasoning is fallacious. Sensitivity and specificity are not derived from the uncertain clinical scenarios to which these tests are frequently applied, but rather from populations with known disease status. Indeed, the doctor is employing the exam specifically to ascertain the patient’s unidentified health condition.
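The gap between a test’s specificity and the post-test probability can be made concrete with Bayes’ theorem. The following is a minimal sketch (Python, not from the article): the 90% sensitivity and specificity come from the example above, while the pre-test probabilities are arbitrary illustrative values.

```python
def post_test_probability(pre_test, sensitivity, specificity):
    """P(disease | positive test), computed via Bayes' theorem."""
    true_pos = pre_test * sensitivity               # P(D) * P(+|D)
    false_pos = (1 - pre_test) * (1 - specificity)  # P(not D) * P(+|not D)
    return true_pos / (true_pos + false_pos)

for pre_test in (0.01, 0.10, 0.50):
    p = post_test_probability(pre_test, sensitivity=0.90, specificity=0.90)
    print(f"pre-test {pre_test:.0%} -> post-test {p:.1%}")
```

With these numbers, a 1% pre-test probability yields only about an 8.3% post-test probability after a positive result; the post-test probability reaches 90% only when the pre-test probability is already 50%. The specificity alone tells the clinician none of this.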

THE PROPOSED ALTERNATIVE: ADVOCATING FOR A WIDER USE OF LIKELIHOOD RATIOS IN CLINICAL DECISION-MAKING

In contrast to sensitivity and specificity, likelihood ratios are more naturally applied in a “pre hoc” manner, allowing clinicians to update their diagnostic probabilities based on new evidence. Mathematically, the LR+ and the LR− are defined as follows:

LR+ = sensitivity / (1 − specificity)
LR− = (1 − sensitivity) / specificity

It is crucial to acknowledge that likelihood ratios are derived from sensitivity and specificity. However, their application offers distinct advantages. The perspective provided by likelihood ratios is advantageous because it advocates for a different view than usual: given that a test result is positive, by how many times does the chance of the patient having the disease increase? And if it is negative, by how many times does this chance decrease?

Notice that this perspective, in terms of probability, requires the physician to take a step back and think about the chance (or probability) of the patient having the disease in question. This insight is not provided by sensitivity and specificity alone.

To provide an illustration, consider a test characterized by a sensitivity of 20% and a specificity of 90%. A physician might be tempted to conclude, based solely on this information, that a positive test result signifies a 90% chance that the patient has the disease; however, this is not the case. Conversely, by focusing on likelihood ratios, they will ascertain that the LR+ is 2.0, signifying that a positive test result roughly doubles the patient’s probability of having the disease.

But double from what to what? If the disease probability is 5%, it will be 10% after the test, not 90% as determined by the specificity. This is where the use of likelihood ratios “forces” probabilistic thinking. They compel physicians to consider the pre-test probability or the baseline rate of a disease in a given population, a step often overlooked when relying solely on sensitivity and specificity. 5 , 6
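This updating step can be written out in a few lines. The sketch below (illustrative, not from the article) uses the example’s sensitivity of 20% and specificity of 90%, so LR+ = 2.0, and makes explicit that a likelihood ratio multiplies the odds, which is why a 5% probability becomes roughly, rather than exactly, 10%.

```python
def lr_positive(sensitivity, specificity):
    """Positive likelihood ratio: P(+ | disease) / P(+ | no disease)."""
    return sensitivity / (1 - specificity)

def update_probability(pre_test, lr):
    """Apply a likelihood ratio via the odds form of Bayes' theorem."""
    pre_odds = pre_test / (1 - pre_test)  # probability -> odds
    post_odds = pre_odds * lr             # LRs multiply odds, not probabilities
    return post_odds / (1 + post_odds)    # odds -> probability

lr = lr_positive(0.20, 0.90)              # LR+ = 0.20 / 0.10 = 2.0
print(f"{update_probability(0.05, lr):.3f}")
```

This prints approximately 0.095: at low pre-test probabilities, odds and probabilities nearly coincide, so “doubling” is a close approximation, but the exact machinery operates on odds.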

BAYESIAN REASONING IN CLINICAL PRACTICE: A DYNAMIC APPROACH TO DIAGNOSING

In medicine, the probabilistic nature of diagnosis is often overlooked, leading to a cognitive bias known as base-rate neglect. 7 Clinicians may focus too intently on the sensitivity and specificity of a test, neglecting the initial likelihood or actual prevalence of a disease in the population. This oversight can distort the application of Bayesian reasoning, resulting in flawed clinical decisions. Bayesian reasoning represents a dynamic framework in medical decision-making. This approach integrates prior probabilities and incorporates the diagnostic performance of a test. The strength of Bayesian reasoning lies in its ability to constantly update and adapt to new information, thereby offering a more nuanced and patient-centered diagnostic process. 8

To illuminate the application of Bayesian reasoning, let us contemplate an alternative scenario. Consider a patient who exhibits ST-segment elevation on an electrocardiogram (ECG) performed for screening purposes, yet who has no clinical signs or symptoms consistent with acute coronary syndrome (ACS). An accuracy study reported a sensitivity of 41% and a specificity of 94% for ST-segment elevation when diagnosing occlusion myocardial infarction. 9 At first glance, this finding might be interpreted as suggesting a 94% probability of the patient having the disease when ST-segment elevation is present. Using the standard formulas, we find:

LR+ = 0.41 / (1 − 0.94) ≈ 6.83
LR− = (1 − 0.41) / 0.94 ≈ 0.63

Given the patient’s asymptomatic status, the clinician estimates the pre-test probability of occlusion myocardial infarction to be about 1%. Using the Fagan nomogram 10 and applying the LR + of 6.83 to this pre-test probability, the post-test probability is calculated to be around 6.5%. This means that despite the ST-segment elevation, there is approximately 93.5% chance that the patient is not experiencing an acute coronary occlusion (Fig.  1 ).

figure 1

The calculation of the post-test probability for an acute myocardial infarction in an asymptomatic patient exhibiting ST-segment elevation is illustrated using Fagan’s nomogram. The derived post-test probability is 6.5%, which is obtained by intersecting a pre-test probability of 1% with an LR + value of 6.83. The visual depiction underscores the importance of integrating likelihood ratios when enhancing diagnostic probabilities.

Alternatively, this post-test probability can be calculated through the following steps:

1. Convert pre-test probability to odds: 0.01 / (1 − 0.01) ≈ 0.0101

2. Multiply by the LR: 0.0101 × 6.83 ≈ 0.069

3. Convert post-test odds to probability: 0.069 / (1 + 0.069) ≈ 0.065, i.e., about 6.5%
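The three steps above can also be carried out directly, without the nomogram. This sketch (not from the article) reproduces the ECG example’s numbers: sensitivity 41%, specificity 94%, pre-test probability 1%.

```python
sensitivity, specificity = 0.41, 0.94
pre_test = 0.01

lr_pos = sensitivity / (1 - specificity)  # LR+ = 0.41 / 0.06, approx. 6.83
pre_odds = pre_test / (1 - pre_test)      # step 1: probability -> odds
post_odds = pre_odds * lr_pos             # step 2: multiply by the LR
post_test = post_odds / (1 + post_odds)   # step 3: odds -> probability

print(f"LR+ = {lr_pos:.2f}, post-test probability = {post_test:.1%}")
```

The result, about 6.5%, matches the value read off Fagan’s nomogram.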

NATURAL FREQUENCIES: ANOTHER WAY TO USE BAYESIAN REASONING

Another pertinent approach within Bayesian reasoning is the use of natural frequencies, a method that involves constructing a decision tree to visually represent how diagnostic tests interact with pre-existing probabilities, thereby enhancing our understanding of a patient’s health status. 11 , 12 The approach begins with the pre-test probability, which is derived from epidemiological data or the clinician’s evaluation of the likelihood of the disease prior to conducting the test. A decision tree then divides into “Diseased” and “Healthy” branches, which further subdivide True Positives, False Negatives, True Negatives, and False Positives in accordance with test outcomes (Fig.  2 ).

figure 2

By employing natural frequencies and starting with the base rate, a physician can more accurately determine the post-test probability of a positive test result being true. In this model, we consider a hypothetical cohort of 100 individuals who closely resemble the patient under investigation in terms of age, comorbidities, and symptoms. Based on the physician’s estimated pre-test probability of 1%, 1 individual in this cohort is assumed to have the disease. In a population consisting of 1 diseased and 99 healthy individuals, it becomes evident that the proportion of true positives among all positive test results is 6.4%. This value represents the post-test probability and signifies the likelihood that a positive test result is indeed accurate.

While natural frequencies can intuitively convey the probabilistic nature of test interpretations, their integration into clinical practice is not straightforward. Often, this methodology does not align with the typical cognitive framework of practitioners, leading to underutilization in actual patient care. Despite their potential to demystify complex statistical concepts, natural frequencies remain an underemployed strategy in the diagnostic process.
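The decision tree of Figure 2 can be computed as well as drawn. A minimal sketch, assuming the same illustrative numbers (1% pre-test probability, sensitivity 41%, specificity 94%) but a hypothetical cohort of 10,000 so the branch counts stay whole:

```python
cohort = 10_000
pre_test, sensitivity, specificity = 0.01, 0.41, 0.94

diseased = cohort * pre_test             # 100 people have the disease
healthy = cohort - diseased              # 9,900 do not

true_pos = diseased * sensitivity        # 41 diseased people test positive
false_pos = healthy * (1 - specificity)  # ~594 healthy people also test positive

post_test = true_pos / (true_pos + false_pos)
print(f"{post_test:.1%} of positive results are true positives")
```

This prints about 6.5%, agreeing (up to rounding) with the 6.4% quoted in the text for the 100-person cohort: the post-test probability is simply the share of true positives among all positives.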

The Case of Tests with Sensitivity + Specificity = 1

When the sum of sensitivity and specificity equals 1.0, an intriguing situation arises. Consider, for example, a test whose sensitivity is 5% and whose specificity is 95%. At first glance, the test’s high specificity may suggest that it can definitively rule in the disease. But this could not be further from the truth. For this test, the likelihood ratios work out as follows:

LR+ = 0.05 / (1 − 0.95) = 1.0
LR− = (1 − 0.05) / 0.95 = 1.0

Both the LR+ and the LR− equal 1.0, signifying that the test has no effect on the pre-test probability of the disease. Stated differently, if a patient’s pre-test probability is 10%, their post-test probability remains 10% regardless of whether the test result is positive or negative. Notwithstanding its notable specificity, the test is fundamentally ineffective at either confirming or excluding the disease.
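The arithmetic of this null case is easy to verify. A sketch (not from the article) using the values above, sensitivity 5%, specificity 95%, and a 10% pre-test probability:

```python
sensitivity, specificity = 0.05, 0.95
pre_test = 0.10

lr_pos = sensitivity / (1 - specificity)  # 0.05 / 0.05 = 1.0
lr_neg = (1 - sensitivity) / specificity  # 0.95 / 0.95 = 1.0

pre_odds = pre_test / (1 - pre_test)
post_pos = (pre_odds * lr_pos) / (1 + pre_odds * lr_pos)  # after a positive test
post_neg = (pre_odds * lr_neg) / (1 + pre_odds * lr_neg)  # after a negative test

print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
print(f"post-test probability: {post_pos:.0%} (positive), {post_neg:.0%} (negative)")
```

Either result leaves the probability at 10%: despite the seemingly reassuring 95% specificity, the test conveys no information.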

This phenomenon occurs when a test has no actual diagnostic power for the disease in question. Because the test result is uncorrelated with disease status, diseased and healthy individuals alike test positive or negative merely by chance. Consequently, the prevalence of the disease will naturally be similar among those who test positive and those who test negative.

Interestingly, the rarer the disease under study (which, again, has no actual correlation with the test), the more inflated the specificity will appear. This is because there will be more true negatives in the sample defined as healthy, artificially boosting the specificity (Fig.  3 ).

figure 3

Natural frequencies illustrate the case of a diagnostic test whose positivity rate is the same among healthy and diseased individuals, making it useless in clinical settings for ascertaining the presence or absence of disease. The issue of sensitivity and specificity adding up to 1.0 becomes apparent in such circumstances. Notwithstanding the test’s apparently high specificity (99%), its clinical utility is nil because its likelihood ratio is 1.0. This means that upon receiving a positive test result, the probability of disease presence is effectively multiplied by 1.0, and the post-test probability remains unchanged from the pre-test probability.

It is important to clarify that while likelihood ratios (LRs) are derived from sensitivity and specificity, they reframe this information in a manner that is more directly applicable to clinical decision-making. Emphasizing this point, it becomes evident that knowing the exact values of sensitivity and specificity is less critical than understanding how LRs should be interpreted. An LR of 1.0 means multiplying the odds by 1, essentially keeping them the same. In contrast, a specificity of 90% might seem appealing based on the “SpPin and SnNout” mnemonics, but if the sensitivity is only 10%, the test will not be useful. This conclusion is not obvious when analyzing sensitivity and specificity in isolation.

In synthesizing our findings, this article reaffirms the value of likelihood ratios (LR+ and LR−) in clinical practice, not simply as substitutes for traditional sensitivity and specificity, but as cognitively superior tools for diagnostic reasoning within a Bayesian framework. This assertion rests on the premise that while LRs are indeed derived from sensitivity and specificity, their utilization promotes a probabilistic mode of thinking that is not inherently elicited by sensitivity and specificity alone.

The central argument of this paper is that LRs facilitate a cognitive shift towards probabilistic reasoning, thereby enhancing the physician’s ability to calibrate diagnostic hypotheses more effectively. This shift is critical, as it moves beyond the raw metrics of test accuracy to encompass the nuances of clinical context and patient-specific probabilities. The definition of LRs themselves—quantifying how much a positive or negative test result shifts the odds of having a disease—inherently guides clinicians to consider the magnitude of change in disease probability, a conceptual leap that is less apparent when considering sensitivity and specificity in isolation.

Furthermore, while the construction of natural frequencies offers an alternative Bayesian approach, it is not as intuitively accessible as the straightforward calculation and interpretation of LRs. Therefore, we argue for the broader adoption of LRs as an essential component of a comprehensive diagnostic strategy, one that better navigates the complexities and uncertainties of medical practice. Through this lens, LRs are not merely mathematical derivatives but pivotal instruments that prompt clinicians to engage more deeply with the probabilistic nature of diagnosis and treatment decisions.

REFERENCES

1. Altman DG, Bland JM. Diagnostic tests. 1: Sensitivity and specificity. BMJ. 1994;308(6943):1552. https://doi.org/10.1136/bmj.308.6943.1552
2. Pewsner D, Battaglia M, Minder C, Marx A, Bucher HC, Egger M. Ruling a diagnosis in or out with “SpPIn” and “SnNOut”: a note of caution. BMJ. 2004;329(7459):209-213.
3. Naeger DM, Kohi MP, Webb EM, Phelps A, Ordovas KG, Newman TB. Correctly Using Sensitivity, Specificity, and Predictive Values in Clinical Practice: How to Avoid Three Common Pitfalls. Am J Roentgenol. 2013;200(6):W566-W570. https://doi.org/10.2214/AJR.12.9888
4. Moons KGM, Harrell FE. Sensitivity and specificity should be de-emphasized in diagnostic accuracy studies. Acad Radiol. 2003;10(6):670-672. https://doi.org/10.1016/s1076-6332(03)80087-9
5. Deeks JJ, Altman DG. Diagnostic tests 4: likelihood ratios. BMJ. 2004;329(7458):168-169. https://doi.org/10.1136/bmj.329.7458.168
6. Cahan A, Gilon D, Manor O, Paltiel O. Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM Int J Med. 2003;96(10):763-769. https://doi.org/10.1093/qjmed/hcg122
7. O’Sullivan ED, Schofield SJ. Cognitive bias in clinical medicine. J R Coll Physicians Edinb. 2018;48(3):225-232. https://doi.org/10.4997/JRCPE.2018.306
8. de Alencar Neto JN. Applying Bayesian reasoning to electrocardiogram interpretation. J Electrocardiol. Published online October 17, 2023. https://doi.org/10.1016/j.jelectrocard.2023.10.006
9. Meyers HP, Bracey A, Lee D, et al. Accuracy of OMI ECG findings versus STEMI criteria for diagnosis of acute coronary occlusion myocardial infarction. IJC Heart Vasc. 2021;33:100767. https://doi.org/10.1016/j.ijcha.2021.100767
10. Fagan TJ. Letter: Nomogram for Bayes theorem. N Engl J Med. 1975;293(5):257. https://doi.org/10.1056/NEJM197507312930513
11. Binder K, Krauss S, Schmidmaier R, Braun LT. Natural frequency trees improve diagnostic efficiency in Bayesian reasoning. Adv Health Sci Educ Theory Pract. 2021;26(3):847-863. https://doi.org/10.1007/s10459-020-10025-8
12. Gigerenzer G, Hoffrage U. How to improve Bayesian reasoning without instruction: Frequency formats. Psychol Rev. 1995;102(4):684-704. https://doi.org/10.1037/0033-295X.102.4.684

Author information

Authors and Affiliations

1. Instituto Dante Pazzanese de Cardiologia, São Paulo, Brazil (José Nunes de Alencar Neto, MD)
2. Universidade de Brasília, Brasília, Brazil (Leopoldo Santos-Neto, MD, PhD)

Corresponding author: José Nunes de Alencar Neto, MD.

Ethics declarations

Conflict of interest: The authors have no conflicts of interest to declare.


About this article

de Alencar Neto, J.N., Santos-Neto, L. The Post Hoc Pitfall: Rethinking Sensitivity and Specificity in Clinical Practice. J GEN INTERN MED (2024). https://doi.org/10.1007/s11606-024-08692-z

Received: 06 October 2023. Accepted: 20 February 2024. Published: 27 February 2024.


