Confirmation Bias In Psychology: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is currently pursuing a Master's Degree in Counseling for Mental Health and Wellness, which she began in September 2023. Julia's research has been published in peer-reviewed journals.

Learn about our Editorial Process

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Confirmation Bias is the tendency to look for information that supports, rather than rejects, one’s preconceptions, typically by interpreting evidence to confirm existing beliefs while rejecting or ignoring any conflicting data (American Psychological Association).

One of the earliest demonstrations of confirmation bias appeared in an experiment by Peter Wason (1960), in which participants had to discover the experimenter’s rule for sequencing numbers.

The results showed that participants chose responses that supported their hypotheses while rejecting contradictory evidence, and even though their hypotheses were incorrect, they quickly became confident in them (Gray, 2010, p. 356).

Though such evidence of confirmation bias has appeared in psychological literature throughout history, the term ‘confirmation bias’ was first used in a 1977 paper detailing an experimental study on the topic (Mynatt, Doherty, & Tweney, 1977).


Biased Search for Information

This type of confirmation bias describes people’s tendency to search for evidence in a one-sided way that supports their hypotheses or theories.

Experiments have shown that people devise tests and questions designed to yield “yes” if their favored hypothesis is true, while ignoring alternative hypotheses that would likely yield the same result.

This is also known as the congruence heuristic (Baron, 2000, pp. 162–164). Though the preference for affirmative questions itself may not be a bias, experiments have shown that congruence bias does exist.

For Example:

If you were to search “Are cats better than dogs?” on Google, the results would consist mostly of sites listing reasons why cats are better.

If you were instead to search “Are dogs better than cats?”, Google would mostly return sites arguing that dogs are better than cats.

This shows that phrasing questions in a one-sided (i.e., affirmative) manner helps you obtain evidence consistent with your hypothesis.

Biased Interpretation

This type of bias occurs when people interpret evidence in light of their existing beliefs, evaluating confirming evidence differently from evidence that challenges their preconceptions.

Various experiments have shown that people tend not to change their beliefs on complex issues even after being provided with research because of the way they interpret the evidence.

Additionally, people accept “confirming” evidence more easily and critically evaluate the “disconfirming” evidence (this is known as disconfirmation bias) (Taber & Lodge, 2006).

When provided with the same evidence, people’s interpretations could still be biased.

For example:

Biased interpretation was demonstrated in an experiment conducted at Stanford University on the topic of capital punishment. Participants included both supporters and opponents of capital punishment.

All subjects were provided with the same two studies.

After reading detailed descriptions of the studies, participants still held their initial beliefs, supporting their reasoning with “confirming” evidence from the studies and rejecting contradictory evidence or deeming it inferior to the “confirming” evidence (Lord, Ross, & Lepper, 1979).

Biased Memory

To confirm their current beliefs, people may remember or recall information selectively. Psychological theories vary in how they define memory bias.

Some theories state that information confirming prior beliefs is stored in memory while contradictory evidence is not (e.g., schema theory). Others claim that striking information is remembered best (e.g., the humor effect).

Memory confirmation bias also plays a role in stereotype maintenance. Experiments have shown that the mental association between expectancy-confirming information and the group label strongly affects recall and recognition memory.

Though a certain stereotype about a social group might not be true for an individual, people tend to remember the stereotype-consistent information better than any disconfirming evidence (Fyock & Stangor, 1994).

In one experimental study, participants were asked to read a woman’s profile (detailing both her extroverted and introverted behaviors) and assess her suitability for a job as either a librarian or a real-estate salesperson.

Those assessing her as a salesperson better recalled extroverted traits, while the other group recalled more examples of introversion (Snyder & Cantor, 1979).

These experiments, along with others, have offered insight into selective memory and provided evidence for biased memory, showing that people search for and better remember confirming evidence.


Social Media

The information we are presented with on social media reflects not only what users want to see but also the designers’ beliefs and values. Today, people are exposed to an overwhelming number of news sources, each varying in credibility.

To form conclusions, people tend to read news that aligns with their perspectives. For instance, news channels present information (even the same news) differently from one another on complex issues (e.g., racism, political parties), with some using sensational headlines, pictures, and one-sided information.

Because of this biased coverage, people rely on only certain channels or sites for their information and thus draw biased conclusions.

Religious Faith

People also tend to search for and interpret evidence with respect to their religious beliefs (if any).

For instance, on topics such as abortion and transgender rights, people whose religions oppose these practices will interpret the information differently than others and will look for evidence that validates what they believe.

Similarly, those who religiously reject the theory of evolution will either gather information disproving evolution or hold no official stance on the topic.

Also, irreligious people might perceive events that religious people consider “miracles” or “tests of faith” as reinforcement of their lack of faith in a religion.

When Does Confirmation Bias Occur?

There are several explanations for why humans exhibit confirmation bias, including that the tendency is an efficient way to process information, that it protects self-esteem, and that it minimizes cognitive dissonance.

Information Processing

Confirmation bias serves as an efficient way to process information because of the limitless information humans are exposed to.

To form an unbiased decision, one would have to critically evaluate every piece of information available, which is unfeasible. Therefore, people tend to look only for the information needed to reach their desired conclusions (Casad, 2019).

Protect Self-esteem

People are susceptible to confirmation bias to protect their self-esteem (to know that their beliefs are accurate).

To make themselves feel confident, they tend to look for information that supports their existing beliefs (Casad, 2019).

Minimize Cognitive Dissonance

Cognitive dissonance also explains why confirmation bias is adaptive.

Cognitive dissonance is the mental conflict that occurs when a person holds two contradictory beliefs, causing psychological stress or unease.

To minimize this dissonance, people resort to confirmation bias, avoiding information that contradicts their views and seeking evidence that confirms their beliefs.

Challenge avoidance and reinforcement seeking affect people’s thoughts and reactions differently, since exposure to disconfirming information produces negative emotions that are absent when seeking reinforcing evidence (“The Confirmation Bias: Why People See What They Want to See”).

Implications

Confirmation bias consistently shapes the way we look for and interpret information, influencing our decisions in contexts ranging from the home to global platforms. This bias prevents people from gathering information objectively.

During election campaigns, people tend to look for information confirming their perspectives on different candidates while ignoring information that contradicts their views.

This subjective way of obtaining information can lead to overconfidence in a candidate and the misinterpretation or overlooking of important information, thus influencing voting decisions and, ultimately, the country’s leadership (Cherry, 2020).

Recruitment and Selection

Confirmation bias also affects employment diversity because preconceived ideas about different social groups can introduce discrimination (though it might be unconscious) and impact the recruitment process (Agarwal, 2018).

Existing beliefs that certain groups are more competent than others are part of the reason why particular races and genders remain overrepresented in companies today. This bias can hamper a company’s attempts to diversify its workforce.

Mitigating Confirmation Bias

Change in intrapersonal thought:

To avoid being susceptible to confirmation bias, start by questioning your research methods and the sources you use to obtain information.

Expanding the types of sources you consult can reveal different aspects of a topic and expose you to sources of varying credibility.

  • Read entire articles rather than forming conclusions based on headlines and pictures.
  • Search for credible evidence presented in the article.
  • Analyze whether the statements asserted are backed by trustworthy evidence (tracking the source of evidence can establish its credibility).
  • Encourage yourself and others to gather information consciously.

Alternative hypothesis:

Confirmation bias occurs when people look for information that confirms their beliefs or hypotheses, but this bias can be reduced by taking into account alternative hypotheses and their consequences.

Considering the possibility of beliefs/hypotheses other than one’s own could help you gather information in a more dynamic manner (rather than a one-sided way).

Related Cognitive Biases

Many cognitive biases can be characterized as subtypes of confirmation bias. Two of these subtypes are described below:

Backfire Effect

The backfire effect occurs when people’s preexisting beliefs strengthen when challenged by contradictory evidence (Silverman, 2011). Attempting to disprove a misconception can therefore actually strengthen a person’s belief in that misconception.

One piece of disconfirming evidence does not change people’s views, but a constant flow of credible refutations could correct misinformation/misconceptions.

This effect is considered a subtype of confirmation bias because it explains people’s reactions to new information based on their preexisting hypotheses.

A study by Brendan Nyhan and Jason Reifler (two researchers on political misinformation) explored the effects of different types of statements on people’s beliefs.

While examining the two statements “I am not a Muslim, Obama says.” and “I am a Christian, Obama says,” they concluded that the latter was more persuasive and produced changes in belief, showing that affirmative statements are more effective at correcting false views (Silverman, 2011).

Halo Effect

The halo effect occurs when people use impressions from a single trait to form conclusions about other unrelated attributes. It is heavily influenced by the first impression.

Research on this effect was pioneered by the American psychologist Edward Thorndike who, in 1920, described how officers rated their soldiers on different traits based on first impressions (Neugaard, 2019).

Experiments have shown that when positive attributes are presented first, a person is judged more favorably than when negative traits are shown first. This is a subtype of confirmation bias because it allows us to structure our thinking about other information using only initial evidence.

Learning Check

When does confirmation bias occur?

  A. When an individual only researches information that is consistent with personal beliefs.
  B. When an individual only makes a decision after all perspectives have been evaluated.
  C. When an individual becomes more confident in one’s judgments after researching alternative perspectives.
  D. When an individual believes that the odds of an event occurring increase if the event hasn’t occurred recently.

The correct answer is A. Confirmation bias occurs when an individual only researches information consistent with personal beliefs. This bias leads people to favor information that confirms their preconceptions or hypotheses, regardless of whether the information is true.

Take-home Messages

  • Confirmation bias is the tendency of people to favor information that confirms their existing beliefs or hypotheses.
  • Confirmation bias happens when a person gives more weight to evidence that confirms their beliefs and undervalues evidence that could disprove it.
  • People display this bias when they gather or recall information selectively or when they interpret it in a biased way.
  • The effect is stronger for emotionally charged issues and for deeply entrenched beliefs.

Agarwal, P. (2018, October 19). Here is how bias can affect recruitment in your organisation. Forbes. https://www.forbes.com/sites/pragyaagarwaleurope/2018/10/19/how-can-bias-during-interviewsaffect-recruitment-in-your-organisation

American Psychological Association. (n.d.). Confirmation bias. In APA Dictionary of Psychology. https://dictionary.apa.org/confirmation-bias

Baron, J. (2000). Thinking and deciding (3rd ed.). Cambridge University Press.

Casad, B. (2019, October 9). Confirmation bias. Encyclopedia Britannica. https://www.britannica.com/science/confirmation-bias

Cherry, K. (2020, February 19). Why do we favor information that confirms our existing beliefs? Verywell Mind. https://www.verywellmind.com/what-is-a-confirmation-bias-2795024

Fyock, J., & Stangor, C. (1994). The role of memory biases in stereotype maintenance. British Journal of Social Psychology, 33(3), 331–343.

Gray, P. O. (2010). Psychology. Worth Publishers.

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.

Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29(1), 85–95.

Neugaard, B. (2019, October 9). Halo effect. Encyclopedia Britannica. https://www.britannica.com/science/halo-effect

Silverman, C. (2011, June 17). The backfire effect. Columbia Journalism Review. https://archives.cjr.org/behind_the_news/the_backfire_effect.php

Snyder, M., & Cantor, N. (1979). Testing hypotheses about other people: The use of historical knowledge. Journal of Experimental Social Psychology, 15(4), 330–342.

Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.

Further Information

  • What Is Confirmation Bias and When Do People Actually Have It?
  • Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
  • The importance of making assumptions: why confirmation is not necessarily a bias
  • Decision Making Is Caused By Information Processing And Emotion: A Synthesis Of Two Approaches To Explain The Phenomenon Of Confirmation Bias

Confirmation bias occurs when individuals selectively collect, interpret, or remember information that confirms their existing beliefs or ideas, while ignoring or discounting evidence that contradicts these beliefs.

This bias can happen unconsciously and can influence decision-making and reasoning in various contexts, such as research, politics, or everyday decision-making.

What is confirmation bias in psychology?

Confirmation bias in psychology is the tendency to favor information that confirms existing beliefs or values. People exhibiting this bias are likely to seek out, interpret, remember, and give more weight to evidence that supports their views, while ignoring, dismissing, or undervaluing the relevance of evidence that contradicts them.

This can lead to faulty decision-making because one-sided information doesn’t provide a full picture.



Types of Bias in Research | Definition & Examples

Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection, data analysis, interpretation, or publication. Research bias can occur in both qualitative and quantitative research.

Understanding research bias is important for several reasons.

  • Bias exists in all research, across research designs, and is difficult to eliminate.
  • Bias can occur at any stage of the research process.
  • Bias impacts the validity and reliability of your findings, leading to misinterpretation of data.

It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimize them.

For example, consider a weight-loss program: the measured success rate will likely be affected if participants start to drop out (attrition). Participants who become disillusioned due to not losing weight may drop out, while those who succeed in losing weight are more likely to continue. This in turn may bias the findings towards more favorable results.
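To see how attrition can inflate an observed success rate, here is a minimal simulation sketch. The true success rate and the dropout probabilities below are invented for illustration, not taken from any real study:

```python
import random

random.seed(42)

# Hypothetical weight-loss program: assume a true success rate of 40%.
TRUE_SUCCESS_RATE = 0.40
N = 1000
participants = [random.random() < TRUE_SUCCESS_RATE for _ in range(N)]

# Assume participants who are not losing weight drop out far more often.
P_DROP_IF_FAILING = 0.50
P_DROP_IF_SUCCEEDING = 0.05

completers = [
    succeeded for succeeded in participants
    if random.random() >= (P_DROP_IF_SUCCEEDING if succeeded else P_DROP_IF_FAILING)
]

# The success rate among completers overstates the true rate (~56% vs. 40%).
print(f"True success rate: {TRUE_SUCCESS_RATE:.0%}")
print(f"Observed rate among completers: {sum(completers) / len(completers):.0%}")
```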

Table of contents

  • Information bias
  • Interviewer bias
  • Publication bias
  • Researcher bias
  • Response bias
  • Selection bias
  • Cognitive bias
  • How to avoid bias in research
  • Other types of research bias
  • Frequently asked questions about research bias

Information bias, also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

For example, suppose you are studying the physical effects of smartphone use. Over a period of four weeks, you ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

For example, suppose you are investigating whether a particular childhood diet is associated with a later diagnosis of childhood cancer. You ask parents to recall their children’s eating habits, comparing:

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Observer bias is the tendency of researchers to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgment (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using double-blinded and single-blinded research methods.

Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favor phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct semi-structured interviews with medical staff to clarify the observed events.

Note: Observer bias and actor–observer bias are not the same thing.

Performance bias is unequal care between study groups. Performance bias occurs mainly in medical research experiments, if participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding, which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behavior because they are aware they are being studied, this is called the Hawthorne effect (or observer effect). Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behavior in an effort to compensate for their perceived disadvantage.

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the center of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean. Suppose you recruit participants with unusually severe symptoms and give them a new treatment.

Improvement on measured post-treatment indicators, such as reduced severity of depressive episodes, could lead you to think that the intervention was effective.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
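A small simulation makes RTM concrete. With purely illustrative numbers, a group selected for extreme scores "improves" on remeasurement even though no treatment is applied at all:

```python
import random

random.seed(0)

# Observed score = stable trait + independent measurement noise.
N = 10_000
trait = [random.gauss(0, 1) for _ in range(N)]
time1 = [t + random.gauss(0, 1) for t in trait]
time2 = [t + random.gauss(0, 1) for t in trait]  # no intervention effect

# "Recruit" the 5% of people with the most extreme time-1 scores.
cutoff = sorted(time1)[int(0.95 * N)]
selected = [i for i in range(N) if time1[i] >= cutoff]

mean1 = sum(time1[i] for i in selected) / len(selected)
mean2 = sum(time2[i] for i in selected) / len(selected)
print(f"Selected group at time 1: {mean1:.2f}")
print(f"Selected group at time 2: {mean2:.2f}")  # closer to the mean of 0
```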

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

For example, imagine the following exchange during an interview:

Participant: “I like to solve puzzles, or sometimes do some gardening.”

You: “I love gardening, too!”

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant, or favoring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p-hacking), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to only submit statistically significant results. P-hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
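The inflation of false positives from optional stopping can be demonstrated with a short simulation. In this sketch (the batch size and the number of "peeks" are arbitrary assumptions), both groups are drawn from the same distribution, so every "significant" result is a false positive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(max_batches=10, batch_size=10):
    """Collect data in batches, testing after each batch and stopping
    as soon as p < 0.05 (optional stopping). Both groups come from the
    same distribution, so a "significant" result is a false positive."""
    a, b = [], []
    for _ in range(max_batches):
        a.extend(rng.normal(0, 1, batch_size))
        b.extend(rng.normal(0, 1, batch_size))
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True  # stop and "publish"
    return False

rate = sum(one_study() for _ in range(1000)) / 1000
print(f"False positive rate with optional stopping: {rate:.1%}")  # well above 5%
```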

Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions).

The unconscious form of researcher bias is associated with the Pygmalion effect (or Rosenthal effect), where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs.

For example, unconscious researcher bias can show up in how interview questions are phrased:

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews.

This happens because when people are asked a question (e.g., during an interview), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

For example, while interviewing a student, you ask them:

“Do you think it’s okay to cheat on an exam?”

Regardless of their true opinion, the student is likely to say no, because admitting otherwise would be socially undesirable.

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question-order bias
  • Extreme responding

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like “agree/disagree,” “yes/no,” or “true/false.” Acquiescence is sometimes referred to as “yea-saying.”

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Q: Are you a social person?

  • Yes
  • No

People who are inclined to agree with statements presented to them are at risk of selecting the first option (“Yes”), even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  • A quiet night in
  • A night out with friends

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviors or views. Ensuring that participants are not aware of the research objectives is the best way to avoid this type of bias.

For example, suppose you interview patients at several points in the weeks following an operation, asking about their pain levels. On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behavior.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, there was widespread enthusiasm for the idea.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Question order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect), which can lead to systematic distortion of the responses.

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales , and it distorts people’s true attitudes and opinions.

Disposition towards the survey can be a source of extreme responding, as well as cultural components. For example, people coming from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling or ascertainment bias
  • Attrition bias
  • Self-selection (or volunteer) bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method. This way, each member of the population you are studying has an equal chance of being included in your sample.

Sampling bias is often referred to as ascertainment bias in the medical field.
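A toy sketch of the difference between a convenience sample and a simple random sample. The population and income figures are fabricated so that the attribute of interest correlates with position in the sampling frame:

```python
import random

random.seed(7)

# Fabricated population of 10,000 people, listed in an order that
# correlates with income (e.g., a frame sorted by neighborhood).
population = [20_000 + i * 5 for i in range(10_000)]  # incomes

def mean(xs):
    return sum(xs) / len(xs)

convenience = population[:500]        # first 500 on the list
srs = random.sample(population, 500)  # equal chance for everyone

print(f"Population mean income:  {mean(population):,.0f}")
print(f"Convenience sample mean: {mean(convenience):,.0f}")  # badly biased
print(f"Random sample mean:      {mean(srs):,.0f}")          # close to truth
```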

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey, three surveys during the program, and a posttest survey.

Self-selection or volunteer bias

Self-selection bias (also called volunteer bias ) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment —i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.

Closely related to volunteer bias is nonresponse bias , which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

For example, suppose you recruit volunteers for a health study at a local hospital. Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process—focusing on “survivors” and forgetting those who went through a similar process and did not survive.

Note that “survival” does not always mean that participants died! Rather, it signifies that participants did not successfully complete the intervention.

For example, it is tempting to look at famous entrepreneurs who dropped out of college and conclude that dropping out leads to success. However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending participants reminders to complete the survey.

For example, suppose you survey the residents of a neighborhood in person. You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.

Cognitive bias refers to a set of predictable (i.e., nonrandom) errors in thinking that arise from our limited ability to process information objectively. Rather, our judgment is influenced by our values, memories, and other personal traits. These create “ mental shortcuts” that help us process information intuitively and decide faster. However, cognitive bias can also cause us to misunderstand or misinterpret situations, information, or other people.

Because of cognitive bias, people often perceive events to be more predictable after they happen (this is known as hindsight bias).

Although there is no general agreement on how many types of cognitive bias exist, some common types are:

  • Anchoring bias  
  • Framing effect  
  • Actor-observer bias
  • Availability heuristic (or availability bias)
  • Confirmation bias  
  • Halo effect
  • The Baader-Meinhof phenomenon  

Anchoring bias

Anchoring bias is people’s tendency to fixate on the first piece of information they receive, especially when it concerns numbers. This piece of information becomes a reference point or anchor. Because of that, people base all subsequent decisions on this anchor. For example, initial offers have a stronger influence on the outcome of negotiations than subsequent ones.

Framing effect

Framing effect refers to our tendency to decide based on how the information about the decision is presented to us. In other words, our response depends on whether the option is presented in a negative or positive light, e.g., gain or loss, reward or punishment, etc. This means that the same information can be more or less attractive depending on the wording or what features are highlighted.

Actor–observer bias

Actor–observer bias occurs when you attribute the behavior of others to internal factors, like skill or personality, but attribute your own behavior to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behavior of others, you are more likely to associate behavior with their personality, nature, or temperament.

One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the highway, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

Availability heuristic

Availability heuristic (or availability bias) describes the tendency to evaluate a topic using the information we can quickly recall to our mind, i.e., that is available to us. However, this is not necessarily the best information, rather it’s the most vivid or recent. Even so, due to this mental shortcut, we tend to think that what we can recall must be right and ignore any other information.

Confirmation bias

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasize findings that “prove” that your lived experience is the case for most families, neglecting other explanations and experiences.

Halo effect

The halo effect refers to situations whereby our general impression about a person, a brand, or a product is shaped by a single trait. It happens, for instance, when we automatically make positive assumptions about people based on something positive we notice, while in reality, we know little about them.

The Baader-Meinhof phenomenon

The Baader-Meinhof phenomenon (or frequency illusion) occurs when something that you recently learned seems to appear “everywhere” soon after it was first brought to your attention. However, this is not the case. What has increased is your awareness of something, such as a new word or an old song you never knew existed, not their frequency.

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies, make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies.
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgmental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview, paying special attention to any influence you may have had on participants. You can include these in your final analysis.
Other types of research bias include:

  • Baader–Meinhof phenomenon
  • Sampling bias
  • Ascertainment bias
  • Self-selection bias
  • Hawthorne effect
  • Omitted variable bias
  • Pygmalion effect
  • Placebo effect

Research bias affects the validity and reliability of your research findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify others’ behavior and external factors (difficult circumstances) to justify the same behavior in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen because people are either not willing or not able to participate.




Open access | Published: 20 August 2021

Implicit and explicit learning of Bayesian priors differently impacts bias during perceptual decision-making

V. N. Thakur, M. A. Basso, J. Ditterich & B. J. Knowlton

Scientific Reports, volume 11, Article number: 16932 (2021)

Subjects: Human behaviour, Neuroscience

Knowledge without awareness, or implicit knowledge, influences a variety of behaviors. It is unknown, however, whether implicit knowledge of statistical structure informs visual perceptual decisions or whether explicit knowledge of statistical probabilities is required. Here, we measured visual decision-making performance using a novel task in which humans reported the orientation of two differently colored translational Glass patterns, with each color associated with different orientation probabilities. The task design allowed us to assess participants’ ability to learn and use a general orientation prior as well as a color-specific feature prior. Classifying decision-makers based on a questionnaire revealed that both implicit and explicit learners implemented a general orientation bias by adjusting the starting point of evidence accumulation in the drift diffusion model framework. Explicit learners additionally adjusted the drift rate offset. When subjects implemented a stimulus-specific bias, they did so by adjusting primarily the drift rate offset. We conclude that humans can learn priors implicitly for perceptual decision-making and, depending on awareness, implement the priors using different mechanisms.


Introduction

Regularities in the environment can be learned implicitly and can bias judgments, preferences, or fluency of movement [1,2,3,4,5,6,7]. For example, the statistical regularities of one’s native language are readily acquired by infants [8]. Humans are also able to learn to use contextual information to search for the location of a target in a display when the location is correlated with contextual features. Contextual cuing can occur without awareness of the correlation [9]. Implicit learning of regularities in finite-state artificial grammars also occurs through exposure to exemplars formed according to the grammar. After such exposure, participants classify new items as grammatical despite lacking awareness of the grammatical rules [5,10,11,12]. Perceptuo-motor behavior is also influenced by implicitly learned statistical regularities [13,14].

Here, we test the hypothesis that implicitly acquired base-rate priors can influence orientation judgments. It is currently unclear whether implicit knowledge can become integrated with the diagnostic information present to influence a judgment. In some circumstances, base rates, or Bayesian priors, influence judgments [15,16,17,18,19,20,21,22,23]. For example, a cloudy sky in Seattle might lead one to grab an umbrella when leaving home, while the same sky would not lead to this decision in Los Angeles. Much of our knowledge of the regularities of the environment is learned without awareness, and thus it seems adaptive that such knowledge should contribute to judgments. On the other hand, such knowledge may not integrate readily with perceptual decisions that are based on sensory information in awareness, such that only explicit knowledge of priors would be able to influence judgments.
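The umbrella example is just Bayes’ rule with different base rates. The sketch below works through it with invented numbers; the conditional probabilities of a cloudy sky, and the helper p_rain_given_clouds, are illustrative assumptions only:

```python
# P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
# All probabilities below are invented for illustration.

def p_rain_given_clouds(p_rain, p_clouds_if_rain=0.9, p_clouds_if_dry=0.3):
    # Total probability of clouds, then Bayes' rule.
    p_clouds = p_clouds_if_rain * p_rain + p_clouds_if_dry * (1 - p_rain)
    return p_clouds_if_rain * p_rain / p_clouds

# Identical sensory evidence (a cloudy sky), different priors:
print(f"Seattle, P(rain) = 0.40: {p_rain_given_clouds(0.40):.2f}")      # ~0.67
print(f"Los Angeles, P(rain) = 0.05: {p_rain_given_clouds(0.05):.2f}")  # ~0.14
```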

If implicit knowledge of priors influences perceptual judgments, it may be that this knowledge affects judgments differently than explicit knowledge of priors. To compare decision-making under these different conditions, we applied the Drift Diffusion Model (DDM) to behavioral data and assessed how model parameters shift when subjects are aware of priors compared to when priors are implicitly learned. The DDM has been used extensively to effectively explain psychometric and neural data from perceptual choice tasks [22,24,25,26].

Participants made judgments about the orientation (45° left or right) of a dynamic Glass pattern stimulus [27]. We parameterized the difficulty of the perceptual judgment on each trial by varying the dot pair correlations, referred to as coherence, ranging from 0 to 100%, with 0% having no dot pair correlations and thus no orientation signal, and 100% having all dot pairs correlated and thus the strongest orientation signal. On each trial, after the appearance of a centrally positioned fixation spot for a fixation time of 1000 ± 200 ms, two choice targets appeared, one on the left and the other on the right of the screen, to orient the participants. Then, a Glass pattern stimulus appeared and the fixation spot disappeared, and the participants reported their perception with a key press: either ‘o’ for left or ‘p’ for right. If the participant chose correctly, an audible tone provided feedback. Incorrect choices received no tone, and the 0% coherence trials received feedback consistent with the base-rate prior for that condition (e.g., on 50% of the trials in the condition in which the two orientations were equally likely; Fig. 1a).

Figure 1

Schematics of the task design and the drift diffusion model (DDM). (a) The black squares show the screen that the participants viewed and illustrate the spatial and temporal arrangements of the task. The white circle in the center of the screen shows the fixation spot and the two additional white circles indicate the two possible choices, left or right. The fixation spot and the choice targets appeared sequentially with a delay of 1000 ± 200 ms between. Next the Glass pattern appeared and the fixation spot disappeared simultaneously, cueing the participant to report the perceived orientation with a key press (‘o’ for left and ‘p’ for right). A tone, indicated by the audio symbol, provided feedback only for correct choices. (b) Experiment 1: Khaki and lilac Glass patterns (100% coherence, 45° negative and positive orientations) illustrate the prior manipulation. The negative orientation occurred for the khaki Glass pattern on 25% of the trials and the positive orientation occurred on the remaining 75% of the khaki Glass pattern trials (Positive Prior). For the lilac Glass pattern stimuli, the negative and positive orientations occurred with equal probability (Equal Prior). All trial types were interleaved randomly, and we counterbalanced the color and direction of the Glass pattern stimuli across participants and eventually converted them to positive and negative orientations for analysis purposes. (c) Experiment 2: The negative orientation occurred on 25% of the khaki Glass pattern trials and the positive orientation occurred on the remaining 75% (Positive Prior). In the same experimental block, the negative orientation occurred on 75% of the orange Glass pattern trials and the positive orientation occurred on the remaining 25% (Negative Prior). All trial types were randomly interleaved, and we counterbalanced the color and direction of the Glass pattern stimuli across participants and eventually converted them to positive and negative orientations for analysis purposes. See text and “Methods” section for further details. (d) The DDM of perceptual decision making proposes that sensory evidence is accumulated over time until a bound is reached, here labeled Positive and Negative. The solid khaki arrow shows the drift rate of evidence accumulation, labeled drift rate, for more frequent orientations, and the dashed khaki arrow shows the same for less frequent orientations. The blue line shows the noisy evidence accumulation for an individual trial. In the absence of a bias, evidence for either the positive or the negative decision begins to accumulate at a point that is equidistant from the two bounds, referred to as the starting point of evidence accumulation (khaki arrow). Time is indicated along the horizontal axis. (e) Same as in (d) except the accumulation process begins at a point closer to the Positive decision bound (shown by dashed black arrow). (f) Same as in (d) except with an increased drift rate offset for positive decisions (shown by dashed black arrow).

The critical aspect of this task was that each Glass pattern stimulus appeared in one of two different colors (red or green) and, for each participant, the probability of the left or right orientation for each color differed. Thus, stimulus color provided information that could contribute to the perceptual decision about orientation, and in this sense knowledge of the color–orientation relationship acts as a Bayesian prior [25,28]. The stimulus color and prior probabilities were counterbalanced across participants, hence we converted the orientations to positive (right) and negative (left) orientations. In Experiment 1, referred to as the 75–50 condition, one colored Glass pattern had a positive orientation prior and the other had equal orientation priors (Fig. 1b). Similar to Experiment 1, Experiment 2 also used two colored Glass patterns with different prior probabilities (Fig. 1c). In Experiment 2, each color was associated with either a positive or a negative prior of equal strength, so there was no base-rate difference between the two orientations across all stimuli. Previous studies on perceptual decision-making have not directly examined whether knowledge of priors was explicit or implicit. Here, we were able to classify participants’ knowledge of the priors based on their responses to a questionnaire given after the session, and compare their performance to that of participants who were informed of the priors at the onset of the task.

To characterize differences based on the type of knowledge of the priors, we examined performance using the DDM. The DDM considers a process in which the net evidence (the difference in evidence for or against a particular outcome) accumulates over time. When the accumulated evidence reaches a level associated with one or the other bound, or threshold, a decision is made (Fig. 1d). In this framework, priors can create biases in decision-making either by shifting the starting point from which evidence begins to accumulate closer to the bound associated with the more frequent alternative 17,19,29,30 (Fig. 1e), by increasing the rate of evidence accumulation for the more frequent alternative 1,23,24,31 (Fig. 1f), or both 21,25,32. We modeled the data obtained from Experiments 1 and 2 using a simple DDM (see "Methods" section) to assess the mechanisms by which implicit and explicit knowledge of priors create perceptual decision biases.
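To make the two bias mechanisms concrete, here is a minimal simulation sketch, not the authors' code: it uses a fixed bound rather than the collapsing bound described in the "Methods" section, and all parameter values are illustrative assumptions.

```python
# Minimal DDM sketch of Fig. 1d-f: noisy evidence accumulates at a drift rate
# until it crosses a positive or negative bound. Fixed (non-collapsing) bound
# and all parameter values are illustrative assumptions, not fitted estimates.
import numpy as np

def simulate_trial(coherence, k=1.0, start=0.0, drift_offset=0.0,
                   bound=1.0, noise_sd=1.0, dt=0.001, max_t=3.0, rng=None):
    """Return (choice, decision time); choice is +1 (positive) or -1 (negative)."""
    rng = np.random.default_rng() if rng is None else rng
    drift = k * coherence + drift_offset   # drift rate scales with signed coherence
    x, t = start, 0.0                      # starting point of accumulation
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= 0 else -1), t

# No bias (Fig. 1d), starting point bias (Fig. 1e), drift rate offset bias (Fig. 1f),
# all probed at 0% coherence, where any departure from 0.5 reflects pure bias.
rng = np.random.default_rng(1)
for label, kwargs in [("no bias", {}),
                      ("starting point offset", {"start": 0.2}),
                      ("drift rate offset", {"drift_offset": 0.2})]:
    choices = [simulate_trial(0.0, rng=rng, **kwargs)[0] for _ in range(2000)]
    print(f"{label}: P(positive) = {np.mean(np.array(choices) > 0):.3f}")
```

Both offsets push the 0%-coherence choice proportion above 0.5, but, as noted in the "Methods" section, they make different predictions about reaction times, which is what allows the model fits to distinguish them.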

Experiment 1—participants can learn priors implicitly

Data from 67 participants were analyzed in Experiment 1, all with informed consent using procedures approved by the UCLA IRB. Based on responses to the questionnaire, participants were divided into three groups. Participants were classified as "Implicit Learners" if they reported no awareness that the different colors were associated with different orientation base rates and perceived the orientations to occur with equal frequency (N = 23, 34%). Participants were classified as "Partially Explicit Learners" if they reported an overall difference in orientation frequency but were unaware that this differed by color, or could identify the orientation prior for only one color (N = 20, 30%). The remaining participants (N = 24, 36%), classified as "Explicit Learners", were informed of the priors for the different colored stimuli at the outset of the session, and all of them reported the priors correctly at the end of the session.

All participants performed the task well, using the orientation information to guide their choices (Fig. 2a–c). For non-zero coherence stimuli, percent correct was 83.14%, 81.16%, and 79.54% for the Implicit, Partially Explicit, and Explicit groups, respectively. Accuracy was highest in the Implicit Learners and significantly greater than in the Explicit Learners: t(45) = 2.55, p = 0.014. All participants also showed a bias toward making more positive choices. We calculated the bias by measuring the proportion of choices made at 0% coherence. For the Implicit Learners, the mean proportion of positive choices on the positive prior trials was 0.575 (Fig. 2a; difference from chance: t(22) = 31.88, p < 0.001). Furthermore, there was a 4.2% increase in positive choices relative to chance even on the equal prior trials (Fig. 2a, 0.542; difference from chance: t(22) = 25.25, p < 0.001). The bias did not differ significantly between the two conditions (t(22) = 1.008, p = 0.289). The Partially Explicit Learners showed a similar pattern of performance and use of priors (Fig. 2b, mean proportion = 0.622; t(19) = 24.09, p < 0.001). On the equal prior trials, the Partially Explicit Learners also showed an increased tendency to choose the positive orientation (Fig. 2b, mean proportion = 0.551; t(19) = 20.25, p < 0.001), which was lower than on the positive prior trials (t(19) = 2.274, p < 0.05). Thus, implicit learning of both the general prior (Implicit group) and the color-specific priors (Partially Explicit group) occurred in participants who were not informed of the base-rate differences. The mean bias for the positive prior trials in the Explicit group was 0.645, significantly above chance (Fig. 2c, t(23) = 28.08, p < 0.001). The Explicit Learners also showed a bias in the equal prior condition that was significantly different from chance (Fig. 2c, 0.591; t(23) = 18.47, p < 0.001) but significantly lower than in the positive prior condition (t(23) = 2.316, p < 0.05). The Partially Explicit and Explicit groups performed similarly, with no significant difference in the ability to apply the priors associated with the two colors (p = 0.889), even though participants in the Partially Explicit group had minimal awareness of how stimulus color was associated with orientation probability.

Figure 2

Participants learn multiple priors implicitly and explicitly. (a) Proportion of choices plotted against Glass pattern coherence for 23 participants. Filled lilac circles and error bars show the mean and SE for each coherence across all participants who reported no difference in orientation frequency (Implicit Learners) for the equal prior condition (50% positive and 50% negative). The lilac solid lines show the best-fit logistic function (see "Methods" section). The khaki filled circles and lines show the same for the condition in which the more frequent orientation was positive, referred to as the positive prior condition (75% positive and 25% negative). The inset shows the bias parameter for equal (lilac) and positive (khaki) priors, represented as the deviation of the proportion of positive choices at 0% coherence from the unbiased value (0.5). (b) Same as in (a) for the participants who reported a difference in orientation frequency regardless of color or for only one color (Partially Explicit Learners; n = 20). (c) Same as in (a) for the participants who were informed of the prior conditions (Explicit Learners; n = 24). (d) Parameter estimates from the DDM fits for the equal prior condition (lilac) and the positive prior condition (khaki) for all Implicit Learners. Diagonal hatched bars show the mean and 95% confidence intervals (CI) of the starting point offset, and horizontal hatched bars show the mean and 95% CI of the drift-rate offset (in equivalent % coherence). (e) Same as in (d) for the Partially Explicit Learners. (f) Same as in (d) for the Explicit Learners.

Given previous work demonstrating that a correct choice on the previous trial can bias the decision on the current trial (Urai et al. 23), we examined whether such "history biases" were present in the current data and whether they could account for the effects of priors we observed. For each group, we analyzed choices on zero-coherence trials to determine whether they differed according to the direction of the previous correct choice. We found no evidence of history biases in any of the groups (t's < 1.6, p's > 0.12). Thus, it appears unlikely that the observed biases were driven by the choice on the immediately preceding trial rather than by learning across trials.

Implicit and explicit learners use different mechanisms to apply decision biases

Using the DDM, we previously observed that healthy participants adjust both the starting point of evidence accumulation and the drift rate offset to express decision-making biases, and that it was the change in the drift rate offset that provided feature specificity to the bias 25. However, in that study we did not know whether the healthy participants were aware of the feature-prior association, or whether awareness would be associated with the use of different mechanisms to implement the bias. Here, we assessed whether the mechanism underlying the expression of the bias differed among the three types of learners: implicit, partially explicit, and explicit.

We fitted two feature-specific DDMs independently to the second half of the experimental session data (600 trials), allowing only the starting point and the drift rate offset to vary between the two features (Table S1). Figure 2d–f shows the mean parameter estimates and confidence intervals for the starting point and drift rate offsets for both prior conditions. For the Implicit Learners, the starting point shifted toward the more frequent orientation for both the equal and the positive prior trials, and the shift was similar for the two conditions (Equal Prior estimate: 7.6%, S.E. 0.7%; Positive Prior estimate: 5.0%, S.E. 0.7%). The change in starting point was numerically larger for the equal prior condition than for the positive prior condition, but this was counteracted by an opposite change in the drift rate offset on equal prior trials (estimate: −2.1%, S.E. 0.5%). The drift rate offset for the positive prior condition was in the direction of the more frequent orientation but small in magnitude (estimate: 0.7%, S.E. 0.5%). In summary, for the Implicit Learners, there was a significant increase in starting point for both prior conditions, accompanied by a slight decrease in drift rate offset when priors were equal, leading to a decision-making bias that was undifferentiated by stimulus color.

For the Partially Explicit Learners, we observed similar changes in the starting point of evidence accumulation and quantitatively larger changes in the drift rate offset compared to the Implicit Learners (Fig. 2e). The starting point offset was 6.4% (S.E. 0.8%) for equal prior trials and 3.1% (S.E. 0.8%) for positive prior trials. The drift rate offset was positive for both prior conditions, but in contrast with the starting point, it was much higher for the positive prior condition (6.6%, S.E. 0.5%) than for the equal prior condition (2.2%, S.E. 0.5%). Thus, both Implicit and Partially Explicit Learners expressed a decision bias by changing the starting point; however, only the Partially Explicit Learners developed a sensitivity to the priors associated with stimulus colors, and this sensitivity appears to be mediated primarily by changes in the drift rate offset.

The Explicit Learners also showed an overall bias, with a feature-specific sensitivity to the priors that was only numerically greater than that of the Partially Explicit Learners (Fig. 2b,c). The starting point offsets in the Explicit Learners were similar for both priors (Equal Prior: 3.2%, S.E. 0.8%; Positive Prior: 2.1%, S.E. 0.8%; Fig. 2f). Like the Partially Explicit Learners, the Explicit Learners adjusted their drift rate offset asymmetrically to express the bias (Equal Prior: 2.0%, S.E. 0.6%; Positive Prior: 6.9%, S.E. 0.6%; Fig. 2f). Hence, as in the Partially Explicit group, the feature-specific biases in the Explicit group were driven primarily by the drift rate offset.

To isolate the individual effects of the starting point offset and the drift rate offset, we simulated the DDM using the optimized parameters with minor modifications. To identify the contribution of the starting point offset, we set the drift rate offset to zero and measured the biases at 0% coherence for both prior conditions. Similarly, to identify the contribution of the drift rate offset, we simulated the model with the optimized parameters but set the starting point offset to zero. With the full model, the biases in the Implicit, Partially Explicit, and Explicit groups were [Equal Prior, Positive Prior]: [0.531, 0.555], [0.567, 0.602], and [0.585, 0.622], respectively. With only the starting point contribution, the simulated biases were [0.555, 0.547], [0.543, 0.531], and [0.567, 0.558] for the Implicit, Partially Explicit, and Explicit groups, respectively. With only the drift rate offset retained, the biases for the Implicit, Partially Explicit, and Explicit groups were [0.484, 0.524], [0.534, 0.586], and [0.567, 0.610]. This analysis shows that the starting point contributes to an increase in the overall choice bias, while the drift rate offsets contribute to the color specificity in all three groups, including the Implicit group. However, the drift rate offsets in the Implicit group were smaller than the starting point offsets, resulting in the absence of a significant color-specific bias.
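The decomposition just described can be reproduced in miniature with a Monte Carlo sketch; the parameter values below are hypothetical stand-ins, not the fitted estimates from the paper.

```python
# Sketch of the decomposition analysis: measure the 0%-coherence choice bias
# with both mechanisms active, then with each one set to zero in turn.
# Parameter values are hypothetical; the paper used the optimized DDM fits.
import numpy as np

rng = np.random.default_rng(0)

def bias_at_zero_coherence(start, drift_offset, n=2000, bound=1.0,
                           noise_sd=1.0, dt=0.001, max_t=3.0):
    """Proportion of positive choices over n simulated 0%-coherence trials."""
    positive = 0
    for _ in range(n):
        x, t = start, 0.0
        while abs(x) < bound and t < max_t:
            x += drift_offset * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        positive += x >= 0
    return positive / n

start, offset = 0.05, 0.1  # hypothetical values for one prior condition
print("full model:         ", bias_at_zero_coherence(start, offset))
print("starting point only:", bias_at_zero_coherence(start, 0.0))
print("drift offset only:  ", bias_at_zero_coherence(0.0, offset))
```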

In Experiment 1, participants were able to learn a Bayesian prior implicitly based on different base-rates of orientation and could apply this prior in a perceptual decision-making task. Participants were also able to apply priors differentially depending on stimulus color even if they were unaware that the colors were associated with different base rates. Implicit knowledge of the general prior, that one of the orientations occurred more frequently, was captured by a start point offset in the DDM. Explicit knowledge of the color-specific priors was captured mainly by changes in drift rate offset in the DDM. Participants who gained some knowledge of the frequency of the orientations across the task but did not become aware of the color-specific priors also showed differences in drift rate offset for the two conditions.

Experiment 2—learning feature-specific priors in the absence of a general prior

The design of Experiment 1, with an equal base-rate prior for one stimulus color (50% for each orientation) and a positive prior for the other (75% for one orientation), led to a general prior of 62.5% toward the positive orientation across all stimuli. Participants in all groups generalized across the stimulus colors to some extent, with responses biased toward the positive prior. It was therefore difficult to tease apart learning of feature-specific and general prior knowledge in the results. To examine learning of feature-specific priors in the absence of a task-general prior, we conducted a second experiment in which the base-rate orientations of the two stimulus colors were biased in opposite directions. The Glass patterns of one color had a positive orientation on 75% of trials and a negative orientation on 25% of trials, whereas the Glass patterns of the other color had a positive orientation on 25% of trials and a negative orientation on 75% of trials (Fig. 1c). This arrangement ensured that the overall positive and negative orientation probabilities were equal.

We analyzed the data from a different group of 41 participants in Experiment 2, all with informed consent and using procedures approved by the UCLA IRB. Twenty-one participants were assigned to the Implicit Learner group, as they were not told about the differences in base rates; none of them reported noticing base-rate differences on the questionnaire given after the session. Twenty participants were assigned to the Explicit group and were informed of the different base rates for the two colors. Participants accurately classified 81.03% and 83.40% of non-zero coherence stimuli in the Implicit and Explicit groups, respectively, with no significant difference between the groups (t(39) = −1.48, p = 0.146). Unlike in Experiment 1, the Explicit group performed numerically better than the Implicit group, suggesting that the lower performance observed in the Explicit group in Experiment 1 was not reliable. Figure 3a shows that even though the Implicit Learners were unaware of the priors, they showed a feature-specific bias. The mean bias for the positive prior trials was 0.540 and for the negative prior trials was 0.423, and this difference in biases was statistically significant (Fig. 3a: t = 2.68, p < 0.001). As in Experiment 1, we found no evidence that correct choices on the previous trial influenced choices at zero coherence in either group (t's < 0.42, p's > 0.68). Using the DDM, we found that both the starting point and the drift rate offsets varied with the task feature, i.e., both were negative for the negative prior condition and non-negative for the positive prior condition. The change in starting point was −1.6% (S.E. 0.8%; Fig. 3c) for the negative priors and 0% (S.E. 0.8%; Fig. 3c) for the positive priors. The drift rate offset, like the starting point, changed such that the negative prior had a negative drift rate offset and the positive prior a positive one (Negative Prior: −3.7%, S.E. 0.6%; Positive Prior: 4.7%, S.E. 0.6%; Fig. 3c). These results are consistent with Experiment 1 in that an implicitly learned feature-selective prior biases decision-making through both the starting point and the drift rate offset, with a dominant effect from the drift rate offset.

Figure 3

Participants learn and use stimulus-specific orientation priors that bias perceptual decisions. (a) Proportion of choices plotted against Glass pattern coherence for 21 participants. Filled orange circles and error bars show the mean and one SE for each coherence across all participants who reported no difference in orientation frequency (Implicit Learners) for the unequal prior condition in which the more frequent orientation was negative, referred to as the Negative Prior condition (75% negative and 25% positive). The orange lines show the best-fit logistic function (see "Methods" section). The khaki filled circles and lines show the same for the unequal prior condition in which the more frequent orientation was positive, referred to as the Positive Prior condition (25% negative and 75% positive). The inset shows the bias parameter for negative (orange) and positive (khaki) priors, represented as the deviation of the proportion of positive choices at 0% coherence from the unbiased value (0.5). (b) Same as in (a) for the participants who reported a difference in orientation frequency (Explicit Learners; n = 20). (c) Parameter estimates from the DDM fits for the negative (orange) and positive (khaki) prior conditions for all Implicit Learners. Diagonal hatched bars show the mean and 95% CI of the parameter estimates for the starting point offset, and horizontal hatched bars show the mean and 95% CI of the parameter estimates for the drift-rate offset (in equivalent % coherence). (d) Same as in (c) for the Explicit Learners.

Participants in the Explicit group showed robust expression of feature-specific biases. The mean bias measured on 0% coherence trials was 0.662 for the positive prior trials and 0.292 for the negative prior trials, a significant difference (Fig. 3b; t(19) = 7.13, p < 0.001), and the difference in bias between the two colors was significantly larger than that observed in the Implicit Learners (t(39) = 3.782, p < 0.001). The modeling results showed that the Explicit group used drift rate offsets to express stimulus feature-prior associations, and these changes were much larger than those obtained in the Implicit Learners.

In the Explicit group, the starting point offset was actually in the direction opposite to the priors. The starting point estimates for the Explicit group were: Negative Prior: 6.9%, S.E. 0.8%; Positive Prior: 0.3%, S.E. 0.8% (Fig. 3d, diagonal stripes). Like the Implicit Learners, the Explicit group showed feature-specific changes in the drift rate offset, and these changes were much more prominent: Negative Prior: −17.9%, S.E. 0.6%; Positive Prior: 12.7%, S.E. 0.5% (Fig. 3d, horizontal stripes). This pattern suggests that the change in starting point for the negative prior condition could be a compensatory mechanism for the large change in the drift rate offset, as reported previously 25. These results indicate that, according to the DDM, people implement explicit priors through enhancement of the drift rate toward the corresponding decision boundary.

In Experiment 2, we again measured the individual contributions of the starting point and the drift rate offset, now with stimulus-specific priors and no overall prior. With the full model, the biases in the Implicit and Explicit groups were [Negative Prior, Positive Prior]: [0.448, 0.550] and [0.318, 0.646], respectively. With only the starting point contribution, the simulated biases were [0.482, 0.507] and [0.500, 0.516] for the Implicit and Explicit groups, respectively. With only the drift rate offset retained, the biases for the Implicit and Explicit groups were [0.457, 0.550] and [0.287, 0.648]. As in Experiment 1, the drift rate offset contributes to the stimulus-specific biases. The starting point had a smaller contribution, which appeared compensatory for the large drift rate offsets in the Explicit group.

Based on the results of the two experiments, implicit learning of base-rate information can occur in both a general and a feature-specific manner, and this knowledge appears to influence decision-making through more efficient accumulation of sensory evidence consistent with the priors. When participants are aware of the base-rate differences, the increases in drift rate offset are greater. Starting point offsets can also contribute to stimulus specificity, either by augmenting the smaller biases produced by the drift rate offset, as in the Implicit Learners, or by counteracting the large changes in drift rate offset seen in the Explicit group. Thus, the starting point may be adjusted both to bias decision-making implicitly and, possibly, to compensate for the large drift rate increases that implement explicit knowledge of priors.

We also explored whether a single mechanism of bias implementation could explain the data better. We fitted two different models, one with only a starting point offset and another with only a drift rate offset, and calculated their BIC scores, which are shown in Table S3 of the Supplementary Data. The BIC score for the model with both free parameters was lower for four of the five groups, suggesting that the model with both the starting point offset and the drift-rate offset as free parameters explains the data better even after accounting for the increase in the number of parameters. The fifth group, the Explicit group in Experiment 2, showed a lower BIC score for the model with only a drift-rate offset. As discussed above, the Explicit group in Experiment 2 showed the greatest separation of stimulus-specific biases, driven almost exclusively by modulation of the drift-rate offset. Importantly, for implicit learners, behavior was captured most effectively by the model including both the drift-rate and starting point parameters.

We assessed the ability of participants to learn base-rate priors implicitly and apply them in a perceptual decision-making task. In both experiments, participants acquired a bias in an orientation judgment task that was apparent when they judged stimuli with zero coherence. Participants who did not report awareness of the priors nevertheless exhibited evidence of bias in their decisions about the orientations of the stimuli. In Experiment 1, two different kinds of priors were present: an overall prior based on the frequency of the different orientations across all stimuli, and feature-specific priors tied to stimulus color. We found that both types of priors could be learned implicitly. In the group that reported no explicit knowledge of the different priors of the two orientations (Implicit Learners), there was still a significant bias toward the more frequent orientation across all stimuli, which was also seen in participants who were informed of this prior or became aware of it during testing. Implicit learning of feature-specific priors was also seen in those participants who became aware of only the general difference in frequency between the orientations (Partially Explicit Learners). In this group, explicit knowledge of the structure of the task lagged behind implicit knowledge, in that they did not become aware of the differences in priors for the two colors despite applying the differential priors in their choice behavior. In Experiment 2, the two orientations were equally likely overall and the priors for the two colors were more distinct than in Experiment 1. Here, we also found implicit learning of feature-specific priors.

Our results are consistent with the recent study by Rungratsameetaweemana et al. 33, which measured the ability of patients with amnesia following hippocampal damage to acquire a prior in a motion direction judgment task. Sensitivity to prior information was similar in amnesic patients and control participants, supporting the idea that explicit learning of the prior does not benefit performance. We assessed awareness of priors in participants after the session to determine the level of explicit knowledge gained during the task. We also demonstrated that different priors specific to different sets of stimuli, defined by color, could be learned implicitly. These data suggest that implicit knowledge of priors can produce changes in perceptual processing itself, rather than only in overall response bias.

Using the DDM, we found that explicit and implicit knowledge of the priors appeared to influence decisions differently. Substantial drift rate modulation was associated with knowledge of the color-specific priors and was much more robust in groups with explicit knowledge than in groups with only implicit knowledge. The influence of the starting point on decision-making was confined to implicitly learned knowledge of the priors. The Implicit group in Experiment 1, who learned to apply a bias toward the more frequent orientation, showed a significant starting point offset in the positive direction for both colors. In Experiment 2, where the priors were equal and opposite, the bias in decision-making arose primarily from significant changes in the drift rate offset for stimuli of the different colors. The difference in the drift rate offsets for the two stimulus colors was relatively modest for the Implicit group. In contrast, in the group that was aware of the different priors, there were substantial shifts in drift rate offset toward the positive and negative directions for the two colors of stimuli. In the Implicit Learners, the starting point supported further increases in bias by shifting in the direction of the prior for one of the colors. In contrast, in the Explicit group the large change in drift rate offset was counteracted in part by a starting point offset in the opposite direction for one of the priors. These results suggest that explicit knowledge of the priors leads to faster accumulation of evidence in the direction of the prior across coherences. Implicit knowledge of stimulus-specific priors also appears to affect the drift rate offset. Starting point offsets appeared to reflect implicit knowledge of priors, and may even compensate for the large drift rate offset changes implemented by explicit knowledge of priors.

The present work shows that people can implicitly learn base-rate differences and apply this knowledge in a perceptual decision-making task. While classic work in decision-making shows that participants often ignore explicit base rates 34, base rates may have a greater impact when they are learned through experience. An implicit learning mechanism may be engaged by experiential learning, leading to knowledge that is more effective in influencing judgments than explicit knowledge 35. We argue that implicit learning of general base-rate information shifts the starting point of evidence accumulation toward the more frequent outcome. Consistent with this idea, local fluctuations in base rates influence decision-making through starting point modulation, even when these local fluctuations do not reflect the explicit priors for the entire task 36. Thus, the starting point, perhaps reflecting biasing activity in neural representations of the alternatives, may be modulated fairly automatically by reinforcement history.

We also observed changes in the drift rate offset associated with implementing feature-specific biases in decision-making. When participants knew that there were different priors depending on stimulus color, there was significant modulation of the drift rate, indicating that awareness of feature-specific priors led to greater efficiency in accumulating perceptual evidence in support of the priors. Drift rate changes were also observed in both experiments despite a lack of awareness that the different stimulus colors were associated with different priors. It thus appears that implicit knowledge of stimulus-specific priors enhances the perceptual processes contributing to decision-making.

Modulation of the starting point and of the drift rate are likely supported by different brain regions in the service of decision-making. Starting point offsets may reflect increased activity in neural representations corresponding to the more frequent alternatives. In the seminal work of Basso and Wurtz 37 and Platt and Glimcher 38, monkeys performing a two-alternative forced-choice task showed increased pre-trial activity in superior colliculus (SC) and lateral intraparietal area (LIP) representations of the alternative that was cued to be more likely. In Experiment 1, a similar increase in activity in representations corresponding to the more frequent alternative may have occurred, and this increase may have arisen very early in the trial, before the integration of evidence about orientation. In the present study, subjects were not cued with base-rate priors but were able to acquire them implicitly through experience; thus, it appears that modulation of the starting point does not require awareness of priors. Perugini et al. 25 found that patients with Parkinson's disease are impaired at adjusting the starting point based on priors, suggesting that the basal ganglia might also be involved in changing the starting point and in adjusting the decision threshold generally 39. Future work may clarify whether modulation of the starting point in decision-making relies on similar neural mechanisms for explicitly cued and implicitly learned priors.

The adjustment of the drift rate offset in the present study may reflect enhanced processing of elements of the display consistent with the prior. Adjustment of drift rate has been shown to occur in decision-making paradigms in which the signal-to-noise ratio varies across trials 40; as the variability of difficulty across trials increases, fitting the DDM to data relies more heavily on adjustments to the drift rate 41. In the present study, where trials varied substantially in difficulty, it is therefore not surprising that drift rate offsets produced decision biases. Although present in the Implicit group in Experiment 2, this effect was much more striking in subjects who gained partial knowledge of the base rates or were informed of the different priors at the onset of the task. One possibility is that drift rate offsets reflect top-down attentional effects that enhance gain in perceptual or decision-making regions.

In sum, our results demonstrate that knowledge about base-rate priors can be acquired implicitly and that this knowledge can be specific to stimuli with different features within a stimulus set. Within the DDM framework, the implicit acquisition of an overall bias was implemented through a change in starting point offset. However, feature-specific biases were primarily implemented through changing the rate of evidence accumulation. Future studies can help to identify the brain regions involved in implementation of general and feature-specific biases, and whether implicitly acquired biases are supported by different neural mechanisms than explicit biases.

Participants

We recruited 140 people for this study. Experiment 1 included 89 participants (23 male and 66 female) and Experiment 2 included 51 participants (13 male and 38 female). All participants were right-handed UCLA students aged 18 to 30 years. None of the participants had active medical, neurological, or psychiatric diagnoses, and none were taking chronic medication that could affect sensory processing, movement, or cognition. All participants had normal or corrected-to-normal vision and were screened for colorblindness using an online version of the Ishihara color blindness test. All participants gave written informed consent as per the guidelines of the Institutional Review Board of the University of California and received course credit for their participation. In each experiment, subjects were assigned to either an Explicit group, in which they were informed of the different orientation probabilities for red and green stimuli, or an Implicit group, in which they were not informed. All subjects were given instructions by the experimenter and, to familiarize themselves with the task, completed at least 50 practice trials with 100% coherent stimuli, reaching 85–90% accuracy, before beginning.

Visual stimuli

Dynamic translation Glass patterns, developed previously 25, were used for this study. This type of stimulus uses two identical dot patterns, one of which is translated in position with respect to the other. When the patterns are superimposed, multiple correlated dot pairs are formed, which results in a strong perception of oriented structure 27. The strength of this perceived orientation can be varied by changing the number of correlated dot pairs. For both experiments, we used 0%, 13%, 35%, and 100% coherence, referring to the percentage of correlated dot pairs. In the 100% coherence condition, all dot pairs are correlated and the orientation of the stimulus (right or left) is readily apparent. In contrast, the 0% coherence condition contains no correlated dot pairs, so subjects have no perceptual information on which to base their orientation decision. The stimuli were made dynamic by generating 30 frames of such images and presenting them at 85 frames/s.
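As a rough illustration of the stimulus (an interpretation of the description above, not the authors' stimulus code), a translation Glass pattern can be generated by giving a fraction of the dots a partner displaced along the signal orientation:

```python
# Sketch of a translation Glass pattern: a fraction `coherence` of anchor dots
# get a partner displaced along the signal orientation; the remaining partners
# are placed at random. Dot count and displacement are assumed values.
import numpy as np

def glass_pattern(n_dots=200, coherence=0.35, angle_deg=45.0,
                  shift=0.02, rng=None):
    """Return 2*n_dots (x, y) positions in the unit square."""
    rng = np.random.default_rng() if rng is None else rng
    anchors = rng.uniform(0.0, 1.0, size=(n_dots, 2))
    theta = np.deg2rad(angle_deg)
    offset = shift * np.array([np.cos(theta), np.sin(theta)])
    n_signal = int(round(coherence * n_dots))
    partners = np.empty_like(anchors)
    partners[:n_signal] = anchors[:n_signal] + offset                        # correlated pairs
    partners[n_signal:] = rng.uniform(0.0, 1.0, size=(n_dots - n_signal, 2))  # noise dots
    return np.vstack([anchors, partners])

dots = glass_pattern(coherence=1.0)  # 100% coherence: orientation readily apparent
```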

Experimental paradigm

Each trial began with the appearance of a centrally located white fixation box, and subjects were asked to keep looking at it. After a fixation period of 1000 ms ± 200 ms, two target boxes appeared at the upper-left and upper-right positions of the screen. Immediately afterwards, a Glass pattern with one of the coherences listed above was shown. On each trial, the Glass pattern was displayed in either red (luminance 0.6 cd/m²) or green (luminance 0.54 cd/m²), randomized with equal probability. Subjects gathered information about the orientation of the stimulus and recorded their response as quickly as possible by pressing a key on the keyboard: the 'O' key if they judged the stimulus to be oriented to the left, and the 'P' key if they judged it to be oriented to the right. Subjects were given 3 s after the onset of the Glass pattern to decide the orientation; if they did not respond within 3 s, the trial was aborted. If they chose the correct orientation, the computer provided feedback as a high tone. For both experiments, each subject performed at least 1200 trials.

For Experiment 1, one color of Glass pattern was unbiased with regard to orientation (left and right were equally likely), while for the other color one orientation occurred on 75% of trials and the other on 25%. For Experiment 2, both the red and the green patterns were biased, with one orientation occurring 75% of the time and the other 25% of the time, and the opposite probabilities for the other color; thus, the two orientations (left and right) occurred equally often across the stimulus set. The assignment of colors to conditions was counterbalanced across subjects for both experiments and later converted to positive and negative orientations. In this way, we were able to examine whether both general biases (one outcome is more likely to occur in the task) and feature-specific biases (red and green stimuli have different base rates) can be learned in the absence of awareness.

Following completion of the task, participants' awareness of the priors was assessed using a follow-up questionnaire, which asked whether they thought each colored stimulus (red and green) was equally distributed between the two orientations. Based on their responses, participants were divided into two categories: (1) Implicit Learners and (2) Partially Explicit Learners. Participants who responded that both orientations occurred equally for each color, i.e., that there was no bias, were categorized as Implicit Learners. Participants who correctly identified the prior orientation of only one stimulus category were categorized as Partially Explicit Learners. In the Explicit group, participants were informed about the feature-prior associations at the start of the experiment.

Data analysis

Reaction time (RT) was defined as the duration between the onset of the Glass pattern stimulus and the onset of the response (keypress). All trials with RTs less than 200 ms were excluded from the final analysis as possible anticipatory responses. Additionally, all participants whose performance at 100% coherence was below 90% accuracy were removed from the final analysis (n = 32 across all five groups). Hence, data were analyzed for 67 participants in Experiment 1 and 41 participants in Experiment 2. For both experiments, reaction times for all remaining participants were within two standard deviations at all coherences, so the data were pooled for subsequent analyses.

Next, a psychometric function of accuracy with respect to coherence was calculated and fitted with a logistic function of the following form:
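One standard lapse-rate logistic consistent with the parameter definitions below, assumed here following Wichmann & Hill 42:

$$p\left(P\right)=\lambda_{1}+\left(1-\lambda_{1}-\lambda_{2}\right)\frac{1}{1+e^{-\left(\alpha +\beta C\right)}}$$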

where \(p\left(P\right)\) is the proportion of choices toward the more frequent orientation (i.e., for the 75–50 condition with a positive prior, \(p\left(P\right)\) is the proportion of positive choices); \(C\) is the dot-pair coherence; and \(\alpha\) and \(\beta\) are free model parameters determined by the maximum likelihood method, representing response bias and sensitivity to the stimulus, respectively. \(\lambda_1\) and \(\lambda_2\) represent the lapse rates for the two sides, which equal the difference between perfect performance and actual performance and capture transient lapses in attention during the experiment 42. Because the 0% coherence stimulus is completely ambiguous, subjects' biases can be assessed at that coherence. The use of priors for each stimulus color was calculated as the proportion of choices toward the more frequent orientation at the 0% coherence level per subject. Statistical analysis was carried out using pairwise t-tests.
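A minimal sketch of fitting this function by maximum likelihood (the parameterization follows the assumed equation above, and the choice counts are hypothetical):

```python
# Fit the lapse-rate logistic to choice counts by maximum likelihood.
# Parameterization, starting values, and data are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def p_positive(C, alpha, beta, lam1, lam2):
    """Probability of a positive choice at signed coherence C."""
    return lam1 + (1.0 - lam1 - lam2) / (1.0 + np.exp(-(alpha + beta * C)))

def neg_log_likelihood(params, C, n_pos, n_tot):
    alpha, beta, lam1, lam2 = params
    p = np.clip(p_positive(C, alpha, beta, lam1, lam2), 1e-9, 1 - 1e-9)
    return -np.sum(n_pos * np.log(p) + (n_tot - n_pos) * np.log(1.0 - p))

C = np.array([-1.0, -0.35, -0.13, 0.0, 0.13, 0.35, 1.0])  # signed coherences
n_pos = np.array([3, 20, 55, 85, 110, 140, 148])           # hypothetical counts
n_tot = np.full_like(n_pos, 150)                           # trials per coherence

fit = minimize(neg_log_likelihood, x0=[0.1, 5.0, 0.02, 0.02],
               args=(C, n_pos, n_tot),
               bounds=[(-5, 5), (0, 50), (0, 0.2), (0, 0.2)])
alpha, beta, lam1, lam2 = fit.x  # bias, sensitivity, and the two lapse rates
```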

Drift diffusion model

One well-known example of the class of evidence accumulation models is the drift diffusion model (DDM). This model accounts for both the reaction time and the choice on every trial. We used a variant of this model with a collapsing boundary, i.e., as time passes, participants require less accumulated evidence to make a decision, which accounts for urgency. We fit the DDM with 11 parameters to data pooled across all participants in a given condition of the experiment. The 11 parameters were: the non-decision time, the starting point of accumulation, the proportionality factor between coherence and drift rate, the diffusion coefficient, a scaling parameter for the collapsing bound, the delay of the collapsing bound, the drift rate offset, the proportion of uninformed positive choices, the proportion of uninformed negative choices, and the mean and standard deviation of the reaction time for uninformed choices. Biases can be implemented in a DDM-like model either by changing the starting point of evidence accumulation (Fig. 1e) or by changing the rate of evidence accumulation across all coherences, i.e., the drift rate offset, which can also be interpreted as the apparent coherence of a 0% coherence stimulus (Fig. 1f). Each of these mechanisms predicts slightly different behavior: a change in starting point predicts an asymmetric change in RT between positive and negative choices, whereas a drift rate offset predicts a similar change in RT for positive and negative choices, such that zero-coherence trials are no longer the slowest. To assess these two mechanisms, we fitted two DDMs, one for each stimulus color, such that five of the model parameters (non-decision time, proportionality factor, diffusion coefficient, and the scaling and delay of the collapsing bound) were shared by both models. The other two parameters, the starting point and the drift rate offset, were allowed to differ between the models, allowing us to assess which parameter change, or combination of changes, best explained the behavioral data. The DDM was fitted only to the second half of the session for each subject, the rationale being that subjects are still learning the priors during the first half of the session, so decision behavior is more stable during the second half.

Under drift diffusion models, the decision process starts at the starting point and accumulates noisy evidence, drifting until it reaches one of the decision bounds. The decision bounds are defined as:
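One simple parameterization consistent with the description below (bounds at ± 1 at t = 0 that decay toward zero after a delay d, at a rate set by s), assumed here for concreteness:

$$B_{upper}\left(t\right)=\exp\left(-s\cdot \max\left(0,\,t-d\right)\right)$$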

where s is a decay parameter of the boundary, d is a delay parameter for the start of the decay, and t is time. \({B}_{lower}\) is defined as \(-{B}_{upper}\). Hence, at t = 0 the bounds are at ± 1 and decay toward zero as time progresses. The drift rate (rate of accumulation) r is defined as:
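Given the parameter list above, presumably a linear form, with k the proportionality factor between coherence and drift rate and o the drift rate offset:

$$r=kC+o$$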

where C is the signed coherence of the stimulus, such that positive C means coherence toward the positive orientation, and o is the offset in the drift rate. The reaction time is then obtained as:
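From the definitions below, this is presumably:

$$RT={t}_{decision}+{t}_{0}$$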

where \({t}_{decision}\) is the time required to cross a decision bound, and \({t}_{0}\) is the non-decision time accounting for sensory and motor processing delays.

The traditional DDM does not account for lapses in the data. To account for them, we added four parameters to our model such that on a random proportion of trials subjects provide an uninformed response with an RT drawn from a Gaussian distribution whose mean and standard deviation are estimated from the data as model parameters. The four additional parameters are therefore the proportion of uninformed positive choices, the proportion of uninformed negative choices, and the mean and standard deviation of the reaction time for uninformed choices. The proportions of uninformed positive and negative choices were fitted separately for each color.
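One plausible rendering of this mixture, with assumed symbol names \(\gamma_{+}\) and \(\gamma_{-}\) for the lapse proportions and \(\mu\) and \(\sigma\) for the Gaussian RT parameters:

$$p\left(c,RT\right)=\left(1-\gamma_{+}-\gamma_{-}\right)\,p_{DDM}\left(c,RT\right)+\gamma_{c}\,\mathcal{N}\left(RT;\mu ,\sigma^{2}\right),\quad c\in \left\{+,-\right\}$$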

All behavioral data analyses and modeling were performed using MATLAB. The DDM was implemented using the Stochastic Integration Modeling Toolbox for MATLAB developed by J. Ditterich (https://www.peractionlab.org/software).

Parameter optimization was carried out using maximum likelihood estimation, accounting for both the reaction time and the choice on each trial. We divided the data into 16 subsets based on coherence (4×), direction (2×), and choice (2×). The likelihoods of choice and RT (using the QMLE method on the normalized histogram of the RT distribution) were calculated independently for each subset, and the log likelihoods were summed across all subsets.

The reaction time likelihood was calculated using a modified Quantile Maximum Likelihood Estimation (QMLE) 43. To obtain better resolution of the estimated RT distribution, we interpolated the RT distributions from the model and the data to 1 ms resolution. We then computed the log likelihood from the model at each millisecond and took its weighted sum, with weights given by the RT distribution of the data.
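A sketch of this step as we read the description (an interpretation, not the authors' implementation; the bin width and stand-in histograms are assumptions):

```python
# Interpolate model and data RT histograms to 1 ms resolution, then take the
# log of the (normalized) model density weighted by the data density.
import numpy as np

def rt_log_likelihood(model_density, data_counts, t_coarse, t_fine):
    """Weighted sum of the model log-density, weights from the data RT histogram."""
    model_fine = np.interp(t_fine, t_coarse, model_density)
    data_fine = np.interp(t_fine, t_coarse, data_counts)
    model_fine = np.clip(model_fine / model_fine.sum(), 1e-12, None)  # normalize
    return np.sum(data_fine * np.log(model_fine))

t_coarse = np.arange(0.225, 3.0, 0.05)                      # hypothetical 50 ms bin centers
t_fine = np.arange(0.225, 2.95, 0.001)                      # 1 ms resolution
model_density = np.exp(-(t_coarse - 0.80) ** 2 / 0.10)      # stand-in model RT histogram
data_counts = 40 * np.exp(-(t_coarse - 0.85) ** 2 / 0.12)   # stand-in data histogram
print(rt_log_likelihood(model_density, data_counts, t_coarse, t_fine))
```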

Once the RT likelihood was calculated for all 16 subsets of conditions, we computed a weighted sum based on the number of trials in each subset. The choice likelihood was calculated using the binomial probability equation 44, as this was a 2AFC task:
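This is the standard binomial form: for a subset with \(n\) trials, \(x\) positive choices, and model-predicted positive-choice probability \(p\),

$$\log L_{choice}=\log\binom{n}{x}+x\log p+\left(n-x\right)\log\left(1-p\right)$$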

As with the RT likelihoods, the choice log likelihoods were summed across all 16 subsets, and finally the RT and choice log likelihoods were added together. This total log likelihood was then optimized using MATLAB functions.

We used a combination of a multidimensional simplex approach ("fminsearch" in MATLAB) and a pattern search algorithm ("patternsearch" in the MATLAB Global Optimization Toolbox) to find the global optimum. Standard errors of the estimated parameters were obtained using a one-dimensional local Gaussian approximation of the likelihood function (L) around the optimal value \(p_{opt}\) of each parameter \(p\):
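A standard local Gaussian approximation of this kind, assumed here, is:

$$L\left(p\right)\approx L\left(p_{opt}\right)\exp\left(-\frac{\left(p-p_{opt}\right)^{2}}{2\sigma_{p}^{2}}\right)$$

where the fitted width \(\sigma_{p}\) is taken as the standard error of the parameter \(p\).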

In addition to the model above, we also explored whether a single mechanism of bias implementation could explain the data better after controlling for the number of parameters. We fitted two additional models to the data, one allowing a change only in the starting point and the other allowing a change only in the drift-rate offset. To quantify which model performs best, we calculated the BIC scores of each of these three models in all five groups; the scores for all experimental conditions are shown in Table S3 of the Supplementary Data. The BIC score indicates which model performs better after accounting for the number of parameters based on the likelihood estimation, with lower scores indicating better-performing models.
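For reference, the BIC for a model with \(k\) free parameters fitted to \(n\) trials with maximized likelihood \(\hat{L}\) is:

$$BIC=k\ln n-2\ln \hat{L}$$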

Ratcliff, R., Hockley, W. & McKoon, G. Components of activation: Repetition and priming effects in lexical decision and recognition. J. Exp. Psychol. Gen. 114 , 435–450 (1985).


Berry, D. C. & Broadbent, D. E. The combination of explicit and implicit learning processes in task control. Psychol. Res. 49 , 7–15 (1987).


Berry, D. C. & Broadbent, D. E. Interactive tasks and the implicit-explicit distinction. Br. J. Psychol. 79 , 251–272 (1988).

Schacter, D. L. Implicit memory: History and current status. J. Exp. Psychol. Learn. Mem. Cogn. 13 , 501–518 (1987).

Reber, A. S. Implicit learning and tacit knowledge. J. Exp. Psychol. Gen. 118 , 219–235 (1989).

Lee, Y. S. Effects of learning contexts on implicit and explicit learning. Mem. Cognit. 23 , 723–734 (1995).

Stefan, K. et al. Formation of a motor memory by action observation. J. Neurosci. 25 , 9339–9346 (2005).


Saffran, J. R., Aslin, R. N. & Newport, E. L. Statistical learning by 8-month-old infants. Science 274 , 1926–1928 (1996).


Goujon, A., Didierjean, A. & Thorpe, S. Investigating implicit statistical learning mechanisms through contextual cueing. Trends Cogn. Sci. 19 , 524–533 (2015).


Reber, A. S. & Lewis, S. Implicit learning: An analysis of the form and structure of a body of tacit knowledge. Cognition 5 , 333–361 (1977).

Mathews, R. C. et al. Role of implicit and explicit processes in learning from examples: A synergistic effect. J. Exp. Psychol. Learn. Mem. Cogn. 15 , 1083–1100 (1989).

Mathews, R. C. Abstractness of implicit grammar knowledge: Comments on Perruchet and Pacteau’s analysis of synthetic grammar learning. J. Exp. Psychol. Gen. 119 , 412–416 (1990).

Nissen, M. J. & Bullemer, P. Attention requirements of learning evidence from performance measures. Cogn. Psychol. 19 , 1–32 (1987).

Reber, P. J. & Squire, L. R. Parallel brain systems for learning with and without awareness. Learn. Mem. 1 , 217–229 (1994).

Koehler, J. J. The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behav. Brain Sci. 19 , 1–53 (1996).

White, C. N. & Poldrack, R. A. Decomposing bias in different types of simple decisions. J. Exp. Psychol. Learn. Mem. Cogn. 40 , 385–398 (2014).

Leite, F. P. & Ratcliff, R. What cognitive processes drive response biases? A diffusion model analysis. Judgm. Decis. Mak. 6 , 651–687 (2011).

Ziori, E. & Dienes, Z. How does prior knowledge affect implicit and explicit concept learning?. Q. J. Exp. Psychol. 61 , 601–624 (2008).

Mulder, M. J., Wagenmakers, E. J., Ratcliff, R., Boekel, W. & Forstmann, B. U. Bias in the brain: A diffusion model analysis of prior probability and potential payoff. J. Neurosci. 32 , 2335–2343 (2012).

Huang, Y., Friesen, A. L., Hanks, T. D., Shadlen, M. N. & Rao, R. P. N. How prior probability influences decision making: A unifying probabilistic model. Adv. Neural Inf. Process. Syst. 2 , 1268–1276 (2012).


Diederich, A. Bound-change, drift-rate-change, or two-stage-processing hypothesis. Percept. Psychophys. 68(2), 194–207 (2006).

Gold, J. I. & Shadlen, M. N. The neural basis of decision making. Annu. Rev. Neurosci. 30 , 535–574 (2007).

Urai, A. E., De Gee, J. W., Tsetsos, K. & Donner, T. H. Choice history biases subsequent evidence accumulation. Elife 8 , e46331 (2019).


Hanks, T. D., Mazurek, M. E., Kiani, R., Hopp, E. & Shadlen, M. N. Elapsed decision time affects the weighting of prior probability in a perceptual decision task. J. Neurosci. 31 , 6339–6352 (2011).

Perugini, A., Ditterich, J. & Basso, M. A. Patients with Parkinson’s disease show impaired use of priors in conditions of sensory uncertainty. Curr. Biol. 26 , 1902–1910 (2016).

Kiani, R., Hanks, T. D. & Shadlen, M. N. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. J. Neurosci. 28 , 3017–3029 (2008).

Glass, L. Moire effect from random dots. Nature 223 , 578–580 (1969).

Nagai, Y., Suzuki, M., Miyazaki, M. & Kitazawa, S. Acquisition of multiple prior distributions in tactile temporal order judgment. Front. Psychol. 3 , 1–7 (2012).

Carpenter, R. H. S. & Williams, M. L. L. Neural computation of log likelihood in control of saccadic eye movements. Nature 377 , 59–62 (1995).

Rao, V., Deangelis, G. C. & Snyder, L. H. Neural correlates of prior expectations of motion in the lateral intraparietal and middle temporal areas. J. Neurosci. 32 (29), 10063–10074 (2012).

Ashby, F. G. A biased random walk model for two choice reaction times. J. Math. Psychol. 27 , 277–297 (1983).


Diederich, A. Dynamic stochastic models for decision making under time constraints. J. Math. Psychol. 41 , 260–274 (1997).


Rungratsameetaweemana, N., Squire, L. R. & Serences, J. T. Preserved capacity for learning statistical regularities and directing selective attention after hippocampal lesions. Proc. Natl. Acad. Sci. U.S.A. 116 , 19705–19710 (2019).

Tversky, A. & Kahneman, D. Availability: A heuristic for judging frequency and probability. Cogn. Psychol. 5 , 207–232 (1973).

Spellman, B. A. The implicit use of base rates in experiential and ecologically valid tasks. Behav. Brain Sci. 19 , 38–38 (1996).

Cho, R. Y. et al. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cogn. Affect. Behav. Neurosci. 2 , 283–299 (2002).

Basso, M. A. & Wurtz, R. H. Modulation of neuronal activity in superior colliculus by changes in target probability. J. Neurosci. 18 , 7519–7534 (1998).

Platt, M. L. & Glimcher, P. W. Neural correlates of decision variables in parietal cortex. Nature 400 , 233–238 (1999).

Herz, D. M., Bogacz, R. & Brown, P. Neuroscience: Impaired decision-making in Parkinson’s disease. Curr. Biol. 26 , R671–R673 (2016).

Dunovan, K. E., Tremel, J. J. & Wheeler, M. E. Prior probability and feature predictability interactively bias perceptual decisions. Neuropsychologia 61 , 210–221 (2014).

Bogacz, R., Brown, E., Moehlis, J., Holmes, P. & Cohen, J. D. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychol. Rev. 113 , 700–765 (2006).

Wichmann, F. A. & Hill, N. J. The psychometric function: I. Fitting, sampling, and goodness of fit. Percept. Psychophys. 63 , 1293–1313 (2001).

Heathcote, A., Brown, S. & Mewhort, D. J. Quantile maximum likelihood estimation of response time distributions. Psychon. Bull. Rev. 9 , 394–401 (2002).

Selvin, S., Clayton, D. & Hills, M. Statistical models in epidemiology. J. Am. Stat. Assoc. https://doi.org/10.2307/2291094 (1995).


Funding was provided by the National Eye Institute (5R01EY013692-16) and the National Science Foundation (1634157).

Author information

Authors and Affiliations

Department of Psychology, University of California – Los Angeles, Los Angeles, CA, USA

V. N. Thakur & B. J. Knowlton

Fuster Laboratory of Cognitive Neuroscience, Departments of Neurobiology and Psychiatry and Biobehavioral Science, Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California – Los Angeles, Los Angeles, CA, USA

M. A. Basso

Center for Neuroscience and Department of Neurobiology, Physiology, & Behavior, University of California – Davis, Davis, CA, USA

J. Ditterich


Contributions

V.T., M.B. and B.K. designed experiments and wrote the manuscript text. V.T. prepared figures and conducted analyses of behavioral data. J.D. and V.T. conducted modeling analyses. All authors reviewed and edited the manuscript.

Corresponding author

Correspondence to B. J. Knowlton .

Ethics declarations

Competing interests.

The authors declare no competing interests.




Cite this article

Thakur, V.N., Basso, M.A., Ditterich, J. et al. Implicit and explicit learning of Bayesian priors differently impacts bias during perceptual decision-making. Sci. Rep. 11, 16932 (2021). https://doi.org/10.1038/s41598-021-95833-7


Received: 16 March 2021

Accepted: 26 July 2021

Published: 20 August 2021

DOI: https://doi.org/10.1038/s41598-021-95833-7





Clinical Epidemiology, pp. 35–52

Definitions of Bias in Clinical Research

  • Geoffrey Warden
  • First Online: 20 April 2021


Part of the book series: Methods in Molecular Biology ((MIMB,volume 2249))

In this chapter, a catalog of the various types of bias that can affect the validity of clinical epidemiologic studies is presented. The biases are classified by stage of research: literature review and publication, design of the study and selection of subjects, execution of the intervention, measurement of exposures and outcomes, data analysis, and interpretation and publication. Definitions are provided for each type of bias listed.




Warden, G. (2021). Definitions of Bias in Clinical Research. In: Parfrey, P.S., Barrett, B.J. (eds) Clinical Epidemiology. Methods in Molecular Biology, vol 2249. Humana, New York, NY. https://doi.org/10.1007/978-1-0716-1138-8_3

COGNITIVE BIASES AND STRATEGIC DECISION PROCESSES: AN INTEGRATIVE PERSPECTIVE


Previous studies have not adequately addressed the role of cognitive biases in strategic decision processes. In this article we suggest that cognitive biases are systematically associated with strategic decision processes. Different decision processes tend to accentuate particular types of cognitive bias. We develop an integrative framework to explore the presence of four basic types of cognitive bias under five different modes of decision making. The cognitive biases include prior hypotheses and focusing on limited targets, exposure to limited alternatives, insensitivity to outcome probabilities, and illusion of manageability. The five modes of strategic decision making are rational, avoidance, logical incrementalist, political and garbage can. We suggest a number of key propositions to facilitate empirical testing of the various contingent relationships implicit in the framework. Lastly, we discuss the implications of this framework for research and managerial practice.

Related Papers

Management for Professionals

This article introduces, through examples, a few types of bias in strategic decisions and the reasons they arise. It proposes a model of strategic decision tendencies based on consideration of internal and external situations. Three important principles that ought to be complied with in strategic decision making are proposed and introduced.


Business: Theory and Practice

Renato Luz Brito Costa

Decision-making is a multidisciplinary and ubiquitous phenomenon in organizations, and it can be observed at the individual, group, and organizational levels. Decision-making plays, however, an increasingly important role for the manager, whose cognitive competence is reflected in his ability to identify potential opportunities, to immediately detect and solve the problems he faces, and to predict and prevent future threats. Nevertheless, to what extent do managers of the most diverse sectors and industries continue to rely on false knowledge when they have better strategies at their disposal? The present article proposes, through the application of bibliographically based instruments, the diagnosis of three prominent biases – overconfidence, optimism, and the anchoring effect – in managers of the Portuguese port sector, as well as seeking to establish a comparative analysis with conclusions already documented for the Brazilian civil construction sector. In addition,...

Advances in Human Resources Management and Organizational Development

Thibault Jacquemin

In today's post-bureaucratic organization, where decision-making is decentralized, most managers are confronted with highly complex situations where time constraints and the availability of information make the decision-making process essential. Studies show that a great number of decisions are not taken after a rational decision-making process but rather rely on instinct, emotion, or quickly processed information. After briefly describing the journey of thought from Rational Choice Theory to the emergence of Behavioral Economics, this chapter will elaborate on the mechanisms that are at play in decision-making in an attempt to understand the root causes of cognitive biases, using the theory of Kahneman's (2011) System 1 and System 2. It will discuss the linkage between the complexity of decision-making and the post-bureaucratic organization.

Long Range Planning

Mumin Dayan

Strategic Management Journal

Gerard P Hodgkinson

Wright and Goodwin (2002) maintain that, in terms of experimental design and ecological validity, Hodgkinson et al. (1999) failed to demonstrate either that the framing bias is likely to be of salience in strategic decision making, or that causal cognitive mapping provides an effective means of limiting the damage accruing from this bias. In reply, we show that there is ample evidence to support both of our original claims. Moreover, using Wright and Goodwin's own data set, we demonstrate that our studies did in fact attain appropriate levels of ecological validity, and that their proposed alternative to causal cognitive mapping, a decision tree approach, is far from ‘simpler.’ Wright and Goodwin's approach not only fails to eliminate the framing bias—it leads to confusion. Copyright © 2002 John Wiley & Sons, Ltd.

Jane McKenzie

Purpose – The purpose of this paper is to challenge an over-reliance on past experience as the cognitive underpinning for strategic decisions. It seeks to argue that, in complex and unknowable conditions, effective leaders use three distinct and complementary thinking capacities, which go beyond those normally learned during their rise to the top. Design/methodology/approach – A conceptual model of thinking capacities is justified through a review of the psychology literature; the face validity of the proposed model is supported through six in-depth interviews with successful CEOs. Findings – A model of non-conventional thinking capacities describes how strategic decision-makers make choices that are better adapted to the conditions of uncertainty, ambiguity and contradiction, which prevail in complex situations. These capacities are complementary to the more conventional approaches generally used in thinking about decisions. Practical implications – The paper aims to stimulate awareness of the limitations of habitual mental responses in the face of difficult strategic decisions. It challenges leaders consciously to extend their abilities beyond conventional expectations to a higher order of thinking that is better suited to multi-stakeholder situations in complex environments. Originality/value – The paper responds to the challenge of McKenna and Martin-Smith to develop new theoretical approaches to complex environments. It extends conventional approaches to decision making by synthesising from the literature some essential thinking capacities, which are well suited to the demands of situations dominated by uncertainty, ambiguity and contradiction.

Jurnal Ekonomi Perusahaan

Bilson Simamora

Extant studies hold that decision quality at the very moment of choice indicates future task accomplishment. However, regarding individual decision-making, the decision's strategic nature has so far received little attention from scientists. For that reason, the author utilizes the strategic decision dimensions of justifiability, confidence, and satisfaction to form a new concept called strategic decisional beliefs. Making self-efficacy, motivation, subjective well-being, loyalty, and switching likelihood the concept's consequences under investigation, the author tests the concept using data from 350 new students chosen judgmentally. As expected, exploratory factor analysis with maximum likelihood extraction offers only one latent variable for the three underlined dimensions. Further investigation with confirmatory factor analysis indicates that all items are internally valid, reliable, and solidly merged into a single construct with a close-fit measurement model. Good-fit structura...

Marketing Letters

Kim P Corfman , Marian Moore

The goal of this paper is to establish a research agenda that will lead to a stream of research that closes the gap between actual and normative strategic managerial decision making. We start by distinguishing strategic managerial decisions (choices) from other choices. Next, we propose a conceptual model of how managers make strategic decisions that is consistent with the observed gap between actual and normative decision making. This framework suggests a series of interesting issues, both descriptive and prescriptive in nature, about the strategic decision-making process that define our proposed research agenda.

Acta Univ. Agric. Silvic. Mendel. Brun. 2013, Volume 61, Issue 7, pp. 2117-2122.

The aim of the paper is to demonstrate the impact of heuristics, biases and psychological traps on decision making. Heuristics are unconscious routines people use to cope with the complexity inherent in most decision situations. They serve as mental shortcuts that help people simplify and structure the information encountered in the world. These heuristics can be quite useful in some situations, while in others they can lead to severe and systematic errors, based on significant deviations from the fundamental principles of statistics, probability and sound judgment. This paper focuses on illustrating the existence of the anchoring, availability, and representativeness heuristics, originally described by Tversky & Kahneman in the early 1970s. The anchoring heuristic is a tendency to focus on the initial information, estimate or perception (even a random or irrelevant number) as a starting point. People tend to give disproportionate weight to the initial information they receive. The availability heuristic explains why highly imaginable or vivid information has a disproportionate effect on people's decisions. The representativeness heuristic causes people to rely on highly specific scenarios, ignore base rates, draw conclusions based on small samples and neglect scope. These phenomena are illustrated and supported by evidence based on a statistical analysis of the results of a questionnaire.


Catalogue of Bias


Hypothetical bias

A distortion that arises when an individual’s stated behaviour or valuation differs from their real behaviour or valuation.


Hypothetical bias occurs when individuals report unrealistic behaviours or values to researchers in surveys or in experimental studies. In other words, what individuals say they would do hypothetically is not necessarily what they would do in reality.[1] This bias occurs in stated preference studies (individuals’ stated choices/valuations of goods/services), e.g. discrete choice experiments (DCEs), which are widely used across health sciences. Hypothetical bias impacts the validity of a study’s results. It is considered particularly prevalent in healthcare because there are many treatments and services that individuals may experience in the future or may not experience at all.

Hypothetical bias is thought to be linked to several factors, such as responses in stated preference settings being non-binding. As such, the implications of their responses for the individual are inconsequential (and respondents may not in fact agree with the policy implications of their own choices; see [2]). Moreover, the settings in which experiments or surveys are taken (e.g. online surveys) may be far removed from the settings in which the corresponding real-world behaviours occur (e.g. making decisions about treatment options in clinical settings). Lastly, respondents may respond strategically to surveys for a variety of reasons (e.g. reporting that they would use primary care services more often than they really would, if they believed a new service would be opened closer to them on the basis of this [strategic] response).[3]

Although hypothetical bias potentially arises in any stated preference study, its presence is difficult to detect. It is an issue that is commonly overlooked in health settings for a variety of reasons, such as having no real-world data to detect or correct for hypothetical bias.[4]

Buckell and Hess (2019) use an online DCE in the US tobacco market, and US tobacco market data, to show the presence of (and correct for) hypothetical bias.[5] Their findings suggest that hypothetical bias can affect the predicted market shares of tobacco products; that is, the predicted proportion(s) of smokers that purchase cigarettes or e-cigarettes appears to be distorted by hypothetical bias. Moreover, both the direction and magnitude of predictions of tobacco policy changes appear to be distorted by hypothetical bias.

Empirical evidence shows how hypothetical bias can impact on results of health-based stated preference studies:

  • Ozdemir et al. (2009) show that estimates of willingness to pay for treatment for rheumatoid arthritis are inflated by hypothetical bias. Respondents in the “cheap talk” arm (versus the control arm) reported much lower willingness-to-pay (WTP) for a four-week onset of treatment: $35 vs $255.[6]
  • Mark and Swait (2004) report differences between experimental and real-world preference estimates for physicians’ prescribing of alcohol treatments, where “the stated preference and revealed preference data do not yield identical preference estimates.” For example, estimates for efficacy were significantly lower for revealed preference (estimated parameter = 0.22; t-ratio = 2.00) than for stated preference (estimated parameter 0.46; t-ratio = 3.10).[7]
  • Quaife et al. (2018) demonstrate some discrepancies between predicted health behaviours (including treatments for sleep apnea, tuberculosis treatments, screening for Chlamydia, and preferences for pharmacy-based health checks) from DCEs and the corresponding actual health behaviours in the real world: “Pooled estimation suggests that the sensitivity of DCE predictions was relatively high (0.88, 95% CI 0.81, 0.92), whilst specificity was substantially lower (0.34, 95% CI 0.23, 0.46). These results suggest that DCEs can be moderately informative for predicting future behavior.”[8]

Many approaches are available to mitigate the impact of hypothetical bias. These are typically categorised as ex-ante approaches (i.e. implemented before reporting) or ex-post approaches (i.e. implemented after reporting) and are detailed below. It is worth noting that “it is likely that a number of factors affect hypothetical bias and therefore no single technique will be the magic bullet that eliminates this bias”.[9]

Ex-ante approaches:

  • Cheap talk [10]: instructing respondents that their responses are feeding into important research that may impact current clinical practice or policy. This approach aims to induce realistic behaviours by linking respondents’ responses to consequences (terms such as “consequentiality scripts” and “honesty pledges” have also been used to convey similar approaches).
  • Honesty priming [11]: a technique from psychology in which respondents are required, prior to the experimental task, to make sentences from scrambled words, where the words are associated with honesty, truthfulness, etc. Respondents are then said to be primed, meaning that they are subliminally encouraged to give truthful responses in the experimental tasks that follow (a minimal sketch of such a task generator follows this list).
  • Inferred valuation [12]: asking respondents to estimate others’, rather than their own, value of a good or service. This method removes an individual’s sense of agency in their valuation and as a consequence is thought to reduce self-related biases in valuations.
  • Incentive compatibility [13]: conditioning a reward (typically a financial reward), or the chance of a reward, on respondents’ choices. In this case, respondents’ choices are linked to a payoff, and hypothetical bias is said to be reduced.
  • Pivot designs [1]: embedding information on respondents’ own choices in the design of the experimental tasks to make the tasks more realistic and so to reduce hypothetical bias (see also “SP-off-RP” designs [14]).
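
To illustrate the flavour of an honesty-priming task, here is a minimal Python sketch of a scrambled-sentence item generator. The word lists, the five-words-pick-four format, and the function names are invented for illustration; real studies (e.g. De Magistris et al., listed under Further resources below) use carefully validated priming word sets.

```python
import random

random.seed(7)

# Assumed word lists -- actual priming word sets are study-specific
HONESTY_WORDS = ["honest", "truthful", "sincere", "genuine", "fair"]
FILLER_WORDS = ["the", "answer", "was", "clearly", "given", "today"]

def scrambled_items(n_items: int = 3):
    """Build scrambled-sentence items, each embedding one honesty word."""
    items = []
    for _ in range(n_items):
        words = random.sample(FILLER_WORDS, 4) + [random.choice(HONESTY_WORDS)]
        random.shuffle(words)  # the respondent must unscramble these
        items.append(words)
    return items

for words in scrambled_items():
    print(" / ".join(words), "->  form a sentence from four of these words")
```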

Ex-post approaches:

  • Certainty calibration [15]: asking respondents to indicate how certain they are that they would make their experimental choices in real-world settings. This information is then used to adjust models in analyses, termed calibration, so as to reduce hypothetical bias (a minimal code sketch follows this list).
  • Revealed preference calibration [10]: obtaining available market (i.e. real-world) data, in which individuals actually made choices, and adjusting – or calibrating – models using this data. Since uncalibrated models are based on experimental data, using real-world behaviour to make adjustments is thought to reduce hypothetical bias.
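
As promised above, here is a minimal sketch of certainty calibration in Python. The responses, the 0-10 certainty scale, and the cut-off of 8 are illustrative assumptions; the recoding rule shown (keep a hypothetical “yes” only when stated certainty is high, otherwise treat it as “no”) is one simple variant of the approach, while [15] describes model-based alternatives.

```python
# Ex-post certainty calibration: recode uncertain hypothetical "yes"
# answers to "no" before estimating uptake (all data are invented).

CUTOFF = 8  # assumed threshold on a 0-10 certainty scale

# (stated choice, stated certainty) pairs from a hypothetical task
responses = [
    ("yes", 10), ("yes", 6), ("no", 9), ("yes", 8),
    ("yes", 3), ("no", 5), ("yes", 9), ("yes", 7),
]

def calibrate(choice: str, certainty: int) -> str:
    """Keep a hypothetical 'yes' only when stated certainty is high."""
    return "no" if choice == "yes" and certainty < CUTOFF else choice

raw = sum(c == "yes" for c, _ in responses) / len(responses)
cal = sum(calibrate(c, s) == "yes" for c, s in responses) / len(responses)

print(f"raw stated uptake:     {raw:.2f}")  # 0.75
print(f"certainty-calibrated:  {cal:.2f}")  # 0.38
```

In practice the cut-off is itself a modelling choice, and analysts often report results across several thresholds rather than committing to one.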

Catalogue of Bias Collaboration, Buckell, J., Buchanan, J., Wordsworth, S., Becker, F., Morrell, L., Roope, L., Kaur, A., Abel, L. Hypothetical Bias. In: Catalogue of Bias. 2020.

Related biases

  • Ascertainment bias
  • Information bias
  • Selection bias
  • Observer bias
Further resources

  • Hensher, D. A., Rose, J. M., & Greene, W. (2015). Applied Choice Analysis. Cambridge: Cambridge University Press.
  • Shah, K. K., Tsuchiya, A., & Wailoo, A. J. (2018). Valuing health at the end of life: A review of stated preference studies in the social sciences literature. Social Science & Medicine, 204, 39-50.
  • Carson, R. T., & Groves, T. (2007). Incentive and informational properties of preference questions. Environmental and Resource Economics, 37(1), 181-210.
  • Lancsar, E., & Burge, P. (2014). Choice modelling research in health economics. In S. Hess & A. Daly (Eds.), Handbook of Choice Modelling. Cheltenham: Edward Elgar Publishing.
  • Buckell, J., & Sindelar, J. L. (2019). The impact of flavors, health risks, secondhand smoke and prices on young adults’ cigarette and e-cigarette choices: a discrete choice experiment. Addiction, 114(8), 1427-1435.
  • Özdemir, S., Johnson, F. R., & Hauber, A. B. (2009). Hypothetical bias, cheap talk, and stated willingness to pay for health care. Journal of Health Economics, 28(4), 894-901.
  • Mark, T. L., & Swait, J. (2004). Using stated preference and revealed preference modeling to evaluate prescribing decisions. Health Economics, 13(6), 563-573.
  • Quaife, M., Terris-Prestholt, F., Di Tanna, G. L., & Vickerman, P. (2018). How well do discrete choice experiments predict health choices? A systematic review and meta-analysis of external validity. The European Journal of Health Economics, 19(8), 1053-1066.
  • Murphy, J. J., Allen, P. G., Stevens, T. H., & Weatherhead, D. (2005). A meta-analysis of hypothetical bias in stated preference valuation. Environmental and Resource Economics, 30(3), 313-325.
  • Buckell, J., & Hess, S. (2019). Stubbing out hypothetical bias: improving tobacco market predictions by combining stated and revealed preference data. Journal of Health Economics, 65, 93-102.
  • De Magistris, T., Gracia, A., & Nayga, R. M., Jr. (2013). On the use of honesty priming tasks to mitigate hypothetical bias in choice experiments. American Journal of Agricultural Economics, 95(5), 1136-1154.
  • Lusk, J. L., & Norwood, F. B. (2009). An inferred valuation method. Land Economics, 85(3), 500-514.
  • Smith, V. L. (1982). Microeconomic systems as an experimental science. The American Economic Review, 72(5), 923-955.
  • Train, K. E., & Wilson, W. W. (2009). Monte Carlo analysis of SP-off-RP data. Journal of Choice Modelling, 2(1), 101-117.
  • Beck, M. J., Fifer, S., & Rose, J. M. (2016). Can you ever be certain? Reducing hypothetical bias in stated choice experiments via respondent reported choice certainty. Transportation Research Part B: Methodological, 89, 149-167.

PubMed feed

https://www.ncbi.nlm.nih.gov/pubmed/clinical/?term=%22hypothetical%20bias%22



Bias in research

By writing scientific articles we communicate science to colleagues and peers. In doing so, it is our responsibility to adhere to some basic principles, like transparency and accuracy. Authors, journal editors and reviewers need to be concerned about the quality of the work submitted for publication and ensure that only studies which have been designed, conducted and reported transparently, honestly and without any deviation from the truth are published. Any such trend or deviation from the truth in data collection, analysis, interpretation and publication is called bias. Bias in research can occur either intentionally or unintentionally. Bias causes false conclusions and is potentially misleading. Therefore, it is immoral and unethical to conduct biased research. Every scientist should thus be aware of all potential sources of bias and undertake all possible actions to reduce or minimize the deviation from the truth. This article describes some basic issues related to bias in research.

Introduction

Scientific papers are tools for communicating science to colleagues and peers. Every study needs to be designed, conducted and reported transparently, honestly and without any deviation from the truth. Research which is not compliant with those basic principles is misleading. Such studies create distorted impressions and false conclusions and thus can cause wrong medical decisions, harm to the patient, as well as substantial financial losses. This article provides insight into ways of recognizing sources of bias and avoiding bias in research.

Definition of bias

Bias is any trend or deviation from the truth in data collection, data analysis, interpretation and publication which can cause false conclusions. Bias can occur either intentionally or unintentionally (1). Intentionally introducing bias into research is immoral. Nevertheless, considering the possible consequences of biased research, it is almost equally irresponsible to conduct and publish biased research unintentionally.

It is worth pointing out that every study has its confounding variables and limitations. Confounding effects cannot be completely avoided. Every scientist should therefore be aware of all potential sources of bias and undertake all possible actions to reduce and minimize the deviation from the truth. If deviation is still present, authors should acknowledge it in their articles by declaring the known limitations of their work.

It is also the responsibility of editors and reviewers to detect any potential bias. If such bias exists, it is up to the editor to decide whether it has an important effect on the study's conclusions. If that is the case, such articles need to be rejected for publication, because their conclusions are not valid.

Bias in data collection

A population consists of all individuals with a characteristic of interest. Since studying an entire population is often impossible due to limited time and money, we usually study a phenomenon of interest in a representative sample. By doing this, we hope that what we have learned from the sample can be generalized to the entire population (2). To be able to do so, the sample needs to be representative of the population. If it is not, conclusions will not be generalizable, i.e. the study will lack external validity.

Sampling is thus a crucial step in every research project. While collecting data, there are numerous ways in which researchers can introduce bias into a study. If, for example, during patient recruitment some patients are more or less likely to enter the study than others, the sample will not be representative of the population in which the research is done. In that case, subjects who are less likely to enter the study will be under-represented, and those who are more likely to enter will be over-represented, relative to the general population to which the study's conclusions are to be applied. This is what we call selection bias. To ensure that a sample is representative of a population, sampling should be random, i.e. every subject needs to have an equal probability of being included in the study. It should be noted that sampling bias can also occur if the sample is too small to represent the target population (3).

For example, if the aim of a study were to assess the average hsCRP (high-sensitivity C-reactive protein) concentration in the healthy population of Croatia, the way to go would be to recruit healthy individuals from the general population during their regular annual health check-ups. A biased study, on the other hand, would be one which recruits only volunteer blood donors, because blood donors are usually individuals who feel healthy and are not suffering from any condition or illness which might cause changes in hsCRP concentration. By recruiting only healthy blood donors we might conclude that hsCRP is much lower than it really is. This is a kind of sampling bias which we call volunteer bias.
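
A toy simulation illustrates how volunteer sampling distorts such an estimate. Every number below is an invented assumption (a log-normal hsCRP distribution and a volunteering probability that falls as hsCRP rises); the point is only the comparison between the random and the volunteer sample.

```python
import random
from statistics import fmean

random.seed(42)

# Population: hsCRP (mg/L) drawn from a log-normal distribution
# (illustrative parameters, not real population data)
population = [random.lognormvariate(0.0, 0.8) for _ in range(100_000)]

# Unbiased design: simple random sample, equal inclusion probability
random_sample = random.sample(population, 500)

# Biased design: people who feel healthy (low hsCRP) volunteer more;
# the inclusion probability falls as hsCRP rises
volunteers = [x for x in population if random.random() < 1.0 / (1.0 + x)]
volunteer_sample = random.sample(volunteers, 500)

print(f"population mean hsCRP:  {fmean(population):.2f}")
print(f"random-sample mean:     {fmean(random_sample):.2f}")    # close to truth
print(f"volunteer-sample mean:  {fmean(volunteer_sample):.2f}") # biased low
```

The random sample tracks the population mean, while the volunteer sample systematically underestimates it, exactly the pattern described above.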

Another example of volunteer bias can occur when colleagues from a laboratory or clinical department are invited to participate in a study of a new marker for anemia. It is very likely that such a study would preferentially include participants who suspect they are anemic and are curious to learn the result of the new test. This way, anemic individuals might be over-represented. The research would then be biased and would not allow generalization of its conclusions to the rest of the population.

Generally speaking, whenever cross-sectional or case-control studies are done exclusively in hospital settings, there is a good chance that the study will be biased. This is called admission bias. The bias exists because the population studied does not reflect the general population.

Another example of sampling bias is so-called survivor bias, which usually occurs in cross-sectional studies. If a study aims to assess the association of altered KLK6 (human kallikrein-6) expression with the 10-year incidence of Alzheimer’s disease, subjects who died before the study end point might be missing from the study.

Misclassification bias is a kind of sampling bias which occurs when a disease of interest is poorly defined, when there is no gold standard for its diagnosis, or when the disease is not easily detectable. In such cases some subjects are falsely classified as cases or controls when they should have been in the other group. Let us say that a researcher wants to study the accuracy of a new test for the early detection of prostate cancer in asymptomatic men. In the absence of a reliable test for early prostate cancer detection, there is a chance that some early cancer cases would be misclassified as disease-free, causing under- or over-estimation of the accuracy of the new marker.
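
A short simulation makes the misclassification problem concrete. All numbers are invented for illustration: a 5% prevalence, 40% of cancers early-stage, and detection probabilities of 0.95 (late-stage) and 0.40 (early-stage) assumed for both the new test and the imperfect reference standard that defines who counts as a "case".

```python
import random

random.seed(1)

N, PREVALENCE = 100_000, 0.05        # assumed cohort size and prevalence
P_EARLY = 0.40                       # assumed share of early-stage cancers
SENS_LATE, SENS_EARLY = 0.95, 0.40   # assumed detection rates by stage

def detected(early: bool) -> bool:
    """One noisy detection attempt; early-stage cancers are often missed."""
    return random.random() < (SENS_EARLY if early else SENS_LATE)

# True cases, each flagged as early-stage or not
cases = [random.random() < P_EARLY
         for _ in range(N) if random.random() < PREVALENCE]

# True sensitivity of the new test, judged against perfect knowledge
true_sens = sum(detected(e) for e in cases) / len(cases)

# Study sample: only subjects positive on the (equally fallible) reference
# standard count as "cases" -- early cancers tend to drop out here
ref_cases = [e for e in cases if detected(e)]
measured_sens = sum(detected(e) for e in ref_cases) / len(ref_cases)

print(f"sensitivity vs the truth:            {true_sens:.2f}")      # ~0.73
print(f"sensitivity vs imperfect reference:  {measured_sens:.2f}")  # ~0.83
```

Because the reference standard misses the same early-stage cases the new test tends to miss, those hard cases silently drop out of the reference-defined case group, and the measured sensitivity overstates the true one.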

As a general rule, a research question needs to be considered with much attention, and all efforts should be made to ensure that the sample is matched as closely to the population as possible.

Bias in data analysis

A researcher can introduce bias in data analysis by analyzing data in a way which favors conclusions that support the research hypothesis. There are various ways in which bias can be introduced during data analysis, such as by fabricating, abusing or manipulating the data. Some examples are:

  • reporting non-existing data from experiments which were never done (data fabrication);
  • eliminating data which do not support your hypothesis (outliers, or even whole subgroups);
  • using inappropriate statistical tests to analyze your data;
  • performing multiple testing (“fishing for P”) by pair-wise comparisons (4), testing multiple endpoints, and performing secondary or subgroup analyses which were not part of the original plan, in order “to find” a statistically significant difference regardless of the hypothesis.

For example, if the study aim is to show that one biomarker is associated with another in a group of patients, and this association does not prove significant in the total cohort, researchers may start “torturing the data” by dividing the data into various subgroups until the association becomes statistically significant. If this sub-classification of the study population was not part of the original research hypothesis, such behavior is considered data manipulation and is neither acceptable nor ethical. Such studies quite often provide meaningless conclusions, such as:

  • CRP was statistically significant in a subgroup of women under 37 years with cholesterol concentration > 6.2 mmol/L;
  • lactate concentration was negatively associated with albumin concentration in a subgroup of male patients with a body mass index in the lowest quartile and a total leukocyte count below 4.00 × 10⁹/L.

Besides being biased, invalid and illogical, those conclusions are also useless, since they cannot be generalized to the entire population.

There is an often-quoted saying (attributed to Ronald Coase, but unpublished to the best of my knowledge): “If you torture the data long enough, it will confess to anything”. This means that there is a good chance that statistical significance will be reached simply by increasing the number of hypotheses tested. The question is then: is this significant difference real, or did it occur by pure chance?

Actually, it is well known that if 20 independent tests are performed on the same data set at α = 0.05, at least one Type I error is to be expected, since the expected number of false positives is 20 × 0.05 = 1. Therefore, the number of hypotheses to be tested in a study needs to be determined in advance. If multiple hypotheses are tested, a correction for multiple testing should be applied, or the study should be declared exploratory.
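
Framed as the chance of at least one false positive, 20 independent tests at α = 0.05 give 1 − 0.95^20 ≈ 0.64. The sketch below, using made-up pure-noise data and a crude normal-approximation p-value, simulates that family-wise error rate and prints the Bonferroni-corrected per-test threshold; all names and numbers are illustrative.

```python
# 20 significance tests on pure-noise data: no real effects exist, yet a
# "significant" result appears in most simulated studies.
import random
import statistics as st
from math import erf, sqrt

random.seed(0)

def two_sample_p(a, b):
    """Approximate two-sided two-sample z-test p-value (sketch only)."""
    se = sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    z = abs(st.mean(a) - st.mean(b)) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

ALPHA, TESTS, STUDIES = 0.05, 20, 2_000
studies_with_false_positive = 0
for _ in range(STUDIES):
    p_values = []
    for _ in range(TESTS):
        a = [random.gauss(0, 1) for _ in range(30)]  # group 1: noise only
        b = [random.gauss(0, 1) for _ in range(30)]  # group 2: same noise
        p_values.append(two_sample_p(a, b))
    studies_with_false_positive += any(p < ALPHA for p in p_values)

print(f"share of studies with >= 1 spurious finding: "
      f"{studies_with_false_positive / STUDIES:.2f}")            # ~0.64
print(f"Bonferroni-corrected per-test threshold: {ALPHA / TESTS:.4f}")
```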

Bias in data interpretation

When interpreting results, one needs to make sure that the proper statistical tests were used, that the results were presented correctly, and that data are interpreted only if the observed relationship is statistically significant (5). Otherwise, there may be bias in the research.

However, wishful thinking is not rare in scientific research. Some researchers believe so strongly in their original hypotheses that they tend to neglect the actual findings and interpret them in favor of their beliefs. Examples are:

  • discussing observed differences and associations even if they are not statistically significant (the often-used expression is “borderline significance”);
  • discussing differences which are statistically significant but are not clinically meaningful;
  • drawing conclusions about causality, even if the study was not designed as an experiment;
  • drawing conclusions about values outside the range of the observed data (extrapolation);
  • overgeneralizing the study's conclusions to the entire population, even if the study was confined to a population subset;
  • Type I errors (the expected effect is found significant when actually there is none) and Type II errors (the expected effect is not found significant when it is actually present) (6).

Even if this is done as an honest error or due to negligence, it is still considered serious misconduct.

Publication bias

Unfortunately, scientific journals are much more likely to accept for publication a study reporting positive findings than one reporting negative findings. Such behavior creates a false impression in the literature and can have long-term consequences for the entire scientific community. Moreover, if negative results were not so difficult to publish, other scientists would not unnecessarily waste their time and financial resources re-running the same experiments.

Journal editors bear the greatest responsibility for this phenomenon. Ideally, a study should have an equal opportunity to be published regardless of the nature of its findings, provided it is designed in a proper way, with valid scientific assumptions, well-conducted experiments, and adequate data analysis, presentation and conclusions. In reality, however, this is not the case. To enable publication of studies reporting negative findings, several journals have been launched, such as the Journal of Pharmaceutical Negative Results, the Journal of Negative Results in Biomedicine, and the Journal of Interesting Negative Results. The aim of such journals is to counterbalance the ever-increasing pressure in the scientific literature to publish only positive results.

It is our policy at Biochemia Medica to give equal consideration to submitted articles, regardless of the nature of their findings.

One sort of publication bias is so-called funding bias, which occurs when a prevailing number of studies related to the same scientific question are funded by the same company and support the interests of the sponsoring company. It is absolutely acceptable to receive funding from a company to perform research, as long as the study is run independently and is not influenced in any way by the sponsoring company, and as long as the funding source is declared as a potential conflict of interest to the journal editors, reviewers and readers.

It is the policy of our journal to demand such a declaration from the authors during submission and to publish it in the published article (7). By this we believe that the scientific community is given an opportunity to judge the presence of any potential bias in the published work.

There are many potential sources of bias in research. Bias in research can cause distorted results and wrong conclusions. Such studies can lead to unnecessary costs and wrong clinical practice, and they can eventually cause some kind of harm to the patient. It is therefore the responsibility of all stakeholders involved in scientific publishing to ensure that only valid and unbiased research, conducted in a highly professional and competent manner, is published (8).

Potential conflict of interest

None declared.


COMMENTS

  1. Confirmation Bias In Psychology: Definition & Examples


  2. Measuring and Controlling Bias for Some Bayesian Inferences and the

Thus, bias against can be controlled by the sample size n or by the diffuseness of the prior although, as subsequently shown, a diffuse prior induces bias in favor. It is also the case that (6) converges to 0 when μ₀ → ±∞ or when σ₀/nτ₀ is fixed and τ₀ → 0.

  3. Types of Bias in Research

    Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants. The main types of information bias are: Recall bias. Observer bias.

  4. Prior Hypothesis Bias

Prior hypothesis bias refers to the fact that decision makers who have strong prior beliefs about the relationship between two variables tend to make decisions on the basis of those beliefs, even when presented with evidence that their beliefs are wrong. Moreover, they tend to use and seek information that is ...

  5. Confirmation bias and methodology in social science: an editorial

    Walter R. Schumm. While science is presumably objective, scholars are humans, with subjective biases. Those biases can lead to distortions in how they develop and use scientific theory and how they apply their research methodologies. The numerous ways in which confirmation bias may influence attempts to accept or reject the null hypothesis are ...

  6. Effect of interpretive bias on research evidence

    Definitions of interpretation biases. Confirmation bias—evaluating evidence that supports one's preconceptions differently from evidence that challenges these convictions. Rescue bias—discounting data by finding selective faults in the experiment. Auxiliary hypothesis bias—introducing ad hoc modifications to imply that an unanticipated finding would have been otherwise had the ...

  7. Biases in research

1. Confirmation bias (or hypothesis myopia). This is one of the biases that concerns you as a researcher rather than the participants or your study. It means that, if a researcher has stated a hypothesis he or she believes is true, they may use responses that confirm the hypothesis and disregard evidence that would undermine it.

  8. Implicit and explicit learning of Bayesian priors differently ...

The mean bias for the positive prior trials was 0.540 and for the negative prior trials was 0.423, and this difference in biases was statistically significant (Fig. 3a: t = 2.68, p < 0.001). As in ...

  9. Definitions of Bias in Clinical Research

Anchoring Bias/Adjustment Bias: when an investigator either subconsciously or consciously adjusts the initial reference point so that the result may reach their estimated hypothesis. Data Dredging Bias: when investigators review the data for all possible associations without a prior hypothesis. This “shotgun approach” to analyses increases the ...

  10. Confirmation Bias: A Ubiquitous Phenomenon in Many Guises

Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts.

  11. Confirmation bias

    Confirmation bias (also confirmatory bias, myside bias, or congeniality bias) is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values. People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their ...

  12. A confirmation bias in perceptual decision-making due to hierarchical

The bias is due to the interaction of approximations with feedback of prior beliefs, such that LLO^f is biased towards LPO f−1, resulting in a confirmation bias. Importantly, this bias arises naturally in both the sampling-based and variational approximate inference algorithms that we study here, as a direct consequence of the approximate ...

  13. PDF Confirmation Bias in Complex Analyses

Wickens and Hollands (2000, p. 312) define the confirmation bias as a tendency “for people to seek information and cues that confirm the tentatively held hypothesis or belief, and not seek (or discount) those that support an opposite conclusion or belief.”

  14. (Pdf) Cognitive Biases and Strategic Decision Processes: an Integratwe

Schwenk (1984, 1985), for example, identifies 11 cognitive biases, including prior hypothesis bias, single outcome calculation, illusion of control, and so on. ... that managers involved in different decision processes exhibit different combinations of four basic types of cognitive bias. For example, managers in the avoidance mode are ...

  15. Cognitive biases as impediments to enhancing supply chain

    For example, in 2014, the bankruptcy of a key Apple supplier "caught the tech giant by surprise" (Reviewjournal.com, 2014). Suppliers' struggles are often hard to detect, and the prior hypothesis bias could undermine managers' ability to notice what should be telltale clues.

  16. Clinical reasoning in dire times. Analysis of cognitive biases in

    Confirmation bias (i.e. to look only for symptoms or signs that may confirm a diagnostic hypothesis) was present in case 5 where doctors appeared to interpret clinical findings only to support a previous diagnostic hypothesis (verbatim: "Blood smears identified Plasmodium falciparum and she was started on IV artesunate […]

  17. Turning biases into hypotheses through method: A logic of scientific

A hypothesis in ML is required (1) to explain the relationship between input and output and (2) to be implementable either as a bias in the training data or as an inductive bias. In other words, a Peircean conception of hypotheses makes clear the interpretation of the relationship between input and output, and how this interpretation leads to ...

  18. (PDF) Practical Examples of three types of cognitive bias

1. Prior hypothesis 2. Availability 3. Confirmation 4. ... It's this preferential mode of behaviour that leads to the confirmation bias. ... Another example, ...

  19. Hypothetical bias


  20. Confirmation bias emerges from an approximation to ...

In the case of attitude polarisation, we also include a strong prior belief against the central hypothesis, P(H) = 0.2. And in the case of belief perseverance we start with a neutral prior on the central hypothesis, P(H = 1) = 0.5. In all the examples, the simulated individual receives multiple datums sequentially from either a single ...

  21. Protecting against researcher bias in secondary data analysis

    Proposed solutions include approaches to (1) address bias linked to prior knowledge of the data, (2) enable pre-registration of non-hypothesis-driven research, (3) help ensure that pre-registered analyses will be appropriate for the data, and (4) address difficulties arising from reduced analytic flexibility in pre-registration. ... For example ...

  22. Chapter 1: Strategic Leadership Flashcards

a bias rooted in the tendency to generalize from a small sample or even a single vivid anecdote. Reasoning by analogy. ... Prior hypothesis bias: a cognitive bias that occurs when decision makers who have strong prior beliefs tend to make decisions on the basis of these beliefs, even when presented with evidence that their beliefs are wrong ...

  23. Bias in research
