
Understanding the Purpose of a Qualitative Study: Methods and Examples

In a data-driven world that seems to worship the quantitative, it’s easy to overlook the profound contributions of qualitative studies. Yet qualitative research, with its focus on exploring subjective human experiences, plays an indispensable role in our understanding of markets.

In this blog post, we unravel the true purpose of a qualitative study, examining its various methods and citing real-world examples. Discover how this intricate blend of art and science can provide unparalleled views into consumer behavior, helping your brand connect with audiences on a deeply personal level. Let’s dive in and explore the depths from which real insights emerge.

Understanding Qualitative Research

Qualitative research is a methodological approach used to gain a deep understanding of phenomena or social issues through exploring participants’ experiences, behaviors, and perspectives in a specific context. It aims to provide rich descriptions and explanations of processes within identifiable local contexts. Unlike quantitative research, which focuses on numerical data analysis, qualitative research delves into the reasons behind people’s thoughts and feelings, complementing quantitative methods.

Imagine that you are researching patient satisfaction in healthcare settings. You might utilize qualitative research methods to capture how patients and healthcare professionals feel about care in a social, clinical, or interpersonal context. Through techniques such as interviews, focus groups, or observations, you can uncover valuable insights that provide a more nuanced understanding of patient experiences.

Qualitative vs Quantitative Research

When discussing research methodologies, it is important to distinguish between qualitative and quantitative approaches. While both aim to gather information, there are key differences in their objectives and methodologies.

Quantitative research focuses on collecting numerical data that can be analyzed statistically to identify patterns and generalize findings to a larger population or phenomenon. This type of research is concerned with measuring variables and testing hypotheses using tools like surveys or experiments. It aims to establish cause-and-effect relationships and draw conclusions based on statistical evidence.

In contrast, qualitative research collects non-numerical data to understand social life through targeted populations or places. It is framed in opposition to quantitative research, which uses numerical data for large-scale trends and causal relationships. Qualitative researchers use methods such as observation, interviews, open-ended surveys, focus groups, content analysis, or oral history to investigate the meanings that people attribute to their behavior and interactions. This approach provides an in-depth understanding of attitudes, behaviors, interactions, events, and social processes.

For instance, let’s consider a study on the experiences of individuals living with chronic pain. Through qualitative research methods like in-depth interviews or participant observation, researchers can uncover the lived experiences of individuals, exploring how pain impacts their daily lives, relationships, and overall well-being. This qualitative approach enables researchers to capture rich and contextual insights that quantitative methods alone cannot provide.

While quantitative research seeks to establish generalizability through statistical analysis, qualitative research delves into the subjective meanings and interpretations that individuals ascribe to their experiences. These two approaches complement each other in providing a more comprehensive understanding of complex phenomena.

  • According to a 2020 review published in Nature, over 80% of social science research employs some form of qualitative method.
  • A 2019 study found that around 75% of health-related publications using qualitative methods did so to explore patient experiences and perceptions.
  • Another survey suggests that approximately 68% of qualitative research is used to develop hypotheses and theories in areas where available data are scant or nonexistent.

Conducting a Qualitative Study

Conducting a qualitative study requires methodological rigor and careful planning. Researchers must consider the appropriate approach, methods, and procedures to align with their research questions, objectives, and potential biases. Different methodologies can be employed based on the nature of the study.

One widely used methodology is ethnography, which involves observing participants in their natural environments over an extended period of time. This method allows researchers to understand how environmental constraints and context shape behaviors and outcomes.

Another approach is grounded theory, which suggests that theory should emerge from data rather than being driven by pre-existing hypotheses or theories. Grounded theory is particularly useful when little is known about a problem or a specific context.

Phenomenology aims to understand problems and situations through individuals’ subjective experiences. It explores how people make sense of their world and give meaning to their everyday lives.

Throughout the process of conducting qualitative research, it is crucial for researchers to practice reflexivity. Reflexivity acknowledges the researcher’s subjectivity and biases. Being transparent about one’s background and interests enables readers to draw their own conclusions about interpretations.

Now that we understand the key objectives of qualitative research and the importance of conducting it with methodological rigor, let’s delve into the foundational step of setting research questions.

Setting Research Questions

In any qualitative study, the starting point is setting research questions. These questions serve as a guide to uncovering insights and an in-depth understanding of the phenomenon under investigation. Rather than focusing on numerical data analysis like quantitative research, qualitative studies aim to explore the reasons behind people’s thoughts, feelings, and behaviors. The research questions bring clarity to what researchers seek to learn from participants. They also help ensure that the study remains focused and aligned with the objectives.

Let’s consider an example of a qualitative study exploring patients’ experiences with telehealth during the COVID-19 pandemic. The research questions could revolve around understanding how patients perceive the effectiveness of telehealth, uncovering barriers they face in accessing care, and exploring their satisfaction levels with virtual healthcare encounters.

By setting clear research questions at the outset, researchers can align their objectives with the topic of interest and design a robust methodology to collect relevant data.

Approach to Research Methodology

The methodology used in qualitative research plays a crucial role in shaping how data will be collected, analyzed, and interpreted. Scholars employ various approaches based on the nature of their research questions and desired outcomes. Some common methodologies include ethnography, grounded theory, and phenomenology.

Ethnography involves immersing oneself within a particular group or community to observe participants’ behavior in natural environments over an extended period of time. This approach provides insights into how environmental constraints and context influence behaviors and outcomes.

Grounded theory is valuable when little is known about a problem or context. It emphasizes deriving theories from data rather than relying on preconceived hypotheses or theories. This iterative process helps build a comprehensive understanding of the phenomenon being studied.

Phenomenology aims to understand problems and situations from the perspective of common understanding and subjective experience. It explores how individuals experience and give meaning to their world.

For instance, if researchers wanted to explore the experiences of nurses dealing with workplace burnout, they might choose a phenomenological approach. Through in-depth interviews and reflection on personal experiences, researchers can gain insights into the lived experiences of nurses coping with burnout and the meaning they attach to it.

By carefully selecting the appropriate research methodology, researchers lay a strong foundation for analyzing qualitative data effectively.

Analyzing Qualitative Data

Once the data from a qualitative study has been collected, the next step is to analyze it. Unlike quantitative research, where statistical analysis is employed, qualitative data analysis focuses on interpreting and making sense of the rich and diverse information gathered. This process involves several key steps.

Firstly, researchers must thoroughly familiarize themselves with the data by reading and re-reading transcripts, field notes, or other sources. This immersion allows them to identify recurring themes, patterns, or codes that emerge from the data. These themes may be based on specific words or phrases used by participants or broader concepts that arise organically.

Researchers then engage in coding, which involves systematically organizing and categorizing data based on identified themes or concepts. Codes act as labels that help in classifying different aspects of the data and enable researchers to compare and contrast findings across different sources.

Next, researchers engage in the process of data reduction, where they condense and summarize the vast amount of qualitative data collected. This often involves creating charts or matrices to visualize the relationships between themes or categories. Additionally, researchers may create narrative summaries or rich descriptions to communicate the essence of participants’ experiences effectively.

Finally, interpretations are made based on the analyzed data. These interpretations involve generating explanations or theories that offer insights into participants’ perspectives, behaviors, and interactions within their social context. Researchers need to remain reflexive throughout this process, acknowledging their own biases and subjectivity that may influence their interpretations.
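
To make the coding, reduction, and comparison steps concrete, here is a minimal sketch in Python of how coded excerpts might be tallied into a code-by-source matrix. The sources, code labels, excerpts, and the use of pandas are illustrative assumptions, not data or tooling from any particular study.

    # Illustrative only: tally qualitative codes into a code-by-source matrix.
    import pandas as pd

    # Assume each coded excerpt has been recorded as (source, code, excerpt).
    coded_excerpts = [
        ("Interview 1", "access barriers", "I couldn't get an appointment for weeks."),
        ("Interview 1", "trust in clinicians", "My doctor really listens to me."),
        ("Interview 2", "access barriers", "The telehealth portal kept logging me out."),
        ("Focus group A", "trust in clinicians", "They explained every step of the treatment."),
    ]
    df = pd.DataFrame(coded_excerpts, columns=["source", "code", "excerpt"])

    # Data reduction: a matrix showing how often each code appears in each source.
    print(pd.crosstab(df["source"], df["code"]))

    # Pull every excerpt for one code to support a narrative summary of that theme.
    print(df.loc[df["code"] == "access barriers", "excerpt"].tolist())

A matrix like this only supports the interpretive work described above; it does not replace it.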

With Discuss.io, you can:

  • Understand why customers buy (or don’t): Go beyond demographics to discover their hopes, fears, and pain points.
  • Fuel innovation: Design products and services that resonate deeply with your target audience.
  • Craft compelling marketing messages: Speak directly to your customers’ emotions and desires.
  • Improve customer experiences: Identify areas for improvement and build stronger relationships.

Are you tired of staring at cold data, longing to understand the human truth behind the numbers? Discuss.io’s innovative qualitative research platform empowers you to go beyond the surface and delve into the rich tapestry of customer thoughts, feelings, and motivations. Schedule a demo and see how Discuss.io can transform your research process.



4.2 Causality

Learning Objectives

  • Define and provide an example of idiographic and nomothetic causal explanations
  • Describe the role of causality in quantitative research as compared to qualitative research
  • Identify, define, and describe each of the main criteria for nomothetic causal explanations
  • Describe the difference between and provide examples of independent, dependent, and control variables
  • Define hypothesis, be able to state a clear hypothesis, and discuss the respective roles of quantitative and qualitative research when it comes to hypotheses

Most social scientific studies attempt to provide some kind of causal explanation.  In other words, it is about cause and effect. A study on an intervention to prevent child abuse is trying to draw a connection between the intervention and changes in child abuse. Causality refers to the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief.  It seems simple, but you may be surprised to learn there is more than one way to explain how one thing causes another. How can that be? How could there be many ways to understand causality?


Think back to our chapter on paradigms, which are analytic lenses composed of assumptions about the world. You’ll remember the positivist paradigm as the one that believes in objectivity and the social constructionist paradigm as the one that believes in subjectivity. Both paradigms are correct, though incomplete, viewpoints on the social world and social science.

A researcher operating in the social constructionist paradigm would view truth as subjective. For causality, that means that in order to understand what caused what, we would need to report what people tell us. That seems pretty straightforward, right? But what if two different people saw the same event from the exact same viewpoint and came up with two totally different explanations about what caused what? A social constructionist might say that both people are correct. There is not one singular truth that is true for everyone, but many truths created and shared by people.

When social constructionists engage in science, they are trying to establish one type of causality: idiographic causality. The word idiographic comes from the root word “idio,” which means peculiar to one, personal, and distinct. An idiographic causal explanation means that you will attempt to explain or describe your phenomenon exhaustively, based on the subjective understandings of your participants. Idiographic causal explanations are intended to explain one particular context or phenomenon. These explanations are bound up with the narratives people create about their lives and experiences, and they are embedded in a cultural, historical, and environmental context. Idiographic causal explanations are so powerful because they convey a deep understanding of a phenomenon and its context. From a social constructionist perspective, the truth is messy. Idiographic research involves finding patterns and themes in the causal explanations established by your research participants.

If that doesn’t sound like what you normally think of as “science,” you’re not alone. Although the ideas behind idiographic research are quite old in philosophy, they were only applied to the sciences at the start of the last century. Famous Western scientists like Newton and Darwin never saw truth as subjective. They operated with the understanding that there were objectively true laws of science that were applicable in all situations. In their time, another paradigm, the positivist paradigm, was dominant, and it continues its dominance today. When positivists try to establish causality, they are like Newton and Darwin, trying to come up with a broad, sweeping explanation that is universally true for all people. This is the hallmark of a nomothetic causal explanation. The word nomothetic is derived from the root word “nomo,” which means related to a law or lawmaking, and “thetic,” which means something that establishes. Put the root words together and it means something that establishes a law, or in our case, a universal explanation.

Nomothetic causal explanations are incredibly powerful. They allow scientists to make predictions about what will happen in the future, with a certain margin of error. Moreover, they allow scientists to generalize —that is, make claims about a large population based on a smaller sample of people or items. Generalizing is important. We clearly do not have time to ask everyone their opinion on a topic, nor do we have the ability to look at every interaction in the social world. We need a type of causal explanation that helps us predict and estimate truth in all situations.

If these still seem like obscure philosophy terms, let’s consider an example. Imagine you are working for a community-based non-profit agency serving people with disabilities. You are putting together a report to help lobby the state government for additional funding for community support programs, and you need to support your argument for additional funding at your agency. If you looked at nomothetic research, you might learn how previous studies have shown that, in general, community-based programs like yours are linked with better health and employment outcomes for people with disabilities. Nomothetic research seeks to explain that community-based programs are better for everyone with disabilities. If you looked at idiographic research, you would get stories and experiences of people in community-based programs. These individual stories are full of detail about the lived experience of being in a community-based program. Using idiographic research, you can understand what it’s like to be a person with a disability and then communicate that to the state government. For example, a person might say “I feel at home when I’m at this agency because they treat me like a family member” or “this is the agency that helped me get my first paycheck.”

Neither kind of causal explanation is better than the other. A decision to conduct idiographic research means that you will attempt to explain or describe your phenomenon exhaustively, attending to cultural context and subjective interpretations. A decision to conduct nomothetic research, on the other hand, means that you will try to explain what is true for everyone and predict what will be true in the future. In short, idiographic explanations have greater depth, and nomothetic explanations have greater breadth. More importantly, social workers understand the value of both approaches to understanding the social world. A social worker helping a client with substance abuse issues seeks idiographic knowledge when they ask about that client’s life story, investigate their unique physical environment, or probe how they understand their addiction. At the same time, a social worker also uses nomothetic knowledge to guide their interventions. Nomothetic research may help guide them to minimize risk factors and maximize protective factors or use an evidence-based therapy, relying on knowledge about what in general helps people with substance abuse issues.


Nomothetic causal explanations

If you are trying to generalize about causality, or create a nomothetic causal explanation, then the rest of these statements are likely to be true: you will use quantitative methods, reason deductively, and engage in explanatory research. How can we make that prediction? Let’s take it part by part.

Because nomothetic causal explanations try to generalize, they must be able to reduce phenomena to a universal language, mathematics. Mathematics allows us to precisely measure, in universal terms, phenomena in the social world. Because explanatory researchers want a clean “x causes y” explanation, they need to use the universal language of mathematics to achieve their goal. That’s why nomothetic causal explanations use quantitative methods.  It’s helpful to note that not all quantitative studies are explanatory. For example, a descriptive study could reveal the number of people without homes in your county, though it won’t tell you why they are homeless. But nearly all explanatory studies are quantitative.

What we’ve been talking about here is an association between variables. When one variable precedes or predicts another, we have what researchers call independent and dependent variables. Two variables can be associated without having a causal relationship. However, when certain conditions are met (which we describe later in this chapter), the independent variable is considered a “cause” of the dependent variable. Consider, for example, a study of whether spanking causes aggressive behavior in children: spanking would be the independent variable and aggressive behavior the dependent variable. In causal explanations, the independent variable is the cause, and the dependent variable is the effect. Dependent variables depend on independent variables. If all of that gets confusing, just remember this graphical depiction:

[Figure: the letters IV on the left with an arrow pointing toward DV]

The strength of the association between the independent variable and dependent variable is another important factor to take into consideration when attempting to make causal claims when your research approach is nomothetic. In this context, strength refers to statistical significance. When the association between two variables is shown to be statistically significant, we can have greater confidence that the data from our sample reflect a true association between those variables in the target population. Statistical significance is usually represented as the p-value. Generally, a p-value of .05 or less indicates that the association between the two variables is statistically significant.
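
As a minimal illustration of covariation, direction, and the p-value, the sketch below computes a correlation on a handful of fabricated age and support values; the numbers and the use of SciPy are assumptions for illustration only, not real survey results.

    # Illustrative only: fabricated data, not real survey results.
    from scipy import stats

    age     = [22, 28, 35, 41, 50, 58, 64, 71]
    support = [90, 85, 80, 72, 60, 55, 48, 40]  # e.g., % support for legalization

    r, p_value = stats.pearsonr(age, support)
    print(f"correlation r = {r:.2f}, p-value = {p_value:.4f}")

    # A negative r means the two variables move in opposite directions (a negative
    # association); a p-value below .05 is the conventional threshold for calling
    # the association statistically significant.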

A hypothesis is a statement describing a researcher’s expectation regarding the research findings. Hypotheses in quantitative research are nomothetic causal explanations that the researcher expects to demonstrate. Hypotheses are written to describe the expected association between the independent and dependent variables. Your prediction should be taken from a theory or model of the social world. For example, you may hypothesize that treating clinical clients with warmth and positive regard is likely to help them achieve their therapeutic goals. That hypothesis draws on the humanistic theory of Carl Rogers. Using previous theories to generate hypotheses is an example of deductive research. If Rogers’ theory of unconditional positive regard is accurate, your hypothesis should be true.

Let’s consider a couple of examples. In research on sexual harassment (Uggen & Blackstone, 2004), one might hypothesize, based on feminist theories of sexual harassment, that more females than males will experience specific sexually harassing behaviors. What is the causal explanation being predicted here? Which is the independent and which is the dependent variable? In this case, we hypothesized that a person’s gender (independent variable) would predict their likelihood to experience sexual harassment (dependent variable).

Sometimes researchers will hypothesize that an association will take a specific direction. As a result, an increase or decrease in one area might be said to cause an increase or decrease in another. For example, you might choose to study the association between age and support for legalization of marijuana. Perhaps you’ve taken a sociology class and, based on the theories you’ve read, you hypothesize that age is negatively related to support for marijuana legalization. In fact, there are empirical data that support this hypothesis. Gallup has conducted research on this very question since the 1960s (Carroll, 2005). What have you just hypothesized? You have hypothesized that as people get older, the likelihood of their supporting marijuana legalization decreases. Thus, as age (your independent variable) moves in one direction (up), support for marijuana legalization (your dependent variable) moves in another direction (down). So, positive associations involve two variables going in the same direction and negative associations involve two variables going in opposite directions. If writing hypotheses feels tricky, it is sometimes helpful to draw them out and depict each of the two hypotheses we have just discussed.

[Figure: sex (IV) on the left with an arrow pointing toward sexual harassment (DV)]

It’s important to note that once a study starts, it is unethical to change your hypothesis to match the data that you found. For example, what happens if you conduct a study to test the hypothesis from Figure 4.3 on support for marijuana legalization, but you find no association between age and support for legalization? It means that your hypothesis was wrong, but that’s still valuable information. It would challenge what the existing literature says on your topic, demonstrating that more research needs to be done to figure out the factors that impact support for marijuana legalization. Don’t be embarrassed by negative results, and definitely don’t change your hypothesis to make it appear correct all along!

Establishing causality in nomothetic research

Let’s say you conduct your study and you find evidence that supports your hypothesis: as age increases, support for marijuana legalization decreases. Success! Causal explanation complete, right? Not quite. You’ve only established one of the criteria for causality. The main criteria for causality have to do with covariation, plausibility, temporality, and spuriousness. In our example from Figure 4.3, we have established only one criterion, covariation. When variables covary, they vary together. Both age and support for marijuana legalization vary in our study. Our sample contains people of varying ages and varying levels of support for marijuana legalization, and they vary together in a patterned way: when age increases, support for legalization decreases.

Just because there might be some correlation between two variables does not mean that a causal explanation between the two is really plausible. Plausibility means that in order to make the claim that one event, behavior, or belief causes another, the claim has to make sense. It makes sense that people from previous generations would have different attitudes towards marijuana than younger generations. People who grew up in the time of Reefer Madness or the hippies may hold different views than those raised in an era of legalized medicinal and recreational use of marijuana.

Once we’ve established that there is a plausible association between the two variables, we also need to establish that the cause happened before the effect, the criterion of temporality . A person’s age is a quality that appears long before any opinions on drug policy, so temporally the cause comes before the effect. It wouldn’t make any sense to say that support for marijuana legalization makes a person’s age increase. Even if you could predict someone’s age based on their support for marijuana legalization, you couldn’t say someone’s age was caused by their support for legalization.

Finally, scientists must establish nonspuriousness. A spurious association is one in which an association between two variables appears to be causal but can in fact be explained by some third variable. For example, we could point to the fact that older cohorts are less likely to have used marijuana. Maybe it is actually use of marijuana that leads people to be more open to legalization, not their age. This is often referred to as the third variable problem, where a seemingly true causal explanation is actually caused by a third variable not in the hypothesis. In this example, the association between age and support for legalization could be more about having tried marijuana than the age of the person.

Quantitative researchers are sensitive to the effects of potentially spurious associations; charges of spuriousness are an important form of critique of scientific work. As a result, researchers will often measure these third variables in their study so they can control for their effects. These are called control variables, and they refer to variables whose effects are controlled for mathematically in the data analysis process. Control variables can be a bit confusing, but think about it as an argument between you, the researcher, and a critic.

Researcher: “The older a person is, the less likely they are to support marijuana legalization.” Critic: “Actually, it’s more about whether a person has used marijuana before. That is what truly determines whether someone supports marijuana legalization.” Researcher: “Well, I measured previous marijuana use in my study and mathematically controlled for its effects in my analysis. The association between age and support for marijuana legalization is still statistically significant and is the most important association here.”
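
To show what “mathematically controlled for its effects” can look like in practice, here is a minimal sketch using an ordinary least squares regression on fabricated data; the variable names, the simulated relationships, and the use of statsmodels are assumptions for illustration, not the textbook’s own analysis.

    # Illustrative only: fabricated data showing the mechanics of a control variable.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    age = rng.integers(18, 80, n)
    # Older respondents are less likely to have used marijuana in this fake data.
    used_marijuana = (rng.random(n) < np.clip(1.2 - age / 80, 0.05, 0.95)).astype(int)
    support = 80 - 0.4 * age + 10 * used_marijuana + rng.normal(0, 8, n)

    df = pd.DataFrame({"age": age, "used_marijuana": used_marijuana, "support": support})

    # Including used_marijuana as a control holds its effect constant mathematically
    # while estimating the association between age and support.
    model = smf.ols("support ~ age + used_marijuana", data=df).fit()
    print(model.summary())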

Let’s consider a few additional, real-world examples of spuriousness. Did you know, for example, that high rates of ice cream sales have been shown to cause drowning? Of course, that’s not really true, but there is a positive association between the two. In this case, the third variable that causes both high ice cream sales and increased deaths by drowning is time of year, as the summer season sees increases in both (Babbie, 2010). Here’s another good one: it is true that as the salaries of Presbyterian ministers in Massachusetts rise, so too does the price of rum in Havana, Cuba. Well, duh, you might be saying to yourself. Everyone knows how much ministers in Massachusetts love their rum, right? Not so fast. Both salaries and rum prices have increased, true, but so has the price of just about everything else (Huff & Geis, 1993).

Finally, research shows that the more firefighters present at a fire, the more damage is done at the scene. What this statement leaves out, of course, is that as the size of a fire increases so too does the amount of damage caused as does the number of firefighters called on to help (Frankfort-Nachmias & Leon-Guerrero, 2011). In each of these examples, it is the presence of a third variable that explains the apparent association between the two original variables.

In sum, the following criteria must be met for a correlation to be considered causal:

  • The two variables must vary together.
  • The association must be plausible.
  • The cause must precede the effect in time.
  • The association must be nonspurious (not due to a third variable).

Once these criteria are met, there is a nomothetic causal explanation, one that is objectively true. However, this is difficult for researchers to achieve. You will almost never hear researchers say that they have proven their hypotheses. A statement that bold implies that an association has been shown to exist with absolute certainty and that there is no chance that there are conditions under which the hypothesis would not be true. Instead, researchers tend to say that their hypotheses have been supported (or not). This more cautious way of discussing findings allows for the possibility that new evidence or new ways of examining an association will be discovered. Researchers may also discuss a null hypothesis. The null hypothesis is one that predicts no association between the variables being studied. If a researcher rejects the null hypothesis, she is saying that the variables in question are likely related to one another.

Idiographic causal explanations

If you are not trying to generalize, but instead are trying to establish an idiographic causal explanation, then you are likely going to use qualitative methods, reason inductively, and engage in exploratory or descriptive research. We can understand these assumptions by walking through them, one by one.

Researchers seeking idiographic causal explanation are not trying to generalize, so they have no need to reduce phenomena to mathematics. In fact, using the language of mathematics to reduce the social world down is a bad thing, as it robs the causality of its meaning and context. Idiographic causal explanations are bound within people’s stories and interpretations. Usually, these are expressed through words. Not all qualitative studies analyze words, as some can use interpretations of visual or performance art, but the vast majority of social science studies do.


But wait, we predicted that an idiographic causal explanation would use descriptive or exploratory research. How can we build causality if we are just describing or exploring a topic? Wouldn’t we need to do explanatory research to build any kind of causal explanation? To clarify, explanatory research attempts to establish nomothetic causal explanations: an independent variable is demonstrated to cause changes in a dependent variable. Exploratory and descriptive qualitative research are actually descriptions of the causal explanations established by the participants in your study. Instead of saying “x causes y,” your participants will describe their experiences with “x,” which they will tell you was caused by, and in turn influenced, a variety of other factors, depending on time, environment, and subjective experience. As stated before, idiographic causal explanations are messy. The job of a social science researcher is to accurately identify patterns in what participants describe.

Let’s consider an example. What would you say if you were asked why you decided to become a social worker? If we interviewed many social workers about their decisions to become social workers, we might begin to notice patterns. We might find that many social workers begin their careers based on a variety of factors, such as personal experience with a disability or social injustice, positive experiences with social workers, or a desire to help others. No one factor is the “most important factor,” as there would be with a nomothetic causal explanation. Instead, a complex web of factors, contingent on context, emerges in the dataset when you interpret what people have said.

Finding patterns in data, as you’ll remember from Chapter 2, is what inductive reasoning is all about. A qualitative researcher collects data, usually words, and notices patterns. Those patterns inform the theories we use in social work. In many ways, the idiographic causal explanations created in qualitative research are like the social theories we reviewed in Chapter 2 and other theories you use in your practice and theory courses. Theories are explanations about how different concepts are associated with each other and how that network of associations works in the real world. While you can think of theories like Systems Theory as Theory (with a capital “T”), inductive causality is like theory with a small “t.” It may apply only to the participants, environment, and moment in time in which the data were gathered. Nevertheless, it contributes important information to the body of knowledge on the topic studied.

Unlike nomothetic causal explanations, there are no formal criteria (e.g., covariation) for establishing causality in idiographic causal explanations. In fact, some criteria like temporality and nonspuriousness may be violated. For example, if an adolescent client says, “It’s hard for me to tell whether my depression began before my drinking, but both got worse when I was expelled from my first high school,” they are recognizing that oftentimes it’s not so simple that one thing causes another. Sometimes, there is a reciprocal association in which one variable (depression) impacts another (alcohol abuse), which then feeds back into the first variable (depression) and also into other variables (school). Other criteria, such as covariation and plausibility, still make sense, as the associations you highlight as part of your idiographic causal explanation should still be plausibly true and its elements should vary together.

Similarly, idiographic causal explanations differ in terms of hypotheses. If you recall from the last section, hypotheses in nomothetic causal explanations are testable predictions based on previous theory. In idiographic research, instead of predicting that “x will decrease y,” researchers will use previous literature to figure out which concepts might be important to participants and how they believe participants might respond during the study. Based on an analysis of the literature, a researcher may formulate a few tentative hypotheses about what they expect to find in their qualitative study. Unlike nomothetic hypotheses, these are likely to change during the research process. As the researcher learns more from their participants, the hypotheses may shift to include new concepts that participants raise. Because the participants are the experts in idiographic causal explanation, a researcher should be open to emerging topics and shift their research questions and hypotheses accordingly.

Complementary approaches to causality

Over time, as more qualitative studies are done and patterns emerge across different studies and locations, more sophisticated theories emerge that explain phenomena across multiple contexts. In this way, qualitative researchers use idiographic causal explanations for theory building, or the creation of new theories based on inductive reasoning. Quantitative researchers, on the other hand, use nomothetic causal explanations for theory testing, wherein a hypothesis is created from existing theory (big T or small t) and tested mathematically (i.e., deductive reasoning). Once a theory is developed from qualitative data, a quantitative researcher can seek to test that theory. In this way, qualitatively-derived theory can inspire a hypothesis for a quantitative research project.

Two different baskets

Idiographic and nomothetic causal explanations form the “two baskets” of research design elements pictured in Figure 4.4 below. Later on, they will also determine the sampling approach, measures, and data analysis in your study.

[Figure 4.4: two baskets of research, one with idiographic research components and the other with nomothetic research components]

In most cases, mixing components from one basket with the other would not make sense. If you are using quantitative methods with an idiographic question, you wouldn’t get the deep understanding you need to answer an idiographic question. Knowing, for example, that someone scores 20/35 on a numerical index of depression symptoms does not tell you what depression means to that person. Similarly, qualitative methods are not often used for deductive reasoning, because qualitative methods usually seek to understand a participant’s perspective rather than test what existing theory says about a concept.

However, these are not hard-and-fast rules. There are plenty of qualitative studies that attempt to test a theory. There are fewer social constructionist studies with quantitative methods, though studies will sometimes include quantitative information about participants. Researchers in the critical paradigm can fit into either bucket, depending on their research question, as they focus on the liberation of people from oppressive internal (subjective) or external (objective) forces.

We will explore later on in this chapter how researchers can use both buckets simultaneously in mixed methods research. For now, it’s important that you understand the logic that connects the ideas in each bucket. Not only is this fundamental to how knowledge is created and tested in social work, it speaks to the very assumptions and foundations upon which all theories of the social world are built!

Key Takeaways

  • Idiographic research focuses on subjectivity, context, and meaning.
  • Nomothetic research focuses on objectivity, prediction, and generalizing.
  • In qualitative studies, the goal is generally to understand the multitude of causes that account for the specific instance the researcher is investigating.
  • In quantitative studies, the goal may be to understand the more general causes of some phenomenon rather than the idiosyncrasies of one particular instance.
  • For nomothetic causal explanations, an association must be plausible and nonspurious, and the cause must precede the effect in time.
  • In a nomothetic causal explanation, the independent variable causes changes in a dependent variable.
  • Hypotheses are statements, drawn from theory, which describe a researcher’s expectation about an association between two or more variables.
  • Qualitative research may create theories that can be tested quantitatively.
  • The choice of idiographic or nomothetic causal explanation requires a consideration of methods, paradigm, and reasoning.
  • Depending on whether you seek a nomothetic or idiographic causal explanation, you are likely to employ specific research design components.
Glossary

  • Causality - the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief
  • Control variables - potential “third variables” whose effects are controlled for mathematically in the data analysis process to highlight the association between the independent and dependent variable
  • Covariation - the degree to which two variables vary together
  • Dependent variable - a variable that depends on changes in the independent variable
  • Generalize - to make claims about a larger population based on an examination of a smaller sample
  • Hypothesis - a statement describing a researcher’s expectation regarding what she anticipates finding
  • Idiographic research - attempts to explain or describe your phenomenon exhaustively, based on the subjective understandings of your participants
  • Independent variable - causes a change in the dependent variable
  • Nomothetic research - provides a more general, sweeping explanation that is universally true for all people
  • Plausibility - in order to make the claim that one event, behavior, or belief causes another, the claim has to make sense
  • Spurious relationship - an association between two variables that appears to be causal but can in fact be explained by some third variable
  • Statistical significance - the confidence researchers have in a mathematical association
  • Temporality - whatever cause you identify must happen before the effect
  • Theory building - the creation of new theories based on inductive reasoning
  • Theory testing - when a hypothesis is created from existing theory and tested mathematically

Image attributions

Mikado by 3dman_eu CC-0

Weather TV Forecast by mohamed_hassan CC-0

Figures 4.2 and 4.3 were copied from Blackstone, A. (2012) Principles of sociological inquiry: Qualitative and quantitative methods. Saylor Foundation. Retrieved from: https://saylordotorg.github.io/text_principles-of-sociological-inquiry-qualitative-and-quantitative-methods/ Shared under CC-BY-NC-SA 3.0 License

Beatrice Birra Storytelling at African Art Museum by Anthony Cross public domain

Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Causal research: definition, examples and how to use it.

Causal research enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. Discover how this research can reduce employee turnover and increase customer success for your business.

What is causal research?

Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables.

It’s often used by companies to determine the impact of changes in products, features, or service processes on critical company metrics. Some examples:

  • How does rebranding of a product influence intent to purchase?
  • How would expansion to a new market segment affect projected sales?
  • What would be the impact of a price increase or decrease on customer loyalty?

To maintain the accuracy of causal research, ‘confounding variables’ or influences — e.g. those that could distort the results — are controlled. This is done either by keeping them constant in the creation of data, or by using statistical methods. These variables are identified before the start of the research experiment.
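
As one hedged sketch of the “keeping them constant” idea, random assignment spreads confounding variables roughly evenly across groups; the participant IDs and group names below are invented for illustration.

    # Illustrative only: random assignment balances confounders across groups on average.
    import random

    participants = [f"participant_{i}" for i in range(1, 21)]  # hypothetical IDs
    random.seed(42)  # fixed seed so the example is reproducible
    random.shuffle(participants)

    midpoint = len(participants) // 2
    test_group = participants[:midpoint]     # exposed to the change being studied
    control_group = participants[midpoint:]  # sees the existing version

    print("test group:", test_group)
    print("control group:", control_group)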

As well as the above, research teams will outline several other variables and principles in causal research:

  • Independent variables

The variables that may cause direct changes in another variable. For example, in a study of the effect of truancy on a student’s grade point average, the independent variable is class attendance (or truancy), and the grade point average is the dependent variable.

  • Control variables

These are the components that remain unchanged during the experiment so researchers can better understand what conditions create a cause-and-effect relationship.  

  • Causation

This describes the cause-and-effect relationship. When researchers find causation (or the cause), they’ve conducted all the processes necessary to prove it exists.

  • Correlation

Any relationship between two variables in the experiment. It’s important to note that correlation doesn’t automatically mean causation. Researchers will typically establish correlation before proving cause-and-effect.

  • Experimental design

Researchers use experimental design to define the parameters of the experiment — e.g. categorizing participants into different groups.

  • Dependent variables

These are measurable variables that may change or are influenced by the independent variable. For example, in an experiment about whether or not terrain influences running speed, your dependent variable is the running speed (terrain is the independent variable).
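
To keep the variable roles straight, here is a minimal sketch labeling the running-speed example; the terrain types, measurement units, and control variables listed are assumptions for illustration.

    # Illustrative only: labeling variable roles for the running-speed example.
    experiment_variables = {
        "independent": "terrain type (track, grass, sand)",       # what is varied
        "dependent": "running speed (seconds per 100 m)",         # what is measured
        "controls": ["distance run", "footwear", "time of day"],  # held constant
    }

    for role, value in experiment_variables.items():
        print(f"{role}: {value}")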

Why is causal research useful?

It’s useful because it enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. This allows businesses to create plans that benefit the company. It’s also a great research method because researchers can immediately see how variables affect each other and under what circumstances.

Also, once the first experiment has been completed, researchers can use the learnings from the analysis to repeat the experiment or apply the findings to other scenarios. Because of this, it’s widely used to help understand the impact of changes in internal or commercial strategy to the business bottom line.

Some examples include:

  • Understanding how overall training levels are improved by introducing new courses
  • Examining which variations in wording make potential customers more interested in buying a product
  • Testing a market’s response to a brand-new line of products and/or services

So, how does causal research compare and differ from other research types?

Well, there are a few research types that are used to find answers to some of the examples above:

1. Exploratory research

As its name suggests, exploratory research involves assessing a situation (or situations) where the problem isn’t clear. Through this approach, researchers can test different avenues and ideas to establish facts and gain a better understanding.

Researchers can also use it to first navigate a topic and identify which variables are important. Because no area is off-limits, the research is flexible and adapts to the investigations as it progresses.

Finally, this approach is unstructured and often involves gathering qualitative data, giving the researcher freedom to progress the research according to their thoughts and assessment. However, this may make results susceptible to researcher bias and may limit the extent to which a topic is explored.

2. Descriptive research

Descriptive research is all about describing the characteristics of the population, phenomenon or scenario studied. It focuses more on the “what” of the research subject than the “why”.

For example, a clothing brand wants to understand the fashion purchasing trends amongst buyers in California — so they conduct a demographic survey of the region, gather population data and then run descriptive research. The study will help them to uncover purchasing patterns amongst fashion buyers in California, but not necessarily why those patterns exist.

As the research happens in a natural setting, variables can cross-contaminate other variables, making it harder to isolate cause and effect relationships. Therefore, further research will be required if more causal information is needed.


How is causal research different from the other two methods above?

Well, causal research looks at what variables are involved in a problem and ‘why’ they act a certain way. As the experiment takes place in a controlled setting (thanks to controlled variables) it’s easier to identify cause-and-effect amongst variables.

Furthermore, researchers can carry out causal research at any stage in the process, though it’s usually carried out in the later stages once more is known about a particular topic or situation.

Finally, compared to the other two methods, causal research is more structured, and researchers can combine it with exploratory and descriptive research to assist with research goals.

Summary of three research types

[Table: summary comparison of exploratory, descriptive, and causal research]

What are the advantages of causal research?

  • Improve experiences

By understanding which variables have positive impacts on target variables (like sales revenue or customer loyalty), businesses can improve their processes, return on investment, and the experiences they offer customers and employees.

  • Help companies improve internally

By conducting causal research, management can make informed decisions about improving their employee experience and internal operations. For example, understanding which variables led to an increase in staff turnover.

  • Repeat experiments to enhance reliability and accuracy of results

When variables are identified, researchers can replicate cause-and-effect with ease, providing them with reliable data and results to draw insights from.

  • Test out new theories or ideas

If causal research is able to pinpoint the exact outcome of mixing together different variables, research teams can test out ideas in the same way to create viable proofs of concept.

  • Fix issues quickly

Once an undesirable effect’s cause is identified, researchers and management can take action to reduce the impact of it or remove it entirely, resulting in better outcomes.

What are the disadvantages of causal research?

  • Provides information to competitors

If you plan to publish your research, it provides information about your plans to your competitors. For example, they might use your research outcomes to identify what you are up to and enter the market before you.

  • Difficult to administer

Causal research is often difficult to administer because it’s rarely possible to control for every extraneous variable.

  • Time and money constraints

Budgetary and time constraints can make this type of research expensive to conduct and repeat. Also, if an initial attempt doesn’t reveal a cause-and-effect relationship, the investment is wasted and could dampen the appetite for future repeat experiments.

  • Requires additional research to ensure validity

You can’t rely on the outcomes of causal research alone, as they can be inaccurate. It’s best to conduct other types of research alongside it to confirm its output.

  • Trouble establishing cause and effect

Researchers might identify that two variables are connected, but struggle to determine which is the cause and which variable is the effect.

  • Risk of contamination

There’s always the risk that people outside your market or area of study could affect the results of your research. For example, if you’re conducting a retail store study, shoppers outside your ‘test parameters’ might shop at your store and skew the results.

How can you use causal research effectively?

To better highlight how you can use causal research across functions or markets, here are a few examples:

Market and advertising research

A company might want to know if their new advertising campaign or marketing campaign is having a positive impact. So, their research team can carry out a causal research project to see which variables cause a positive or negative effect on the campaign.

For example, a cold-weather apparel company in a winter ski-resort town may see an increase in sales after a campaign targeted at skiers. To see if one caused the other, the research team could set up a duplicate experiment to see whether the same campaign would generate sales from non-skiers. If the results are lower or different, then it’s likely that the original campaign had a direct effect on skiers, encouraging them to purchase products.

Improving customer experiences and loyalty levels

Customers enjoy shopping with brands that align with their own values, and they’re more likely to buy and present the brand positively to other potential shoppers as a result. So, it’s in your best interest to deliver great experiences and retain your customers.

For example, the Harvard Business Review found that increasing customer retention rates by 5% increased profits by 25% to 95%. But let’s say you want to increase your own retention rate: how can you identify which variables contribute to it? Using causal research, you can test hypotheses about which processes, strategies or changes influence customer retention. For example, is it the streamlined checkout? What about the personalized product suggestions? Or maybe it was a new solution that solved their problem? Causal research will help you find out.


Improving problematic employee turnover rates

If your company has a high attrition rate, causal research can help you narrow down the variables or reasons which have the greatest impact on people leaving. This allows you to prioritize your efforts on tackling the issues in the right order, for the best positive outcomes.

For example, through causal research, you might find that employee dissatisfaction due to a lack of communication and transparency from upper management leads to poor morale, which in turn influences employee retention.

To rectify the problem, you could implement a routine feedback loop or session that enables your people to talk to your company’s C-level executives so that they feel heard and understood.

How to conduct causal research

The first steps to getting started are:

1. Define the purpose of your research

What questions do you have? What do you expect to come out of your research? Think about which variables you need to test out the theory.

2. Pick a random sample if participants are needed

Using a technology solution to support your sampling, like a database, can help you define who you want your target audience to be, and how random or representative they should be.

3. Set up the controlled experiment

Once you’ve defined which variables you’d like to measure to see if they interact, think about how best to set up the experiment. This could be in-person or in-house via interviews, or it could be done remotely using online surveys.

4. Carry out the experiment

Make sure to keep all other variables constant, and change only the causal variable (the one that causes the effect) to gather the correct data. Depending on your method, you could be collecting qualitative or quantitative data, so make sure you record your findings for each regularly.

5. Analyze your findings

Either manually or using technology, analyze your data to see if any trends, patterns, or correlations emerge. By looking at the data, you’ll be able to see what changes you might need to make next time, or whether there are questions that require further research (a minimal analysis sketch follows these steps).

6. Verify your findings

Your first attempt gives you the baseline figures to compare the new results to. You can then run another experiment to verify your findings.

7. Do follow-up or supplemental research

You can supplement your original findings by carrying out research that goes deeper into causes or explores the topic in more detail. One of the best ways to do this is to use a survey. See ‘Use surveys to help your experiment’.
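
To make the experiment and analysis steps concrete, here is a minimal sketch in Python of how data from a controlled experiment like the one described above might be compared. The group labels, sample figures, and the choice of a two-sample t-test are illustrative assumptions rather than part of any specific study.

```python
# A minimal sketch: comparing results from a group exposed to the campaign
# (treatment) against a group that was not (control). Figures are invented.
from scipy import stats

treatment = [212, 198, 240, 225, 260, 231, 205, 248]  # e.g., weekly sales per store
control = [187, 195, 176, 201, 190, 183, 199, 178]

# Welch's two-sample t-test: does changing the causal variable (campaign
# exposure) produce a measurable difference in the outcome?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is unlikely to be due to chance alone.")
else:
    print("No reliable difference detected; consider follow-up research.")
```

A significant result here supports, but does not by itself prove, a causal link; the verification and follow-up steps above still apply.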

Identifying causal relationships between variables

To verify whether a causal relationship exists, you have to satisfy the following criteria (a small numeric sketch follows this list):

  • Nonspurious association

A clear correlation exists between the cause and the effect. In other words, no ‘third’ variable that relates to both the cause and the effect should exist.

  • Temporal sequence

The cause occurs before the effect. For example, increased ad spend on product marketing would contribute to higher product sales.

  • Concomitant variation

The variation between the two variables is systematic. For example, if a company doesn’t change its IT policies or technology stack, then any change in employee productivity was not caused by IT policies or technology.
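
As the small numeric sketch promised above, the snippet below checks two of these criteria on made-up monthly figures: whether ad spend and sales co-vary (concomitant variation) and whether spend leads sales in time (temporal sequence). The numbers, and the idea of using a one-month lag, are assumptions for illustration; a real study would still need to rule out third variables.

```python
# Illustrative check of concomitant variation and temporal sequence:
# does ad spend co-vary with sales, and does spend lead sales by a month?
# All figures are invented for the sketch.
import numpy as np

ad_spend = np.array([10, 12, 9, 15, 18, 14, 20, 22, 19, 25, 27, 24])        # monthly spend, $k
sales = np.array([80, 85, 82, 95, 110, 100, 120, 130, 118, 140, 150, 138])  # monthly units sold

same_month = np.corrcoef(ad_spend, sales)[0, 1]
lagged = np.corrcoef(ad_spend[:-1], sales[1:])[0, 1]  # spend in month t vs sales in month t+1

print(f"Same-month correlation:        {same_month:.2f}")
print(f"Spend-leads-sales correlation: {lagged:.2f}")
# A strong lagged correlation is consistent with (but does not prove) the
# cause occurring before, and varying systematically with, the effect.
```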

How do surveys help your causal research experiments?

There are some surveys that are perfect for assisting researchers with understanding cause and effect. These include:

  • Employee Satisfaction Survey – An introductory employee satisfaction survey that provides you with an overview of your current employee experience.
  • Manager Feedback Survey – An introductory manager feedback survey geared toward improving your skills as a leader with valuable feedback from your team.
  • Net Promoter Score (NPS) Survey – Measure customer loyalty and understand how your customers feel about your product or service using one of the world’s best-recognized metrics.
  • Employee Engagement Survey – An entry-level employee engagement survey that provides you with an overview of your current employee experience.
  • Customer Satisfaction Survey – Evaluate how satisfied your customers are with your company, including the products and services you provide and how they are treated when they buy from you.
  • Employee Exit Interview Survey – Understand why your employees are leaving and how they’ll speak about your company once they’re gone.
  • Product Research Survey – Evaluate your consumers’ reaction to a new product or product feature across every stage of the product development journey.
  • Brand Awareness Survey – Track the level of brand awareness in your target market, including current and potential future customers.
  • Online Purchase Feedback Survey – Find out how well your online shopping experience performs against customer needs and expectations.

That covers the fundamentals of causal research and should give you a foundation for ongoing studies to assess opportunities, problems, and risks across your market, product, customer, and employee segments.

If you want to transform your research, empower your teams and get insights on tap to get ahead of the competition, maybe it’s time to leverage Qualtrics CoreXM.

Qualtrics CoreXM provides a single platform for data collection and analysis across every part of your business — from customer feedback to product concept testing. What’s more, you can integrate it with your existing tools and services thanks to a flexible API.

Qualtrics CoreXM offers you as much or as little power and complexity as you need, so whether you’re running simple surveys or more advanced forms of research, it can deliver every time.

What is causal research design?

Causal research design examines cause-and-effect relationships between variables. Examining these relationships gives researchers valuable insights into the mechanisms that drive the phenomena they are investigating.

Organizations primarily use causal research design to identify, determine, and explore the impact of changes within an organization and the market. You can use a causal research design to evaluate the effects of certain changes on existing procedures, norms, and more.

This article explores causal research design, including its elements, advantages, and disadvantages.

Components of causal research

You can demonstrate the existence of cause-and-effect relationships between two factors or variables using specific causal information, allowing you to produce more meaningful results and research implications.

These are the key inputs for causal research:

The timeline of events

Ideally, the cause must occur before the effect. You should review the timeline of two or more separate events to distinguish the independent variables (cause) from the dependent variables (effect) before developing a hypothesis.

If the cause occurs before the effect, you can link cause and effect and develop a hypothesis .

For instance, an organization may notice a sales increase. Determining the cause would help them reproduce these results. 

Upon review, the business realizes that the sales boost occurred right after an advertising campaign. The business can leverage this time-based data to determine whether the advertising campaign is the independent variable that caused a change in sales. 
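
As a minimal illustration of working with the timeline of events, the sketch below compares average daily sales before and after a hypothetical campaign launch date. The dates and sales figures are invented purely for the example.

```python
# Sketch: comparing sales before and after an advertising campaign launch.
# Dates and sales figures are invented for illustration.
import pandas as pd

sales = pd.Series(
    [100, 98, 103, 101, 99, 130, 128, 135, 132, 140],
    index=pd.date_range("2023-03-01", periods=10, freq="D"),
)
launch = pd.Timestamp("2023-03-06")  # hypothetical campaign launch date

before = sales[sales.index < launch].mean()
after = sales[sales.index >= launch].mean()
print(f"Average daily sales before launch: {before:.1f}")
print(f"Average daily sales after launch:  {after:.1f}")
# A jump after the launch is consistent with the campaign being the cause,
# but the timeline alone cannot rule out other explanations.
```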

Evaluation of confounding variables

In most cases, you need to pinpoint all the variables involved in a cause-and-effect relationship when using a causal research design. This leads to a more accurate conclusion.

The covariation between cause and effect must be genuine, and no third factor should relate to both the cause and the effect.

Observing changes

The link between variations in the two variables must be clear. A quantitative change in the effect must happen solely due to a quantitative change in the cause.

You can test whether the independent variable changes the dependent variable to evaluate the validity of a cause-and-effect relationship. A steady change between the two variables must occur to back up your hypothesis of a genuine causal effect. 
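
To show how these two inputs come together in practice, here is a hedged sketch using ordinary least squares: the outcome is regressed on the suspected cause while a possible confounder is held constant. The variable names, coefficients, and simulated data are assumptions made only for illustration.

```python
# Sketch: evaluating a suspected confounder (seasonal demand) alongside the
# independent variable (campaign spend). All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
season = rng.uniform(0, 1, 60)                    # confounder: seasonal demand index
spend = 5 + 10 * season + rng.normal(0, 1, 60)    # spend happens to track the season
sales = 50 + 30 * season + rng.normal(0, 5, 60)   # sales are driven by season, not spend

X = sm.add_constant(np.column_stack([spend, season]))
model = sm.OLS(sales, X).fit()
print(model.params)  # intercept, spend coefficient, season coefficient
# If the spend coefficient shrinks toward zero once season is included, the
# apparent spend -> sales relationship was at least partly confounded.
```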

Why is causal research useful?

Causal research allows market researchers to predict hypothetical occurrences and outcomes while enhancing existing strategies. Organizations can use this concept to develop beneficial plans. 

Causal research is also useful as market researchers can immediately deduce the effect of the variables on each other under real-world conditions. 

Once researchers complete their first experiment, they can use their findings. Applying them to alternative scenarios or repeating the experiment to confirm its validity can produce further insights. 

Businesses widely use causal research to identify and comprehend the effect of strategic changes on their profits. 

How does causal research compare and differ from other research types?

Other research types that identify relationships between variables include exploratory and descriptive research . 

Here’s how they compare and differ from causal research designs:

Exploratory research

An exploratory research design evaluates situations where a problem or opportunity's boundaries are unclear. You can use this research type to test various hypotheses and assumptions to establish facts and understand a situation more clearly.

You can also use exploratory research design to navigate a topic and discover the relevant variables. This research type allows flexibility and adaptability as the experiment progresses, particularly since no area is off-limits.

It’s worth noting that exploratory research is unstructured and typically involves collecting qualitative data . This provides the freedom to tweak and amend the research approach according to your ongoing thoughts and assessments. 

Unfortunately, this exposes the findings to the risk of bias and may limit the extent to which a researcher can explore a topic. 

This table compares the key characteristics of causal and exploratory research:

Descriptive research

This research design involves capturing and describing the traits of a population, situation, or phenomenon. Descriptive research focuses more on the "what" of the research subject and less on the "why."

Since descriptive research typically happens in a real-world setting, variables can cross-contaminate others. This increases the challenge of isolating cause-and-effect relationships. 

You may require further research if you need to establish causal links.

This table compares the key characteristics of causal and descriptive research.  

Causal research examines a research question’s variables and how they interact. It’s easier to pinpoint cause and effect since the experiment often happens in a controlled setting. 

Researchers can conduct causal research at any stage, but they typically use it once they know more about the topic.

Compared with exploratory and descriptive research, causal research tends to be more structured, and it can be combined with both to help you attain your research goals.

How can you use causal research effectively?

Here are common ways that market researchers leverage causal research effectively:

Market and advertising research

Do you want to know if your new marketing campaign is affecting your organization positively? You can use causal research to determine the variables causing negative or positive impacts on your campaign. 

Improving customer experiences and loyalty levels

Consumers generally enjoy purchasing from brands aligned with their values. They’re more likely to purchase from such brands and positively represent them to others. 

You can use causal research to identify the variables contributing to increased or reduced customer acquisition and retention rates. 

Could the cause of increased customer retention rates be streamlined checkout? 

Perhaps you introduced a new solution geared towards directly solving their immediate problem. 

Whatever the reason, causal research can help you identify the cause-and-effect relationship. You can use this to enhance your customer experiences and loyalty levels.

Improving problematic employee turnover rates

Is your organization experiencing skyrocketing attrition rates? 

You can leverage the features and benefits of causal research to narrow down the possible explanations or variables with significant effects on employees quitting. 

This way, you can prioritize interventions, focusing on the highest priority causal influences, and begin to tackle high employee turnover rates. 

Advantages of causal research

The main benefits of causal research include the following:

Effectively test new ideas

If causal research can pinpoint the precise outcome through combinations of different variables, researchers can test ideas in the same manner to form viable proof of concepts.

Achieve more objective results

Market researchers typically use random sampling techniques to choose experiment participants or subjects in causal research. This reduces the possibility of external, sample-based, or demographic influences, generating more objective results.
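
As a small illustration of the random sampling mentioned above, the snippet below draws a simple random sample of participants from a larger pool. The pool, sample size, and seed are arbitrary assumptions.

```python
# Minimal sketch of simple random sampling for an experiment.
# The pool, sample size, and seed are arbitrary choices for illustration.
import random

population = [f"customer_{i}" for i in range(1, 1001)]  # hypothetical participant pool
random.seed(42)                       # fixed seed so the draw is reproducible
participants = random.sample(population, k=50)

print(participants[:5])
```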

Improved business processes

Causal research helps businesses understand which variables positively impact target variables, such as customer loyalty or sales revenues. This helps them improve their processes, ROI, and customer and employee experiences.

Guarantee reliable and accurate results

Upon identifying the correct variables, researchers can replicate cause and effect effortlessly. This creates reliable data and results to draw insights from. 

Internal organization improvements

Businesses that conduct causal research can make informed decisions about improving their internal operations and enhancing employee experiences. 

Disadvantages of causal research

Like any other research method, causal research has its own set of drawbacks, including:

Extra research to ensure validity

Researchers can't simply rely on the outcomes of causal research since it isn't always accurate. There may be a need to conduct other research types alongside it to ensure accurate output.

Coincidence

Coincidence tends to be the most significant error in causal research. Researchers often misinterpret a coincidental link between a cause and effect as a direct causal link. 

Administration challenges

Causal research can be challenging to administer since it's impossible to control the impact of extraneous variables . 

Giving away your competitive advantage

If you intend to publish your research, it exposes your information to the competition. 

Competitors may use your research outcomes to identify your plans and strategies to enter the market before you. 

Causal research examples

Multiple fields can use causal research, and it serves different purposes. Here are some examples:

Customer loyalty research

Organizations and employees can use causal research to determine the best customer attraction and retention approaches. 

They monitor interactions between customers and employees to identify cause-and-effect patterns. That could be a product demonstration technique resulting in higher or lower sales from the same customers. 

Example: Business X introduces a new individual marketing strategy for a small customer group and notices a measurable increase in monthly subscriptions. 

Upon getting identical results from different groups, the business concludes that the individual marketing strategy resulted in the intended causal relationship.

Advertising research

Businesses can also use causal research to implement and assess advertising campaigns. 

Example: Business X notices a 7% increase in sales revenue a few months after introducing a new advertisement in a certain region. The business can run the same ad in randomly selected regions to compare sales data over the same period.

This will help the company determine whether the ad caused the sales increase. If sales increase in these randomly selected regions, the business could conclude that advertising campaigns and sales share a cause-and-effect relationship. 

Educational research

Academics, teachers, and learners can use causal research to explore the impact of politics on learners and pinpoint learner behavior trends. 

Example: College X notices that the dropout rate among second-year IT students is 8% higher than in any other year.

The college administration can interview a random group of IT students to identify factors leading to this situation, including personal factors and influences. 

With the help of in-depth statistical analysis, the institution's researchers can uncover the main factors causing dropout. They can create immediate solutions to address the problem.

Is a causal variable dependent or independent?

When two variables have a cause-and-effect relationship, the cause is often called the independent variable. As such, the effect variable is dependent, i.e., it depends on the independent causal variable. An independent variable is only causal under experimental conditions. 

What are the three criteria for causality?

The three conditions for causality are:

Temporality/temporal precedence: The cause must precede the effect.

Rationality: One event predicts the other with an explanation, and the effect must vary in proportion to changes in the cause.

Control for extraneous variables: the covariation must not be the result of other variables.

Is causal research experimental?

Causal research is mostly explanatory. Causal studies focus on analyzing a situation to explore and explain the patterns of relationships between variables. 

Further, experiments are the primary data collection methods in studies with causal research design. However, as a research design, causal research isn't entirely experimental.

What is the difference between experimental and causal research design?

One of the main differences between causal and experimental research is that in causal research, the research subjects are already in groups since the event has already happened. 

On the other hand, researchers randomly choose subjects in experimental research before manipulating the variables.

A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences

3 Causes-of-Effects versus Effects-of-Causes

Published: September 2012

This chapter examines two approaches used in social science research: the “causes-of-effects” approach and the “effects-of-causes” approach. The quantitative and qualitative cultures differ in the extent to which and the ways in which they address causes-of-effects and effects-of-causes questions. Quantitative scholars, who favor the effects-of-causes approach, focus on estimating the average effects of particular variables within populations or samples. By contrast, qualitative scholars employ individual case analysis to explain outcomes as well as the effects of particular causal factors. The chapter first considers the type of research question addressed by both quantitative and qualitative researchers before discussing the use of within-case analysis by the latter to investigate individual cases versus cross-case analysis by the former to elucidate central tendencies in populations. It also describes the complementarities between qualitative and quantitative research that make mixed-method research possible.

7.2 Causal relationships

Learning objectives

  • Define and provide an example of idiographic and nomothetic causal relationships
  • Describe the role of causality in quantitative research as compared to qualitative research
  • Identify, define, and describe each of the main criteria for nomothetic causal relationships
  • Describe the difference between and provide examples of independent, dependent, and control variables
  • Define hypothesis, be able to state a clear hypothesis, and discuss the respective roles of quantitative and qualitative research when it comes to hypotheses

Most social scientific studies attempt to provide some kind of causal explanation. A study on an intervention to prevent child abuse is trying to draw a connection between the intervention and changes in child abuse. Causality refers to the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief. In other words, it is about cause and effect. It seems simple, but you may be surprised to learn there is more than one way to explain how one thing causes another. How can that be? How could there be many ways to understand causality?

Think back to our chapter on paradigms, which are analytic lenses composed of assumptions about the world. You’ll remember the positivist paradigm as the one that believes in objectivity and the social constructionist paradigm as the one that believes in subjectivity. Both paradigms are correct, though incomplete, viewpoints on the social world and social science.

A researcher operating in the social constructionist paradigm would view truth as subjective. In causality, that means that in order to try to understand what caused what, we would need to report what people tell us. Well, that seems pretty straightforward, right? Well, what if two different people saw the same event from the exact same viewpoint and came up with two totally different explanations about what caused what? A social constructionist would say that both people are correct. There is not one singular truth that is true for everyone, but many truths created and shared by people.

When social constructionists engage in science, they are trying to establish one type of causality—idiographic causality. An idiographic causal explanation means that you will attempt to explain or describe your phenomenon exhaustively, based on the subjective understandings of your participants. These explanations are bound with the narratives people create about their lives and experience, and are embedded in a cultural, historical, and environmental context. Idiographic causal explanations are so powerful because they convey a deep understanding of a phenomenon and its context. From a social constructionist perspective, the truth is messy. Idiographic research involves finding patterns and themes in the causal relationships established by your research participants.

If that doesn’t sound like what you normally think of as “science,” you’re not alone. Although the ideas behind idiographic research are quite old in philosophy, they were only applied to the sciences at the start of the last century. If we think of famous scientists like Newton or Darwin, they never saw truth as subjective. There were objectively true laws of science that were applicable in all situations. Another paradigm was dominant and continues its dominance today, the positivist paradigm. When positivists try to establish causality, they are like Newton and Darwin, trying to come up with a broad, sweeping explanation that is universally true for all people. This is the hallmark of a nomothetic causal explanation .

Nomothetic causal explanations are also incredibly powerful. They allow scientists to make predictions about what will happen in the future, with a certain margin of error. Moreover, they allow scientists to generalize —that is, make claims about a large population based on a smaller sample of people or items. Generalizing is important. We clearly do not have time to ask everyone their opinion on a topic, nor do we have the ability to look at every interaction in the social world. We need a type of causal explanation that helps us predict and estimate truth in all situations.

If these still seem like obscure philosophy terms, let’s consider an example. Imagine you are working for a community-based non-profit agency serving people with disabilities. You are putting together a report to help lobby the state government for additional funding for community support programs, and you need to support your argument for additional funding at your agency. If you looked at nomothetic research, you might learn how previous studies have shown that, in general, community-based programs like yours are linked with better health and employment outcomes for people with disabilities. Nomothetic research seeks to explain that community-based programs are better for everyone with disabilities. If you looked at idiographic research, you would get stories and experiences of people in community-based programs. These individual stories are full of detail about the lived experience of being in a community-based program. Using idiographic research, you can understand what it’s like to be a person with a disability and then communicate that to the state government. For example, a person might say “I feel at home when I’m at this agency because they treat me like a family member” or “this is the agency that helped me get my first paycheck.”

Neither kind of causal explanation is better than the other. A decision to conduct idiographic research means that you will attempt to explain or describe your phenomenon exhaustively, attending to cultural context and subjective interpretations. A decision to conduct nomothetic research, on the other hand, means that you will try to explain what is true for everyone and predict what will be true in the future. In short, idiographic explanations have greater depth, and nomothetic explanations have greater breadth. More importantly, social workers understand the value of both approaches to understanding the social world. A social worker helping a client with substance abuse issues seeks idiographic knowledge when they ask about that client’s life story, investigate their unique physical environment, or probe how they understand their addiction. At the same time, a social worker also uses nomothetic knowledge to guide their interventions. Nomothetic research may help guide them to minimize risk factors and maximize protective factors or use an evidence-based therapy, relying on knowledge about what in general helps people with substance abuse issues.

Nomothetic causal relationships

One of my favorite classroom moments occurred early in my teaching career. Students were providing peer feedback on research questions. I overheard one group helping someone rephrase their research question. A student asked, “Are you trying to generalize or nah?” Teaching is full of fun moments like that one.

Answering that one question can help you understand how to conceptualize and design your research project. If you are trying to generalize, or create a nomothetic causal relationship, then the rest of these statements are likely to be true: you will use quantitative methods, reason deductively, and engage in explanatory research. How can I know all of that? Let’s take it part by part.

Because nomothetic causal relationships try to generalize, they must be able to reduce phenomena to a universal language, mathematics. Mathematics allows us to precisely measure, in universal terms, phenomena in the social world. Not all quantitative studies are explanatory. For example, a descriptive study could reveal the number of people without homes in your county, though it won’t tell you why they are homeless. But nearly all explanatory studies are quantitative. Because explanatory researchers want a clean “x causes y” explanation, they need to use the universal language of mathematics to achieve their goal. That’s why nomothetic causal relationships use quantitative methods.

What we’ve been talking about here is relationships between variables. When one variable causes another, we have what researchers call independent and dependent variables. For our example on spanking and aggressive behavior, spanking would be the independent variable and aggressive behavior would be the dependent variable. An independent variable is the cause, and a dependent variable is the effect. Why are they called that? Dependent variables depend on independent variables. If all of that gets confusing, just remember this graphical relationship:

[Figure: the letters IV on the left with an arrow pointing toward DV]

Relationship strength is another important factor to take into consideration when attempting to make causal claims with a nomothetic research approach. I’m not talking about the strength of your friendships or marriage. In this context, relationship strength refers to statistical significance. The more statistically significant a relationship between two variables is shown to be, the greater confidence we can have in the strength of that relationship. You’ll remember from our discussion of statistical significance in Chapter 3 that it is usually represented in statistics as the p value.
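
As a quick, hypothetical illustration of relationship strength and the p value, the sketch below correlates a made-up independent variable (spanking frequency) with a made-up dependent variable (an aggressive-behavior score). The data are fabricated solely to show the mechanics, not to report a real finding.

```python
# Sketch: relationship strength between an IV (spanking frequency) and a DV
# (aggressive-behavior score), with the p value as the significance measure.
# The data are fabricated purely for illustration.
from scipy import stats

spanking_freq = [0, 0, 1, 1, 2, 2, 3, 4, 4, 5]       # times per month
aggression_score = [2, 3, 3, 4, 4, 5, 6, 6, 7, 8]    # arbitrary 0-10 scale

r, p = stats.pearsonr(spanking_freq, aggression_score)
print(f"r = {r:.2f}, p = {p:.4f}")
# Smaller p values give more confidence that the observed relationship is not
# due to chance; they do not by themselves establish causation.
```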

A hypothesis is a statement describing a researcher’s expectation regarding what she anticipates finding. Hypotheses in quantitative research are a nomothetic causal relationship that the researcher expects to demonstrate. It is written to describe the expected relationship between the independent and dependent variables. Your prediction should be taken from a theory or model of the social world. For example, you may hypothesize that treating clinical clients with warmth and positive regard is likely to help them achieve their therapeutic goals. That hypothesis would be using the humanistic theories of Carl Rogers. Using previous theories to generate hypotheses is an example of deductive research. If Rogers’ theory of unconditional positive regard is accurate, your hypothesis should be true. This is how we know that all nomothetic causal relationships must use deductive reasoning.

Let’s consider a couple of examples. In research on sexual harassment (Uggen & Blackstone, 2004),  [1] one might hypothesize, based on feminist theories of sexual harassment, that more females than males will experience specific sexually harassing behaviors. What is the causal relationship being predicted here? Which is the independent and which is the dependent variable? In this case, we hypothesized that a person’s gender (independent variable) would predict their likelihood to experience sexual harassment (dependent variable).

Sometimes researchers will hypothesize that a relationship will take a specific direction. As a result, an increase or decrease in one area might be said to cause an increase or decrease in another. For example, you might choose to study the relationship between age and support for legalization of marijuana. Perhaps you’ve taken a sociology class and, based on the theories you’ve read, you hypothesize that age is negatively related to support for marijuana legalization.  [2] What have you just hypothesized? You have hypothesized that as people get older, the likelihood of their supporting marijuana legalization decreases. Thus, as age (your independent variable) moves in one direction (up), support for marijuana legalization (your dependent variable) moves in another direction (down). So, positive relationships involve two variables going in the same direction and negative relationships involve two variables going in opposite directions. If writing hypotheses feels tricky, it is sometimes helpful to draw them out and depict each of the two hypotheses we have just discussed.

[Figure: sex (IV) on the left with an arrow pointing toward sexual harassment (DV)]

It’s important to note that once a study starts, it is unethical to change your hypothesis to match the data that you found. For example, what happens if you conduct a study to test the hypothesis from Figure 7.3 on support for marijuana legalization, but you find no relationship between age and support for legalization? It means that your hypothesis was wrong, but that’s still valuable information. It would challenge what the existing literature says on your topic, demonstrating that more research needs to be done to figure out the factors that impact support for marijuana legalization. Don’t be embarrassed by negative results, and definitely don’t change your hypothesis to make it appear correct all along!

Let’s say you conduct your study and you find evidence that supports your hypothesis: as age increases, support for marijuana legalization decreases. Success! Causal explanation complete, right? Not quite. You’ve only established one of the criteria for causality. The main criteria for causality have to do with covariation, plausibility, temporality, and spuriousness. In our example from Figure 7.3, we have established only one criterion—covariation. When variables covary, they vary together. Both age and support for marijuana legalization vary in our study. Our sample contains people of varying ages and varying levels of support for marijuana legalization.

Just because there might be some correlation between two variables does not mean that a causal relationship between the two is really plausible. Plausibility means that in order to make the claim that one event, behavior, or belief causes another, the claim has to make sense. It makes sense that people from previous generations would have different attitudes towards marijuana than younger generations. People who grew up in the time of Reefer Madness or the hippies may hold different views than those raised in an era of legalized medicinal and recreational use of marijuana.

Once we’ve established that there is a plausible relationship between the two variables, we also need to establish whether the cause happened before the effect, the criterion of temporality . A person’s age is a quality that appears long before any opinions on drug policy, so temporally the cause comes before the effect. It wouldn’t make any sense to say that support for marijuana legalization makes a person’s age increase. Even if you could predict someone’s age based on their support for marijuana legalization, you couldn’t say someone’s age was caused by their support for legalization.

Finally, scientists must establish nonspuriousness. A spurious relationship is one in which an association between two variables appears to be causal but can in fact be explained by some third variable. For example, we could point to the fact that older cohorts are less likely to have used marijuana. Maybe it is actually use of marijuana that leads people to be more open to legalization, not their age. This is often referred to as the third variable problem, where a seemingly true causal relationship is actually caused by a third variable not in the hypothesis. In this example, the relationship between age and support for legalization could be more about having tried marijuana than the age of the person.

Quantitative researchers are sensitive to the effects of potentially spurious relationships. They are an important form of critique of scientific work. As a result, they will often measure these third variables in their study, so they can control for their effects. These are called control variables , and they refer to variables whose effects are controlled for mathematically in the data analysis process. Control variables can be a bit confusing, but think about it as an argument between you, the researcher, and a critic.

Researcher: “The older a person is, the less likely they are to support marijuana legalization.” Critic: “Actually, it’s more about whether a person has used marijuana before. That is what truly determines whether someone supports marijuana legalization.” Researcher: “Well, I measured previous marijuana use in my study and mathematically controlled for its effects in my analysis. The relationship between age and support for marijuana legalization is still statistically significant and is the most important relationship here.”

Let’s consider a few additional, real-world examples of spuriousness. Did you know, for example, that high rates of ice cream sales have been shown to cause drowning? Of course, that’s not really true, but there is a positive relationship between the two. In this case, the third variable that causes both high ice cream sales and increased deaths by drowning is time of year, as the summer season sees increases in both (Babbie, 2010).  [4] Here’s another good one: it is true that as the salaries of Presbyterian ministers in Massachusetts rise, so too does the price of rum in Havana, Cuba. Well, duh, you might be saying to yourself. Everyone knows how much ministers in Massachusetts love their rum, right? Not so fast. Both salaries and rum prices have increased, true, but so has the price of just about everything else (Huff & Geis, 1993).  [5] Finally, research shows that the more firefighters present at a fire, the more damage is done at the scene. What this statement leaves out, of course, is that as the size of a fire increases so too does the amount of damage caused as does the number of firefighters called on to help (Frankfort-Nachmias & Leon-Guerrero, 2011).  [6] In each of these examples, it is the presence of a third variable that explains the apparent relationship between the two original variables.
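
The ice cream and drowning example can be made concrete with a short simulation, offered here as a rough sketch: a third variable (daily warmth) drives both series, producing a strong raw correlation that weakens once warmth is held roughly constant. All values are invented.

```python
# Sketch: a spurious correlation produced by a third variable (warm weather).
# Warm days raise both ice cream sales and drownings; neither causes the other.
# All values are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
warmth = rng.uniform(0, 1, 365)                            # third variable: daily warmth
ice_cream = 100 + 200 * warmth + rng.normal(0, 10, 365)    # daily ice cream sales
drownings = 1 + 4 * warmth + rng.normal(0, 0.5, 365)       # daily drownings

raw_r = np.corrcoef(ice_cream, drownings)[0, 1]

# Crudely "control" for warmth by looking only at days with similar warmth.
hot_days = warmth > 0.8
within_r = np.corrcoef(ice_cream[hot_days], drownings[hot_days])[0, 1]

print(f"Raw correlation:            {raw_r:.2f}")    # strong, but spurious
print(f"Correlation among hot days: {within_r:.2f}") # much weaker
```

Holding the third variable constant is what the control variables described above accomplish mathematically in the analysis, rather than by filtering the data.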

In sum, the following criteria must be met for a correlation to be considered causal:

  • The two variables must vary together.
  • The relationship must be plausible.
  • The cause must precede the effect in time.
  • The relationship must be nonspurious (not due to a third variable).

Once these criteria are met, a researcher can say they have achieved a nomothetic causal explanation, one that is objectively true. It’s a difficult challenge for researchers to meet. You will almost never hear researchers say that they have proven their hypotheses. A statement that bold implies that a relationship has been shown to exist with absolute certainty and that there is no chance that there are conditions under which the hypothesis would not be true. Instead, researchers tend to say that their hypotheses have been supported (or not). This more cautious way of discussing findings allows for the possibility that new evidence or new ways of examining a relationship will be discovered. Researchers may also discuss a null hypothesis. We covered in Chapter 3 that the null hypothesis is one that predicts no relationship between the variables being studied. If a researcher rejects the null hypothesis, she is saying that the variables in question are somehow related to one another.

Idiographic causal relationships

Remember our question, “Are you trying to generalize or nah?” If you answered no, you are trying to establish an idiographic causal relationship. I can guess that if you are trying to establish an idiographic causal relationship, you are likely going to use qualitative methods, reason inductively, and engage in exploratory or descriptive research. We can understand these assumptions by walking through them, one by one.

Researchers seeking idiographic causal relationships are not trying to generalize, so they have no need to reduce phenomena to mathematics. In fact, using the language of mathematics to reduce the social world down is a bad thing, as it robs the causal relationship of its meaning and context. Idiographic causal relationships are bound within people’s stories and interpretations. Usually, these are expressed through words. Not all qualitative studies use word data, as some can use interpretations of visual or performance art, though the vast majority of social science studies do use word data.

But wait, I predicted that an idiographic causal relationship would use descriptive or exploratory research. How can we build causal relationships if we are just describing or exploring a topic? Wouldn’t we need to do explanatory research to build any kind of causal explanation? Explanatory research attempts to establish nomothetic causal relationships—an independent variable is demonstrated to cause changes in a dependent variable. Exploratory and descriptive qualitative research contains some causal relationships, but they are actually descriptions of the causal relationships established by the participants in your study. Instead of saying “x causes y,” your participants will describe their experiences with “x,” which they will tell you was caused by and influenced a variety of other factors, depending on time, environment, and subjective experience. As we stated before, idiographic causal explanations are messy. Your job as a social science researcher is to accurately describe the patterns in what your participants tell you.

Let’s consider an example. If I asked you why you decided to become a social worker, what might you say? For me, I would say that I wanted to be a mental health clinician since I was in high school. I was interested in how people thought. At my second internship in my undergraduate program, I got the advice to become a social worker because the license provided greater authority for insurance reimbursement and flexibility for career change. That’s not a simple explanation at all! But it does provide a description of the deeper understanding of the many factors that led me to become a social worker. If we interviewed many social workers about their decisions to become social workers, we might begin to notice patterns. We might find out that many social workers begin their careers based on a variety of factors, such as: personal experience with a disability or social injustice, positive experiences with social workers, or a desire to help others. No one factor is the “most important factor,” like with nomothetic causal relationships. Instead, a complex web of factors, contingent on context, emerge in the dataset when you interpret what people have said.

Finding patterns in data, as you’ll remember from Chapter 6, is what inductive reasoning is all about. A researcher collects data, usually word data, and notices patterns. Those patterns inform the theories we use in social work. In many ways, the idiographic causal relationships you create in qualitative research are like the social theories we reviewed in Chapter 6 (e.g., social exchange theory) and other theories you use in your practice and theory courses. Theories are explanations about how different concepts are associated with each other and how that network of relationships works in the real world. While you can think of theories like Systems Theory as Theory (with a capital “T”), inductive causal relationships are like theory with a small “t.” They may apply only to the participants, environment, and moment in time in which you gathered your data. Nevertheless, they contribute important information to the body of knowledge on the topic you studied.

Over time, as more qualitative studies are done and patterns emerge across different studies and locations, more sophisticated theories emerge that explain phenomena across multiple contexts. In this way, qualitative researchers use idiographic causal explanations for theory building or the creation of new theories based on inductive reasoning. Quantitative researchers, on the other hand, use nomothetic causal relationships for theory testing , wherein a hypothesis is created from existing theory (big T or small t) and tested mathematically (i.e., deductive reasoning).

If you plan to study domestic and sexual violence, you will likely encounter the Power and Control Wheel. [6] The wheel is a model of how power and control operate in relationships with physical violence. The wheel was developed based on qualitative focus groups conducted by sexual and domestic violence advocates in Duluth, MN. While advocates likely had some tentative hypotheses about what was important in a relationship with domestic violence, participants in these focus groups provided the information that became the Power and Control Wheel. As qualitative inquiry like this one unfolds, hypotheses get more specific and clear, as researchers learn from what their participants share.

Once a theory is developed from qualitative data, a quantitative researcher can seek to test that theory. For example, a quantitative researcher may hypothesize that men who hold traditional gender roles are more likely to engage in domestic violence. That would make sense based on the Power and Control Wheel model, as the category of “using male privilege” speaks to this relationship. In this way, qualitatively-derived theory can inspire a hypothesis for a quantitative research project.

Unlike nomothetic causal relationships, there are no formal criteria (e.g., covariation) for establishing causality in idiographic causal relationships. In fact, some criteria like temporality and nonspuriousness may be violated. For example, if an adolescent client says, “It’s hard for me to tell whether my depression began before my drinking, but both got worse when I was expelled from my first high school,” they are recognizing that oftentimes it’s not so simple that one thing causes another. Sometimes, there is a reciprocal relationship where one variable (depression) impacts another (alcohol abuse), which then feeds back into the first variable (depression) and also into other variables (school). Other criteria, such as covariation and plausibility, still make sense, as the relationships you highlight as part of your idiographic causal explanation should still be plausibly true and its elements should vary together.

Similarly, idiographic causal explanations differ in terms of hypotheses. If you recall from the last section, hypotheses in nomothetic causal explanations are testable predictions based on previous theory. In idiographic research, a researcher likely has hypotheses, but they are more tentative. Instead of predicting that “x will decrease y,” researchers will use previous literature to figure out what concepts might be important to participants and how they believe participants might respond during the study. Based on an analysis of the literature a researcher may formulate a few tentative hypotheses about what they expect to find in their qualitative study. Unlike nomothetic hypotheses, these are likely to change during the research process. As the researcher learns more from their participants, they might introduce new concepts that participants talk about. Because the participants are the experts in idiographic causal relationships, a researcher should be open to emerging topics and shift their research questions and hypotheses accordingly.

Two different baskets

Idiographic and nomothetic causal explanations form the “two baskets” of research design elements pictured in Figure 7.4 below. Later on, they will also determine the sampling approach, measures, and data analysis in your study.

[Figure 7.4: two baskets of research, one with idiographic research and another with nomothetic research and their components]

In most cases, mixing components from one basket with the other would not make sense. If you are using quantitative methods with an idiographic question, you wouldn’t get the deep understanding you need to answer an idiographic question. Knowing, for example, that someone scores 20/35 on a numerical index of depression symptoms does not tell you what depression means to that person. Similarly, qualitative methods are not often used for deductive reasoning, because qualitative methods usually seek to understand a participant’s perspective rather than test what existing theory says about a concept.

However, these are not hard-and-fast rules. There are plenty of qualitative studies that attempt to test a theory. There are fewer social constructionist studies with quantitative methods, though studies will sometimes include quantitative information about participants. Researchers in the critical paradigm can fit into either bucket, depending on their research question, as they focus on the liberation of people from oppressive internal (subjective) or external (objective) forces.

We will explore later on in this chapter how researchers can use both buckets simultaneously in mixed methods research. For now, it’s important that you understand the logic that connects the ideas in each bucket. Not only is this fundamental to how knowledge is created and tested in social work, it speaks to the very assumptions and foundations upon which all theories of the social world are built!

Key Takeaways

  • Idiographic research focuses on subjectivity, context, and meaning.
  • Nomothetic research focuses on objectivity, prediction, and generalizing.
  • In qualitative studies, the goal is generally to understand the multitude of causes that account for the specific instance the researcher is investigating.
  • In quantitative studies, the goal may be to understand the more general causes of some phenomenon rather than the idiosyncrasies of one particular instance.
  • For nomothetic causal relationships, a relationship must be plausible and nonspurious, and the cause must precede the effect in time.
  • In a nomothetic causal relationship, the independent variable causes changes in a dependent variable.
  • Hypotheses are statements, drawn from theory, which describe a researcher’s expectation about a relationship between two or more variables.
  • Qualitative research may create theories that can be tested quantitatively.
  • The choice of idiographic or nomothetic causal relationships requires a consideration of methods, paradigm, and reasoning.
  • Depending on whether you seek a nomothetic or idiographic causal explanation, you are likely to employ specific research design components.
  • Causality-the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief
  • Control variables- potential “third variables” whose effects are controlled for mathematically in the data analysis process to highlight the relationship between the independent and dependent variables
  • Covariation- the degree to which two variables vary together
  • Dependent variable- a variable that depends on changes in the independent variable
  • Generalize- to make claims about a larger population based on an examination of a smaller sample
  • Hypothesis- a statement describing a researcher’s expectation regarding what she anticipates finding
  • Idiographic research- attempts to explain or describe your phenomenon exhaustively, based on the subjective understandings of your participants
  • Independent variable- causes a change in the dependent variable
  • Nomothetic research- provides a more general, sweeping explanation that is universally true for all people
  • Plausibility- in order to make the claim that one event, behavior, or belief causes another, the claim has to make sense
  • Spurious relationship- an association between two variables appears to be causal but can in fact be explained by some third variable
  • Statistical significance- confidence researchers have in a mathematical relationship
  • Temporality- whatever cause you identify must happen before the effect
  • Theory building- the creation of new theories based on inductive reasoning
  • Theory testing- when a hypothesis is created from existing theory and tested mathematically

  • Uggen, C., & Blackstone, A. (2004). Sexual harassment as a gendered expression of power. American Sociological Review, 69 , 64–92. ↵
  • In fact, there are empirical data that support this hypothesis. Gallup has conducted research on this very question since the 1960s. For more on their findings, see Carroll, J. (2005). Who supports marijuana legalization? Retrieved from http://www.gallup.com/poll/19561/who-supports-marijuana-legalization.aspx ↵
  • Figures 7.2 and 7.3 were copied from Blackstone, A. (2012) Principles of sociological inquiry: Qualitative and quantitative methods. Saylor Foundation. Retrieved from: https://saylordotorg.github.io/text_principles-of-sociological-inquiry-qualitative-and-quantitative-methods/ Shared under CC-BY-NC-SA 3.0 License (https://creativecommons.org/licenses/by-nc-sa/3.0/) ↵
  • Babbie, E. (2010). The practice of social research (12th ed.) . Belmont, CA: Wadsworth. ↵
  • Huff, D. & Geis, I. (1993). How to lie with statistics . New York, NY: W. W. Norton & Co. ↵
  • Frankfort-Nachmias, C. & Leon-Guerrero, A. (2011). Social statistics for a diverse society . Washington, DC: Pine Forge Press. ↵

Scientific Inquiry in Social Work Copyright © 2018 by Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Open Access

Peer-reviewed

Research Article

Causal implicatures from correlational statements

  • Samuel J. Gershman
  • Tomer D. Ullman

Affiliation: Department of Psychology, Harvard University, Cambridge, MA, United States of America
  • Published: May 18, 2023
  • https://doi.org/10.1371/journal.pone.0286067


Correlation does not imply causation, but this does not necessarily stop people from drawing causal inferences from correlational statements. We show that people do in fact infer causality from statements of association, under minimal conditions. In Study 1, participants interpreted statements of the form “X is associated with Y” to imply that Y causes X. In Studies 2 and 3, participants interpreted statements of the form “X is associated with an increased risk of Y” to imply that X causes Y. Thus, even the most orthodox correlational language can give rise to causal inferences.

Citation: Gershman SJ, Ullman TD (2023) Causal implicatures from correlational statements. PLoS ONE 18(5): e0286067. https://doi.org/10.1371/journal.pone.0286067

Editor: Micah B. Goldwater, University of Sydney, AUSTRALIA

Received: July 8, 2022; Accepted: May 9, 2023; Published: May 18, 2023

Copyright: © 2023 Gershman, Ullman. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All data are available at https://osf.io/u34ex/?view_only=1ffdf6729af149d6b14f7e32692c0334 .

Funding: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF1231216. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Modern scientists are carefully trained to avoid conflating causation and correlation when describing research results. A correlation between two variables may reflect the causal effect of one variable on the other, or the causal effect of another variable on both. For this reason, observational studies report findings using seemingly non-causal stock phrases: variables are “associated” or “linked” with one another. However, it is possible that readers of these correlational statements may interpret them as causal.

Much work has demonstrated that people draw pragmatic inferences from ambiguous or incomplete linguistic utterances [ 1 – 3 ]. In particular, studies of “implicit causality” in language understanding have shown that people draw inferences about ambiguous causal roles from verbs [ 4 – 6 ]. For example, people infer that “she” refers to the daughter in the following sentence: “The mother punished her daughter because she admitted her guilt” [ 4 ]. In contrast, people infer that “she” refers to the mother in the following sentence: “The mother punished her daughter because she discovered her guilt.” The verbs “admitted” and “discovered” induce different causal role assignments. Similar results have been reported for sentence completion tasks, where the referent of the completed sentence picks out different nouns depending on the verb.

One influential view of implicit causality is that it reflects inferences about event structure [ 5 ], rather than reflecting linguistic structure (though see [ 6 ]). Supporting this view is evidence that causal role assignments are sensitive to general knowledge and social context, such as the gender and typicality of the agents/patients in the sentence. More broadly, the literature on implicit causality suggests that language is a rich source of causal information.

For our purposes, an important limitation of the implicit causality concept is that it takes as given some causal background knowledge (e.g., that mothers punish daughters when they admit guilt) and asks how people use this knowledge to make inferences about linguistic referents. Our goal in this paper is to flip this around: what happens when people know the referents but not the causal background knowledge? This situation is commonly encountered when people are reading newspaper headlines about scientific discoveries: if scientists report that eating ice cream increases the risk for cancer, a natural question is whether ice cream causes cancer. Careful scientists and journalists can expunge causal language from correlation studies, but can they expunge causal representations from the mental models of readers? To answer this question, we undertook a series of studies that assess what kinds of causal inferences people draw from correlational statements.

We conducted three studies to examine the inference of causality from association statements. The results of Studies 1 and 2 are summarized in Fig 1, and the results of Study 3 are shown in Fig 2. In Study 1, participants were presented with statements of the form “X is associated with Y” and asked to judge whether X caused Y, or Y caused X. In Study 2, participants were presented with statements of the form “X is associated with an increased probability of Y”, and asked to make similar causal judgments. In both studies, we ran versions with nonsense names designed to sound similar to medical terminology (e.g., “Themaglin” or “Pneuben”) or arbitrary letter symbols (see Methods for details).

Fig 1. Results of Studies 1 and 2. https://doi.org/10.1371/journal.pone.0286067.g001

Fig 2. Participants were given the same statements as in Study 2A, with the addition of a ‘Neither’ option to report that no causal direction was preferred. The y-axis indicates the proportion of participants who chose each response. Error bars show standard error estimates for a proportion. The dotted line indicates expected random response levels. https://doi.org/10.1371/journal.pone.0286067.g002

We analyzed the data from Study 1 as follows. If participants chose the first variable as causing the second variable after being presented with the sentence “[X] is associated with [Y]”, this was coded as 1, and if they chose the second variable as causing the first, this was coded as 0. All responses within each condition were averaged together (288 responses total in the ‘nonsense names’ condition, 291 responses total in the ‘symbols’ condition). If people were responding randomly, we would expect the average value to be 0.5. Note that a strategy such as ‘just pick the first answer’ is ruled out by the randomization of the variable order in both questions and answers. However, the mean response in both conditions was significantly below chance, by a two-sided proportion z-test using Holm-Bonferroni correction for multiple comparisons (‘nonsense names’ condition: M = 0.23, SE = 0.025, Z = −11.14, p < 10^−28; ‘symbols’ condition: M = 0.38, SE = 0.029, Z = −4.29, p < 10^−4).
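
The following Python sketch illustrates this analysis pipeline. It is not the authors' code, and the counts are placeholders chosen only so that the proportions roughly match the reported means; it codes each response as 1 or 0, tests the resulting proportion against the chance level of 0.5 with a two-sided one-sample proportion z-test, and applies a Holm-Bonferroni correction across the conditions.

```python
# Illustrative sketch of the Study 1 analysis as described in the text.
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

conditions = {
    # condition: (responses choosing "first variable causes second", total responses)
    "nonsense names": (66, 288),   # placeholder counts, proportion ~0.23
    "symbols": (111, 291),         # placeholder counts, proportion ~0.38
}

pvals = []
for name, (count, nobs) in conditions.items():
    z, p = proportions_ztest(count, nobs, value=0.5)  # H0: proportion = chance (0.5)
    pvals.append(p)
    print(f"{name}: M = {count / nobs:.2f}, Z = {z:.2f}, uncorrected p = {p:.2g}")

# Holm-Bonferroni correction across the two tests
reject, p_corrected, _, _ = multipletests(pvals, method="holm")
print("Holm-corrected p-values:", p_corrected, "reject H0:", reject)
```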

These results suggest that when given a simple association statement between two variables, participants inferred that the second variable causes the first. Put plainly, when presented with a sentence such as ‘Themaglin is associated with Pneuben’, or ‘X is associated with Y’, participants took this to imply ‘Pneuben causes Themaglin’, and ‘Y causes X’.

The analysis of data from Study 2 followed that of Study 1. If participants chose the first variable as causing the second variable after being presented with the sentence “[X] is associated with [RELATIONSHIP] [Y]”, this was coded as 1, otherwise the response was coded as 0. All responses within each relationship and within each study were averaged together. Again, if people were responding randomly, we would expect the average value to be 0.5.

We found that the addition of context drives all responses to be significantly above chance, by a two-sided proportion z-test using Holm-Bonferroni correction for multiple comparisons (‘nonsense names’ condition: M(risk increase) = 0.87, SE = 0.03, Z = 10.58, p < 10^−25; M(risk decrease) = 0.94, SE = 0.02, Z = 17.91, p < 10^−71; M(probability increase) = 0.88, SE = 0.03, Z = 11.25, p < 10^−28; M(probability decrease) = 0.89, SE = 0.03, Z = 12.00, p < 10^−32; ‘symbols’ condition: M(risk increase) = 0.84, SE = 0.04, Z = 9.14, p < 10^−19; M(risk decrease) = 0.87, SE = 0.03, Z = 10.96, p < 10^−27; M(probability increase) = 0.86, SE = 0.04, Z = 10.30, p < 10^−24; M(probability decrease) = 0.82, SE = 0.04, Z = 8.16, p < 10^−15). By contrast, the simple association statements (which do not include any context) were now not significantly different from chance. Although it is not entirely clear why this aspect of the data did not replicate Study 1, we suspect that it may be related to the context of more causally salient statements (risk increase/decrease, probability increase/decrease), which may have attuned participants to scrutinize association statements more carefully.

One concern with Studies 1 and 2 is that participants were forced to choose one causal direction. We addressed this in Study 3 by giving participants the option to report that neither causal direction was preferred. The results indicate that even with the option of reporting no preferred causal direction, people still typically took the association statement to imply that the first variable caused the second (Fig 2). For the pure association statements, the ratio of ‘X → Y’ to ‘Y → X’ responses was roughly twice that of Study 2, which we take to indicate that in Study 2 some participants were using the ‘Y → X’ statement to stand in for ‘neither are directly related’.

All proportions were different from one another within the different context questions. That is, we ran 3 two-sided proportion z-tests within each context (risk increase, risk decrease, probability increase, probability decrease, and simple association), comparing ‘X → Y’ to ‘Y → X’, ‘X → Y’ to ‘Neither’, and ‘Y → X’ to ‘Neither’, for a total of 15 comparisons, using the Holm-Bonferroni correction for multiple comparisons (note that the pre-registration has this as 7 questions and so 21 comparisons, due to a typographic error on the part of the experimenters in considering the number of questions asked).

All proportions were also significantly different from the chance response of 33%, except for ‘X → Y’ in the Association context, and ‘Neither’ in the Probability Decrease context. Given the overall pattern of responses, we take these latter two to be coincidental.
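
A minimal sketch of how such comparisons can be run (again not the authors' code, and the response counts are placeholders): each pair of response options within a context is compared with a two-sided proportion z-test, and each option is also tested against the three-way chance level of 1/3; a Holm-Bonferroni correction would then be applied across all of these tests, as in the reported analysis.

```python
# Illustrative sketch of the Study 3 comparisons for a single context.
from itertools import combinations
from statsmodels.stats.proportion import proportions_ztest

n = 100                                                 # hypothetical participants in one context
counts = {"X -> Y": 60, "Y -> X": 15, "Neither": 25}    # placeholder response counts

# Pairwise two-sided proportion z-tests between response options
for a, b in combinations(counts, 2):
    z, p = proportions_ztest([counts[a], counts[b]], [n, n])
    print(f"{a} vs {b}: Z = {z:.2f}, p = {p:.3g}")

# Each option against the three-way chance level of 1/3
for option, count in counts.items():
    z, p = proportions_ztest(count, n, value=1 / 3)
    print(f"{option} vs chance (33%): Z = {z:.2f}, p = {p:.3g}")
```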

These results indicate that when given an association statement between two variables with minimal context that indicates a change in the relationship for the second variable, participants inferred a causal relation, such that the first variable causes the second. In other words, when presented with a sentence such as ‘Themaglin is associated with an increased risk of Pneuben’, or ‘X is associated with an increased risk of Y’, participants took this to strongly imply ‘Themaglin causes Pneuben’, and ‘X causes Y’.

While we take the data to show that people draw causal implicatures from correlational statements, a possible concern is that participants were given a forced-choice between two causal relationships, without the option of declaring uncertainty, or rejecting both relationships. But this forced choice was a deliberate feature: because many people are highly drilled in the mantra that correlation does not imply causation, it is likely that when appropriately cued, they will activate the mantra. However, our hypothesis here is that there are implicit causal expectations about linguistic structures that are activated by correlational statements, and these expectations can be made explicit when people are forced to choose between different causal interpretations. If people truly are committed to non-causal interpretations of non-causal language, then participants should just choose arbitrarily between the different causal interpretations available to them. The fact that we found a large systematic preference argues in favor of our causal implicature hypothesis.

To more directly address the concern that forced choices between causal interpretations might yield artifactual results, we conducted a third study in which participants could choose a ‘Neither’ option. Strikingly, the results were essentially the same: participants still showed a strong preference for a particular causal direction in all of the experimental conditions apart from the association condition. Thus, it is unlikely that the response format produced a systematic bias.

One potential concern about our findings is that the association condition produced different results in Studies 1 and 2, deviating from random in the former but not in the latter. We speculated that this might be due to some kind of context effect. Study 3 sheds some additional light on this discrepancy, finding that most people endorse a non-causal interpretation of associative statements, although a large minority (47%) still favor a causal interpretation. Within that large minority, we find again a preference for X → Y rather than Y → X . We tentatively propose that many or even most of the people endorsing Y → X in Study 2 were doing so as a signal of protest against being asked to assign a causal direction.

Our results cannot unambiguously rule out several alternative hypotheses that hinge on different interpretations of the information we provided to participants. First, it is possible that participants did not distinguish between probabilistic and causal interpretations of the statements. For example, the statement that X increases the risk of Y might also imply that Y increases the risk of X, but the increase for Y given X might still be larger than the increase for X given Y. In this case, there is an asymmetry in the risk pattern, even if there is no causal relationship between X and Y. If participants align their causal preferences with the directional asymmetry, then they might show preferences deviating from random, which we have treated as a signature of causal implicature. In essence, this alternative hypothesis posits either that participants do not fully understand what causality is, or that risk directionality is used as a heuristic for causal implicature. We think it is somewhat unlikely that participants simply do not understand what we are asking when we elicit causal interpretations, given the sophisticated causal reasoning abilities demonstrated in many other studies.

A second alternative hypothesis is that participants interpreted statements of the form “X is associated with increased risk of Y” as “Doing X is associated with increased risk of Y” which would seem to imply a kind of causal intervention. Thus, on this hypothesis apparent causal preferences arise from vagueness in the statements. While we cannot rule out this possibility, we would argue that the vagueness hypothesis is really another version of causal implicature, where participants make a pragmatic interpretation that the speaker intends to communicate a causal relationship.

We have argued that the alignment and vagueness hypotheses may or may not be consistent with a causal implicature hypothesis. More work is required to answer this question.

In summary, certain correlational statements are associated with an increased probability of causal implicature. To be clear, we are not implying that these correlational statements cause causal implicature, but rather that they are correlated with causal implicature. In other words, correlation does not imply causation, but it does sometimes “imply” causation.

The studies were approved by the Harvard Institutional Review Board, protocol no. 19-1861.

Participants were recruited online [7] via the Prolific platform (https://www.prolific.co). Participants were restricted to those located in the USA who had completed at least 100 prior studies on Prolific with an acceptance rate of at least 90%. We recruited 100 participants in each study (total of N = 400), following a pilot indicating that the expected effect size was such that this number of participants yielded power >90%. Participants in any study described here (meaning, each sub-condition) were prohibited from participating in any other study described here.
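
For readers who want to reproduce this kind of sample-size reasoning, here is a minimal power sketch under stated assumptions, not the authors' actual power analysis: it approximates the power of a two-sided one-sample proportion z-test against chance (0.5) using the normal approximation. The alternative proportion of 0.33 stands in for a pilot estimate and is a placeholder, not a value reported in the paper.

```python
# Approximate power of a two-sided one-sample proportion z-test (normal approximation).
from math import sqrt
from scipy.stats import norm

def power_one_sample_proportion(p_alt, p_null=0.5, n=100, alpha=0.05):
    """Approximate power for testing H0: p = p_null when the true proportion is p_alt."""
    z_crit = norm.ppf(1 - alpha / 2)
    se_alt = sqrt(p_alt * (1 - p_alt) / n)     # standard error under the alternative
    lower = p_null - z_crit * se_alt           # approximate rejection thresholds for p-hat
    upper = p_null + z_crit * se_alt
    return norm.cdf((lower - p_alt) / se_alt) + (1 - norm.cdf((upper - p_alt) / se_alt))

# With n = 100 and a placeholder pilot proportion of 0.33, power comes out around 0.95.
print(f"power at n=100: {power_one_sample_proportion(0.33):.2f}")
```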

At the end of each study, participants were asked “Please describe, in a few words, what you were asked to do in this experiment?” We excluded participants who gave nonsensical or non-sequitur answers, such as ‘Do the causes’ or ‘Opinions’.

The analyses, experiments, exclusion criteria, and sample sizes were pre-registered, as available here: https://aspredicted.org/blind.php?x=JHK_JR1 and here https://aspredicted.org/blind.php?x=BP6_Z8D .

All data are available at https://osf.io/u34ex/?view_only=1ffdf6729af149d6b14f7e32692c0334 .

To examine the basic implicature of the phrasing ‘X is associated with Y’, without any further context, we designed a simple study in which participants read statements about the existence of an association between nonsense terms such as Themaglin and Pneuben (‘nonsense names’ condition), or abstract symbols like X and Y (‘symbols’ condition).

Participants.

We recruited 200 participants, and randomly assigned them evenly to the two conditions (‘nonsense names’ and ‘symbols’). Written/verbal informed consent was obtained from all participants for inclusion in the study. After excluding 4 participants, the mean age of participants in the ‘nonsense names’ condition was 35.4, and 57 identified as female. After excluding 3 participants in the ‘symbols’ condition, the mean age of the remaining participants was 34.7, and 66 identified as female.

Participants were informed that they would be asked a few simple questions about the possible relationship between different things, and that the questions were unrelated to one another. Participants then saw 3 questions in succession, each phrased as:

Suppose you read the following piece of information:

“[X] is associated with [Y]”

Which of the following is more likely?

Participants were then given a forced choice between the statements “[X] causes [Y]” and “[Y] causes [X]”.

In the ‘nonsense names’ condition, [X] and [Y] were replaced with the following nonsense terms: Themaglin, Rebosen, Denoden, Flembers, Agoriv, and Ceflar. In the ‘symbols’ condition, [X] and [Y] were replaced with the following symbols: X, Y, P, Z, G, and D. The ordering of the nonsense term pairs or symbol pairs within each question, the ordering of the forced choice answers to each question, and the order of the three questions were all randomized.
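
The randomization described here can be illustrated with a short, hypothetical Python sketch (not the authors' experiment code). For simplicity the sketch also forms the pairs at random, whereas the paper only specifies that the order within each pair, the order of the two forced-choice answers, and the order of the three questions were randomized.

```python
# Hypothetical generation of the three Study 1 trials.
import random

terms = ["Themaglin", "Rebosen", "Denoden", "Flembers", "Agoriv", "Ceflar"]

def make_trials(terms, rng=random):
    shuffled = terms[:]
    rng.shuffle(shuffled)                                  # randomizes pairing and within-pair order
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled), 2)]
    trials = []
    for x, y in pairs:
        statement = f"{x} is associated with {y}"
        answers = [f"{x} causes {y}", f"{y} causes {x}"]
        rng.shuffle(answers)                               # randomize answer order
        trials.append({"statement": statement, "answers": answers})
    rng.shuffle(trials)                                    # randomize question order
    return trials

for trial in make_trials(terms):
    print(trial["statement"], "->", trial["answers"])
```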

In Study 2, we provided additional context of the sort that is often found in the scientific literature, as well as the popular press. In particular, we examined the causal implicature of phrases such as ‘Ceflar is associated with a lower risk of Agoriv’, ‘Pneuben is associated with higher probability of Efogen’, and so on. The design was similar to Study 1: Participants were given statements about the existence of an association and minimal context about the relationship (risk, probability, increase, decrease) between nonsense terms (‘nonsense names’ condition), or abstract symbols like X and Y (‘symbols’ condition).

We recruited 200 participants in total, randomly and evenly distributing them to the two conditions (‘nonsense names’ and ‘symbols’). Written/verbal informed consent was obtained from all participants for inclusion in the study. After excluding 3 participants, the mean age of participants in the ‘nonsense names’ condition was 35.0, and 55 identified as female. After excluding 6 participants in the ‘symbols’ condition, the mean age of the remaining participants was 34.6, and 70 identified as female.

Participants were informed that they would be asked a few simple questions about the possible relationship between different things, and that the questions were unrelated to one another. Participants then saw 5 questions in succession, each phrased as:

“[X] is associated with [RELATIONSHIP] [Y]”

Participants were then given a forced choice between the statements “[X] causes [CHANGE] [Y]” and “[Y] causes [CHANGE] [X]”.

In the ‘nonsense names’ condition, [X] and [Y] were replaced with the following nonsense terms: Themaglin, Rebosen, Denoden, Flembers, Agoriv, Ceflar, Pneuben, Efogen, Turilin, and Laurem. In the ‘symbols’ condition, [X] and [Y] were replaced with the following symbols: T, R, P, E, A, C, X, Y, D, and F. The ordering of the nonsense term pairs or symbol pairs within each question, the ordering of the forced choice answers to each question, and the order of the five questions were all randomized. The [RELATIONSHIP] variable was replaced with: higher probability, higher risk, lower probability, lower risk, or was left empty. When the [RELATIONSHIP] variable was set to ‘higher’, the [CHANGE] variable was replaced with ‘an increase in’. When the [RELATIONSHIP] variable was set to ‘lower’, the [CHANGE] variable was replaced with ‘a decrease in’. When the [RELATIONSHIP] variable was left empty, the [CHANGE] variable was also left empty, recreating the structure of Study 1.
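
The template logic can be made concrete with a small, hypothetical sketch (not the authors' code; the exact surface wording of the statements is approximated): the value filled into the [RELATIONSHIP] slot determines the [CHANGE] wording used in both forced-choice answers.

```python
# Hypothetical mapping from the [RELATIONSHIP] slot to the [CHANGE] wording.
RELATIONSHIP_TO_CHANGE = {
    "a higher probability of": "an increase in",
    "a higher risk of": "an increase in",
    "a lower probability of": "a decrease in",
    "a lower risk of": "a decrease in",
    "": "",   # simple association, recreating the Study 1 structure
}

def build_question(x, y, relationship):
    change = RELATIONSHIP_TO_CHANGE[relationship]
    statement = f"{x} is associated with {relationship} {y}".replace("  ", " ")
    answers = (f"{x} causes {change} {y}".replace("  ", " "),
               f"{y} causes {change} {x}".replace("  ", " "))
    return statement, answers

print(build_question("Themaglin", "Pneuben", "a higher risk of"))
# -> statement 'Themaglin is associated with a higher risk of Pneuben' and the two
#    answers 'Themaglin causes an increase in Pneuben' / 'Pneuben causes an increase in Themaglin'
```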

In Study 3, we gave participants three response options rather than two, allowing them to express that the two entities were not directly causally related. More specifically, the study replicated Study 2, condition A (‘nonsense names’), with an additional response option: ‘A third factor is causally related to [X] and [Y], they are not directly related’, where X and Y were the same nonsense terms used in Study 2.

We recruited 100 participants, matching the sample sizes of the different conditions in Studies 1 and 2. Written/verbal informed consent was obtained from all participants for inclusion in the study. Participants were US-based. No participants were excluded. The mean age of participants was 34.8; 63 identified as female, and 37 identified as male.

Study 3 followed the logic and stimuli of Study 2A. Participants were informed that they would be asked a few simple questions about the possible relationship between different things, and that the questions were unrelated to one another. As in Study 2, the ordering of the nonsense term pairs within each question, the ordering of the forced choice answers to each question, and the order of the five questions were all randomized.

Data quality assurance

We appreciate the growing concern about the use of AI tools to answer online surveys, and we do not think there is yet an agreed-on safeguard against the use of the latest tools, such as ChatGPT-4, to answer online catch questions. We would note that the first two studies were run in early 2022, when the use of large language models was not widespread and their performance, when used, was rather limited. Our catch question was meant to weed out low-effort, automatic responses such as ‘have a good day’ or ‘relationships’. To the degree that we did not weed out all auto-replies, we believe that they only introduced a bit of noise, and given that our studies report differential effects, this extra noise on top of the main effect is not a primary concern. The third study was run in late 2022, around the advent of the (now legacy) version of ChatGPT, and before any adjustment for how well (or not) such tools can pass various tasks. We cannot guarantee that our last question weeds out malicious users of the latest automatic tools, and moving forward we plan to adjust our data validation. However, we believe that the timing of the study was such that it is unlikely that a large number of malicious users on Prolific (to the degree such a group existed) had time to adopt ChatGPT into their workflow and rack up hundreds of completed and approved studies in time for our study. We also note that the marginal financial gain of using these tools is likely outweighed by the cost of per-token expenses.

Acknowledgments

We are grateful to Michael Franke for helpful discussions.

  • 1. Grice HP. Logic and conversation. In: Speech Acts. Brill; 1975. p. 41–58.
  • 2. Clark HH. Using Language. Cambridge University Press; 1996.

What is Qualitative in Qualitative Research

  • Open access
  • Published: 27 February 2019
  • Volume 42, pages 139–160 (2019)


  • Patrik Aspers
  • Ugo Corte


What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.


If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.

To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance Howard Becker ( 1963 ) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also to become aware that those sensations are related to using it. Third, they require learning to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and by talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963 :56).

Another central aspect derived through the common-to-all-research interplay between induction and deduction (Becker 2017 ), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and which eventually impact his findings. Each phase typically unfolds through social interaction, and often with input from experienced users in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963 :235). In this study the increased understanding of smoking dope is a result of a combination of the meaning of the actors, and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017 ).

Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.

The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.

Others (e.g., Hammersley 2018; Fine and Hancock 2017) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of “qualitative research.” This threat can be further divided into that which comes from inside academia, such as the critique voiced by “quantitative research,” and that which comes from outside academia, including, for example, New Public Management. Hammersley (2018), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine (2003), and before him Gans (1999), he writes that ethnography has acquired a range of meanings and comes in many different versions, these often reflecting sharply divergent epistemological orientations. And already more than twenty years ago, while reviewing Denzin and Lincoln’s Handbook of Qualitative Methods, Fine argued:

While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).

According to Hammersley, there are today “serious threats to the practice of ethnographic work, on almost any definition” (2018:1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on “big data” and the emphasis on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley discusses some more or less insufficient existing definitions of ethnography.

The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.

Towards a Definition of Qualitative Research

Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.

Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski’s (1922) Argonauts of the Western Pacific , although even this study does not explicitly address the meaning of “qualitative.” In Weber’s ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982).

If we look through major sociology journals like the American Sociological Review , American Journal of Sociology , or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker’s study (1963), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995). Our point is that before this disciplinary self-awareness the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005). In the US, World War II seems to have sparked a critique of sociological work, including “qualitative work,” that did not follow the scientific canon (Rawls 2018), which was underpinned by a scientifically oriented and value-free philosophy of science. As a result the attempts and practice of integrating qualitative and quantitative sociology at Chicago lost ground to sociology that was more oriented to surveys and quantitative work at Columbia under Merton-Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951) that facilitated the use of this approach and its “methods.” The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.

This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: A definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966 ). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is :

Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)

Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches ( 2007 :2–4; 2002 :6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995 ), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either eluded or eschewed.

We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982 :239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012 ) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach and looking at the attempts to codify these practices in the form of a definition, as well as to a minor extent take Becker’s study as an exemplar of what qualitative researchers actually do, and what the characteristic of being ‘qualitative’ denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.

Lingering problems of “generalizability” and “how many cases do I need” (Small 2009) are blocking advancement – in this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009). Additionally, quantitative researchers sometimes unfairly criticize the former based on their own quality criteria. Scholars like Goertz and Mahoney (2012) have successfully focused on the different norms and practices beyond what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker (2017), who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.

The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982:239). A third consequence is providing an opening for critiques by scholars operating within different traditions (Valsiner 2000:101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012:9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research, involving strategies to improve it and to develop standards of evaluation in qualitative research. However, a specific focus on its distinguishing feature of being “qualitative,” while implicitly acknowledged, was discussed only briefly (for example, Best 2004).

In 2014 a theme issue was published in this journal on “Methods, Materials, and Meanings: Designing Cultural Analysis,” discussing central issues in (cultural) qualitative research (Berezin 2014; Biernacki 2014; Glaeser 2014; Lamont and Swidler 2014; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called “quantitative research” is of utmost importance to avoid misunderstandings and misguided debates between “qualitative” and “quantitative” researchers. Our strategy means that researchers, whether they consider themselves “qualitative” or “quantitative,” may in their actual practice combine qualitative and quantitative work.

In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work ( 1963 ). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large amount of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.

In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990 ). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods and textbooks, mainly in English, which is common to methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.

In an ideal situation we should expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched in a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.

In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We specifically zoom in on this time frame because we would expect that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation .

As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles also published during the past 19 years (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.

Lastly we picked two additional journals, Qualitative Research and Qualitative Sociology , in which we could expect to find texts addressing the notion of “qualitative.” From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of three prior issues. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. This selection procedure was used to obtain a manageable sample for the analysis.

The coding process of the 89 references we gathered in our selected review began soon after the first round of material was gathered, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993 :22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This – which may appear as an illogical grouping – merely reflects the “context” in which the matter of “qualitative” is discussed. If the selection process of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature which indicate what is the core of qualitative research. We have labeled them: “distinctions”, “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.

Analysis – What is Qualitative Research?

In this section we describe the four categories we identified in the coding, how they differently discuss qualitative research, as well as their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.

Qualitative and Quantitative

This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.

The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:

Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)

Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).

Others simply say that qualitative methods are inherently unscientific (Jovanović 2011 :19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006 :219). Qualitative methodologies have raised concerns over the limitations of quantitative templates (Brady et al. 2004 :4). Scholars such as King et al. ( 1994 ), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Also, researchers such as Becker ( 1966 :59; 1970 :42–43) have asserted that, if conducted properly, qualitative research and in particular ethnographic field methods, can lead to more accurate results than quantitative studies, in particular, survey research and laboratory experiments.

Some researchers, such as Kalof, Dan, and Dietz ( 2008 :79) claim that the boundaries between the two approaches are becoming blurred, and Small ( 2009 ) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994 :4). Ragin ( 1994 ), and similarly also Becker, ( 1996 :53), Marchel and Owens ( 2007 :303) think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994 :xii). Some claim that quantitative data can be utilized to discover associations, but in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009 :35). Consequently, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009 :35). Others contend that qualitative research is particularly well suited both to identify causality and to uncover fine descriptive distinctions (Fine and Hallett 2014 ; Lichterman and Isaac Reed 2014 ; Katz 2015 ).

There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, concerned with scientific approaches to societal change or vice versa; Snow and Morrill 1995; Denzin and Lincoln 2005), or whether it should develop falsifiable statements (Best 2004).

We propose that quantitative research is largely concerned with pre-determined variables (Small 2008 ); the analysis concerns the relations between variables. These categories are primarily not questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016 ). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study ( 1963 ), we see that he questioned pre-dispositional theories of deviant behavior working with pre-determined variables such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marijuana consumption by developing “variables” as part of the investigation. Thereby he presented new variables, or as we would say today, theoretical concepts, but which are grounded in the empirical material.

Qualitative Research

This category contains quotations that refer to descriptions of qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003 ; 2005 ), citing Nelson et al. (1992:4), argue that because qualitative research is “interdisciplinary, transdisciplinary, and sometimes counterdisciplinary” it is difficult to derive one single definition of it (Jovanović 2011 :3). According to them, in fact, “the field” is “many things at the same time,” involving contradictions, tensions over its focus, methods, and how to derive interpretations and findings ( 2003 : 11). Similarly, others, such as Flick ( 2007 :ix–x) contend that agreeing on an accepted definition has increasingly become problematic, and that qualitative research has possibly matured different identities. However, Best holds that “the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions” ( 2004 :54). Atkinson’s position is more definite: “the current state of qualitative research and research methods is confused” ( 2005 :3–4).

Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?

Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005:2; Creswell 2009), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be collected in situ? And does qualitative research have to be inherently concerned with meaning? Flick (2007), referring to Denzin and Lincoln (2005), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin (1994:85), note that qualitative research is often (especially early on in a project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility, and one that can lead to potentially better, but also worse, results. But is this not a feature of this type of research, rather than a defining description of its essence? Would this comment not also apply, albeit to varying degrees, to quantitative research?

In addition, Strauss (2003), along with others such as Alvesson and Kärreman (2011:10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partially because they tend to collect a large amount of data. While his analysis is correct at some points – “It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data” (Strauss 2003:10) – much of it concerns the supposed focus of qualitative research and its challenges, rather than what exactly it is about. But even in this instance, arguing that these are strictly the defining features of qualitative research would make for a weak case. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.

We can also see that in this category there is no consensus about specific qualitative methods, nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides likewise requires interpretation. However, there is no interpretation of quantitative raw data, i.e., the numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describes the defining characteristics of this approach.
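
The point that quantitative results, too, require interpretation, while the raw numbers themselves do not, can be illustrated with a minimal sketch (fabricated data, plain Python):

```python
# A toy sketch (fabricated numbers): a bivariate regression yields
# unambiguous coefficients, but what the slope *means* still has to be
# interpreted by the researcher against theory and context.
xs = [1, 2, 3, 4, 5]            # e.g., years of schooling beyond a baseline
ys = [2.0, 2.4, 3.1, 3.4, 4.1]  # e.g., log hourly wage

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
# The numbers as such need no interpretation; whether the slope reflects
# returns to schooling, selection, or something else entirely does.
```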

Although qualitative research does not appear to be defined in terms of a specific method, it is certainly common that fieldwork – that is, research that entails the researcher spending considerable time in the field being studied and using the knowledge gained there as data – is seen as emblematic of, or even identical to, qualitative research. But because we understand that fieldwork tends to focus primarily on the collection and analysis of qualitative data, we expected to find within it discussions of the meaning of “qualitative.” Again, however, this was not the case.

Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).

We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988:1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018). This may be accomplished by direct observation and also participation (Neuman 2007:276), although others, such as Denzin (1970:185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), by how it is carried out (a variety of methods are applied) or, most importantly for some, by involving an active, empathetic immersion in those being studied (Emerson 1988:2). We also retrieved descriptions of the goals it serves in relation to how it is taught (understanding the subjective meanings of the people studied, developing theory, or contributing to social change) (see for example Corte and Irwin 2017; Frankfort-Nachmias and Nachmias 1996:281; Trier-Bieniek 2012:639) by collecting the richest possible data (Lofland et al. 2006) to derive “thick descriptions” (Geertz 1973), and/or to aim at theoretical statements of general scope and applicability (for example, Emerson 1988; Fine 2003). We identified guidelines on how to evaluate it (for example, Becker 1996; Lamont 2004) and retrieved instructions on how it should be conducted (for example, Lofland et al. 2006): analysis should take place while the data gathering unfolds (Emerson 1988; Hammersley and Atkinson 2007; Lofland et al. 2006), observations should be of long duration (Becker 1970:54; Goffman 1989), and data should be of high quantity (Becker 1970:52–53). We also came across other, more questionable, distinctions between fieldwork and other methods:

Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself . This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995 :5)

But could not, for example, a strictly interview-based study be carried out with the same amount of flexibility, such as sequential interviewing (for example, Small 2009 )? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge and close interaction with people, their practices and their lifeworld.

It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. In Becker’s case, he had general experience from fields in which marihuana was used, based on which he did interviews with actual users in several fields.

Grounded Theory

Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010) original articulation, in Strauss and Corbin (1998), and in Charmaz (2006), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003:xi). We identified explanations of how this task can be accomplished – chiefly through two main procedures, constant comparison and theoretical sampling (Emerson 1998:96) – and of how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998:1). We also read descriptions of its main traits and of what it entails and fosters – for instance, exceptional flexibility, an inductive approach (Strauss and Corbin 1998:31–33; 1990; Esterberg 2002:7), an ability to step back and critically analyze situations, to recognize tendencies towards bias, to think abstractly and be open to criticism, to enhance sensitivity towards the words and actions of respondents, and to develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998:5–6). We also identified discussions of the value of triangulating different methods (whether or not they use grounded theory), including quantitative ones, and different theories, in order to achieve theoretical development (most comprehensively in Denzin 1970; Strauss and Corbin 1998; Timmermans and Tavory 2012). Finally, we located arguments about how its practice helps to systematize data collection, analysis, and the presentation of results (Glaser and Strauss [1967] 2010:16).

Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement for identifying questions and developing new concepts, or for making further distinctions with regard to old ones. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process and the numerous fine-tuned distinctions that the researcher makes during that process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.
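
As a rough illustration of the coding logic described here (not Glaser and Strauss’s actual procedure, and with invented fragments loosely echoing Becker’s stages), a sketch of constant comparison might look like this: each new fragment is compared with the codes already in use, and a new code, i.e., a new distinction, is introduced only when none of the existing ones fits.

```python
# A toy sketch of the logic of constant comparison (hypothetical fragments
# and codes; the keyword matching merely stands in for the researcher's
# judgement).
codebook = {
    "learning the technique": ["he showed me how to hold it"],
    "learning to perceive the effects": ["at first I did not feel anything"],
}

KEYWORDS = {
    "learning the technique": ("showed", "hold", "how to"),
    "learning to perceive the effects": ("feel", "felt", "notice", "high"),
}

def code_fragment(fragment, codebook):
    """Compare a new fragment with existing codes; open a new one if none fits."""
    for code in list(codebook):
        if any(keyword in fragment for keyword in KEYWORDS.get(code, ())):
            codebook[code].append(fragment)
            return code
    new_code = f"candidate code {len(codebook) + 1}"  # the researcher names it later
    codebook[new_code] = [fragment]
    return new_code

print(code_fragment("I noticed I was high and it scared me", codebook))
print(code_fragment("my friends taught me to define the sensations as pleasant", codebook))
print(list(codebook))
```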

Defining Qualitative Research

In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, “getting close,” and making distinctions. These notions capture aspects of what we think is “qualitative.” However, a comprehensive definition that is useful and that can further develop the field is lacking; not even a clear picture of its essential elements emerges. In other words, no definition arises from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted, numbered variables. This means that the data consist of variables on different scales – ordinal, but frequently ratio and absolute scales – and that the relation of the numbers to the variables, i.e., the justification for assigning numbers to an object or phenomenon, is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.

Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.

Distinctions

We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. This clearly sets it apart from quantitative analysis, which works with taken-for-granted variables – although, as mentioned, meta-analyses such as factor analysis may result in new variables. “Quality” refers essentially to distinctions, as already pointed out by Aristotle, who discusses the term “qualitative” by commenting: “By a quality I mean that in virtue of which things are said to be qualified somehow” (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made available to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge – a point, as we will see, that also has implications for “quantitative research.” The notion of being “significant” is paramount. New distinctions by themselves are not enough; merely adding concepts increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987; Davidsson [1988] 2001) to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: “Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data” (1994:79).

When Becker (1963) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke. This is a classic example of how the strategy of “getting close” to the material – for example the texts, people, or pictures that are subject to analysis – may enable researchers to obtain deeper insight and new knowledge by making distinctions, in this instance around the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. (1995), for example, hold that “qualitative coding is a way of opening up avenues of inquiry,” meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on the data (Emerson et al. 1995:151). Goodwin and Horowitz highlight the making of distinctions in relation to theory-building, writing: “Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data” (2002:37). In ideal-typical quantitative research, only existing and, so to speak, given variables would be used; if this is the case, no new distinctions are made. But would not many “quantitative” researchers also make new distinctions?
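
The aside about factor analysis can be made concrete with a small sketch (fabricated data; it assumes NumPy and scikit-learn are installed): several observed items are condensed into a latent factor, that is, a derived variable that was not among the original columns, and naming and judging that new variable is itself an act of distinction-making.

```python
# A toy sketch (fabricated data; assumes NumPy and scikit-learn are
# installed): three observed survey items are condensed into one latent
# factor, i.e., a derived variable that was not among the original columns.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                       # unobserved disposition
items = latent @ np.array([[0.9, 0.8, 0.7]]) + rng.normal(scale=0.3, size=(200, 3))

fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(items)                          # the new, derived variable
print(fa.components_.round(2))                            # loadings of the items
# Deciding what this new variable *is* (e.g., "generalized trust") and
# whether it is worth keeping is a qualitative act of distinction-making.
```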

Process

Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation: between theory and evidence, between analysis and the generation of material, between first- and second-order constructs (Schütz 1962:59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.

In the research process, a pilot study may be used to get a first glance of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straitjackets (Becker 1970:43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine-tune it and to test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis (1963:44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means moving between “theory” and “material,” but also dealing with negative cases, and Becker (1998) describes how discovering these negative cases impacted his research design and ultimately its findings.
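
A schematic sketch (hypothetical names and referrals, not Becker’s actual sample) of the snowball-like, processual expansion described above; in real research each wave would also feed back into the evolving questions and concepts:

```python
# A toy sketch (hypothetical names and referrals) of a snowball-like,
# processual expansion of a study: each wave adds respondents and, in real
# research, feeds back into the evolving questions and concepts.
referrals = {
    "musician_1": ["musician_2", "musician_3"],
    "musician_2": ["teacher_1"],
    "musician_3": ["musician_4", "clerk_1"],
}

def snowball(seeds, referrals, waves):
    sample, frontier = set(seeds), list(seeds)
    for wave in range(waves):
        frontier = [referred for person in frontier
                    for referred in referrals.get(person, [])
                    if referred not in sample]
        sample.update(frontier)
        # Here the researcher would revise working hypotheses in light of
        # what this wave of interviews suggested.
        print(f"wave {wave + 1}: sample size is now {len(sample)}")
    return sample

print(sorted(snowball(["musician_1"], referrals, waves=2)))
```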

Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data, or iteration between data, evidence, hypotheses, empirical work, and theory. The data – quantified variables – are in most cases fixed. Merging of data, which can of course be done in a quantitative research process, does not produce new data. New hypotheses are frequently tested, but the “raw data” are often the same. Obviously, over time new datasets are made available and put into use.

Closeness

Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman (1989) put it, ought to, get closer to the phenomenon being studied and to their data than quantitative researchers (for example, Silverman 2009:85). Put differently, essentially because of their methods qualitative researchers get into direct, close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started his interview study, as we noted, by talking to those he knew in the field of music in order to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got closer still to the field.

Additionally, the ethnographer’s design enables researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, of new data (related, for example, to archival research as an additional source, and, for ethnography, to situations not previously thought of as instrumental – what Mannay and Morgan (2015) term the “waiting field”), of serendipity (Merton and Barber 2004; Åkerström 2013), and possibly of reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker’s finding of what medical students call “crocks” (Becker et al. 1961:317), and Geertz’s (1973) study of “deep play” in Balinese society.

By getting and staying so close to their data – be it pictures, text, or humans interacting (Becker was himself a musician) – for a long time, as the research progressively comes into focus, qualitative researchers are prompted to continually test their hunches, presuppositions, and hypotheses. They test them against a reality that often (but certainly not always), practically as well as metaphorically, talks back, whether by validating them or by disqualifying their premises – correctly as well as incorrectly (Fine 2003; Becker 1970). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he initially read psychological theories, but when facing the data he developed a theory that looks at, one might say, everything but psychological dispositions to explain the use of marihuana. Researchers involved with ethnographic methods, especially, have a fairly unique opportunity to dig up and then test (in a circular, continuous, and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.

Let us stress that getting close is by no means restricted to ethnography. The notion of the hermeneutic circle, and of hermeneutics as a general way of understanding, implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can literally make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material and when analyzing it.

Quantitative research, we maintain, in the ideal-typical representation cannot get closer to the data. The data is essentially numbers in tables making up the variables (Franzosi 2016 :138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the number may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done, is not about the data itself—the numbers—but what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode without direct empirical evidence (cf. Becker 2017 ).

Improved Understanding

While distinctions, process, and getting closer refer to the qualitative work of the researcher, improved understanding refers to the conditions and outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding, since explanation does not include an idea of how knowledge is gained (Manicas 2006:15). Understanding, we argue, is the core concept of what we call the outcome of the process, when the research has made use of all the other elements that were integrated into it. Understanding, then, has a special status in qualitative research since it refers both to the conditions of knowledge and to the outcome of the process. Understanding can to some extent be seen as the condition of explanation, and it occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990). It is fundamentally connected to knowing, and to knowing how to do things (Heidegger [1927] 2001). Conceptually, the term hermeneutics is used to account for this process. Heidegger (1988) ties hermeneutics to human being and regards it as inseparable from the understanding of being. Here we use it in a broader sense, more connected to method in general (cf. Seiffert 1992). The abovementioned aspects of the approach – for example, “objectivity” and “reflexivity” – are conditions of scientific understanding. Understanding is the result of a circular process in which the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, in other words, some knowledge of the phenomenon studied. This pre-understanding, even in the form of prejudices, is questioned in the qualitative research process, which we see as iterative, and it changes gradually or suddenly through the iteration of data, evidence, and concepts. Qualitative research thus generates understanding in this iterative process, as the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data, which changes the evidence and, ultimately, the findings. Questioning – asking questions and putting what one assumes, prejudices and presumptions, into question – is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.

Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we get to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014 ). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when the elements are integrated in an iterative process.

It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To qualify as “scientific,” the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough to speak of “scientific understanding.” Though we as a collective can increase understanding of virtually everything, in all potential directions, also as a result of qualitative work, we refrain from this “objective” way of understanding, which has no means of discriminating between what we gain in understanding. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962:35–38, 46, 63), and that it rests on the pre-understanding that scientists have, and must have, in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective, perspective. Improved understanding is related to the question(s) at hand, and, to represent an improvement, it must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber’s famous note on objectivity, namely that scientific work aims at truths “which … can claim, even for a Chinese, the validity appropriate to an empirical analysis” ([1904] 1949:59). By qualifying “improved understanding” in this way, we argue that it is a general defining characteristic of qualitative research. Becker’s (1966) study, and other research on deviant behavior, increased our understanding of the social learning processes through which individuals take up a behavior. It also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make as large a contribution as Becker’s, but they are nonetheless qualitative research.

Understanding in the phenomenological sense, which is a hallmark of qualitative research, we argue, requires meaning, and this meaning is derived from the context, and above all from the data being analyzed. Ideal-typical quantitative research operates with given variables taking different numbers. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning, at the level of the data analyzed, is understood. This leads us to say that there are indeed many explanations that meet the criteria of understanding, for example the explanation of how one becomes a marihuana smoker presented by Becker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations, or rather correlations, that are not really understood.

We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The “raw data” that quantitative research, as an ideal-typical activity, refers to are not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016:138). If the researcher is to do “more” or to “change” something, this will be done through conjectures based on theoretical knowledge or on the researcher’s lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger (2001) and Heisenberg (cited in Franzosi 2010:619). Qualitative research, as we have argued, involves the interaction and questioning of concepts (theory), data, and evidence.

Ragin (2004:22) points out that “a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques).” We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, is consequently a combination of two criteria: (i) how to do things – namely, generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions – and (ii) the outcome – improved understanding that is novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion “qualitative research” is the outcome of an iterative process, involving both deduction and induction, in which we identified the categories that we analyzed. We thus claim to meet the first criterion, “how to do things.” The second criterion can only partially be judged by us, namely whether the outcome – in concrete form, the definition – improves the understanding of others in the scientific community.

We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning the pre-given (taken for granted) variables, but it is thus also about making new distinctions of any type of phenomenon, for example, by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material, previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.

We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed, but can vary in terms of dimensions, such as frequency or number. Age is an example: as a variable it can take on different numbers. In relation to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the process comes to a halt, the researcher becomes more distanced from her data, and it is no longer possible to make the new distinctions that increase our understanding. We have discussed the components of our definition above in relation to quantitative research. Our conclusion is that in the research that is called quantitative there are frequent and necessary qualitative elements.

Further, comparative empirical research on researchers primarily working with “quantitative” approaches and those working with “qualitative” approaches would, we propose, perhaps show that there are many similarities in the practices of the two. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, the differences may be exaggerated (e.g., Becker 1996:53, 2017; Marchel and Owens 2007:303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.

Several things follow from our findings. The most important concerns the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either “quantitative” or “qualitative.” By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements and how they, when combined in practice, generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled “qualitative,” such as interviews or participant observation, is inherently “qualitative.” What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, until the results are produced. Consequently, our analysis also suggests that researchers working with what in the literature and in jargon is often called “quantitative research” are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many “quantitative” researchers are, at least to some extent, engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between “qualitative” and “quantitative,” or start out as “qualitative” and later move into a “quantitative” phase (a distinct strategy that is not the same as “mixed methods” or simply combining induction and deduction). More generally speaking, the categories of “qualitative” and “quantitative” unfortunately often cover up practices, and this may lead to “camps” of researchers opposing one another. For example, regardless of whether the researcher is primarily oriented to “quantitative” or “qualitative” research, the role of theory is neglected (cf. Swedberg 2017). Our results open up for an interaction characterized not by differences, but by different emphases, and by similarities.

Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative ones. Franzosi (2010) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood by what is taken for granted, and by what is part of the lifeworld (Schütz 1962). Franzosi shows how one can use qualitative and quantitative methods and data to address scientific questions, analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers (2006) studied the meanings held by fashion photographers. He uses an empirical phenomenological approach and establishes meaning at the level of the actors. In a second step this meaning, and the different ideal-typical photographers constructed as a result of participant observation and interviews, are examined using quantitative data from a database: first to verify the different ideal types, and then to use these types to establish new knowledge about them. In both of these cases – and more examples can be found – the authors move from qualitative data and try to retain the meaning established there when using the quantitative data.
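
A small sketch can indicate what such a second, quantitative step might look like in practice (hypothetical rules and records, not Aspers’s actual procedure): ideal types derived from fieldwork are used to classify records in a database, so that the meaning established qualitatively is carried over into the quantitative step.

```python
# A toy sketch (hypothetical rules and records): fieldwork-derived ideal
# types are used to classify database records, carrying the qualitatively
# established meaning into the quantitative step.
photographers = [
    {"name": "A", "share_editorial": 0.9, "share_catalogue": 0.1},
    {"name": "B", "share_editorial": 0.2, "share_catalogue": 0.8},
    {"name": "C", "share_editorial": 0.5, "share_catalogue": 0.5},
]

def ideal_type(record):
    """Assign a fieldwork-derived ideal type; thresholds are illustrative only."""
    if record["share_editorial"] >= 0.7:
        return "editorial photographer"
    if record["share_catalogue"] >= 0.7:
        return "catalogue photographer"
    return "mixed"

counts = {}
for record in photographers:
    counts[ideal_type(record)] = counts.get(ideal_type(record), 0) + 1
print(counts)
```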

A second main result of our study is that a definition, and we have provided one, offers a way for researchers to clarify, and even evaluate, what is done. Hence, our definition can guide researchers and students, informing them how to think about the concrete research problems they face, and showing what it means to get closer in a process in which new distinctions are made. The definition can also be used to evaluate results, given that it provides a standard of evaluation (cf. Hammersley 2007): to see whether new distinctions are made and whether this improves our understanding of what is researched, in addition to evaluating how the research was conducted. By making explicit what qualitative research is, it becomes easier to communicate findings, and it becomes much harder to fly under the radar with substandard research, since there are standards of evaluation that make it easier to separate “good” from “not so good” qualitative research.

To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the “internal” issue of what qualitative research is, and the “external” critiques that make it harder to do qualitative research – critiques to which both pressure from quantitative methods and general changes in society contribute.

Åkerström, Malin. 2013. Curiosity and serendipity in qualitative research. Qualitative Sociology Review 9 (2): 10–18.

Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence . Oxford: Oxford University Press.

Alvesson, Mats, and Dan Kärreman. 2011. Qualitative research and theory development. Mystery as method . London: SAGE Publications.

Aspers, Patrik. 2006. Markets in fashion: A phenomenological approach. London: Routledge.

Atkinson, Paul. 2005. Qualitative research. Unity and diversity. Forum: Qualitative Social Research 6 (3): 1–15.

Becker, Howard S. 1963. Outsiders. Studies in the sociology of deviance . New York: The Free Press.

Becker, Howard S. 1966. Whose side are we on? Social Problems 14 (3): 239–247.

Becker, Howard S. 1970. Sociological work. Method and substance . New Brunswick: Transaction Books.

Becker, Howard S. 1996. The epistemology of qualitative research. In Ethnography and human development. Context and meaning in social inquiry , ed. Jessor Richard, Colby Anne, and Richard A. Shweder, 53–71. Chicago: University of Chicago Press.

Becker, Howard S. 1998. Tricks of the trade. How to think about your research while you're doing it . Chicago: University of Chicago Press.

Becker, Howard S. 2017. Evidence. Chicago: University of Chicago Press.

Becker, Howard, Blanche Geer, Everett Hughes, and Anselm Strauss. 1961. Boys in White, student culture in medical school . New Brunswick: Transaction Publishers.

Berezin, Mabel. 2014. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology 37 (2): 141–151.

Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research, ed. Charles Ragin, Joane Nagel, and Patricia White, 53–54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf

Biernacki, Richard. 2014. Humanist interpretation versus coding text samples. Qualitative Sociology 37 (2): 173–188.

Blumer, Herbert. 1969. Symbolic interactionism: Perspective and method . Berkeley: University of California Press.

Brady, Henry, David Collier, and Jason Seawright. 2004. Refocusing the discussion of methodology. In Rethinking social inquiry. Diverse tools, shared standards , ed. Brady Henry and Collier David, 3–22. Lanham: Rowman and Littlefield.

Brown, Allison P. 2010. Qualitative method and compromise in applied social research. Qualitative Research 10 (2): 229–248.

Charmaz, Kathy. 2006. Constructing grounded theory . London: Sage.

Corte, Ugo, and Katherine Irwin. 2017. The form and flow of teaching ethnographic knowledge: Hands-on approaches for learning epistemology. Teaching Sociology 45 (3): 209–219.

Creswell, John W. 2009. Research design. Qualitative, quantitative, and mixed method approaches . 3rd ed. Thousand Oaks: SAGE Publications.

Davidsson, David. [1988] 2001. The myth of the subjective. In Subjective, intersubjective, objective, ed. David Davidsson, 39–52. Oxford: Oxford University Press.

Denzin, Norman K. 1970. The research act: A theoretical introduction to sociological methods. Chicago: Aldine Publishing Company.

Denzin, Norman K., and Yvonna S. Lincoln. 2003. Introduction. The discipline and practice of qualitative research. In Collecting and interpreting qualitative materials , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–45. Thousand Oaks: SAGE Publications.

Denzin, Norman K., and Yvonna S. Lincoln. 2005. Introduction. The discipline and practice of qualitative research. In The Sage handbook of qualitative research , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–32. Thousand Oaks: SAGE Publications.

Emerson, Robert M., ed. 1988. Contemporary field research. A collection of readings . Prospect Heights: Waveland Press.

Emerson, Robert M., Rachel I. Fretz, and Linda L. Shaw. 1995. Writing ethnographic fieldnotes . Chicago: University of Chicago Press.

Esterberg, Kristin G. 2002. Qualitative methods in social research . Boston: McGraw-Hill.

Fine, Gary Alan. 1995. Review of “handbook of qualitative research.” Contemporary Sociology 24 (3): 416–418.

Fine, Gary Alan. 2003. Toward a peopled ethnography: Developing theory from group life. Ethnography 4 (1): 41–60.

Fine, Gary Alan, and Black Hawk Hancock. 2017. The new ethnographer at work. Qualitative Research 17 (2): 260–268.

Fine, Gary Alan, and Timothy Hallett. 2014. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography 3 (2): 188–203.

Flick, Uwe. 2002. Qualitative research. State of the art. Social Science Information 41 (1): 5–24.

Flick, Uwe. 2007. Designing qualitative research . London: SAGE Publications.

Frankfort-Nachmias, Chava, and David Nachmias. 1996. Research methods in the social sciences . 5th ed. London: Edward Arnold.

Franzosi, Roberto. 2010. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919- 1922)? Theory and Society 39 (6): 593–629.

Franzosi, Roberto. 2016. From method and measurement to narrative and number. International journal of social research methodology 19 (1): 137–141.

Gadamer, Hans-Georg. 1990. Wahrheit und Methode, Grundzüge einer philosophischen Hermeneutik . Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.

Gans, Herbert. 1999. Participant Observation in an Age of “Ethnography”. Journal of Contemporary Ethnography 28 (5): 540–548.

Geertz, Clifford. 1973. The interpretation of cultures . New York: Basic Books.

Gilbert, Nigel. 2009. Researching social life . 3rd ed. London: SAGE Publications.

Glaeser, Andreas. 2014. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology 37: 207–241.

Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.

Goertz, Gary, and James Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences . Princeton: Princeton University Press.

Goffman, Erving. 1989. On fieldwork. Journal of Contemporary Ethnography 18 (2): 123–132.

Goodwin, Jeff, and Ruth Horowitz. 2002. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology 25 (1): 33–47.

Habermas, Jürgen. [1981] 1987. The theory of communicative action . Oxford: Polity Press.

Hammersley, Martyn. 2007. The issue of quality in qualitative research. International Journal of Research & Method in Education 30 (3): 287–305.

Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.

Hammersley, Martyn. 2018. What is ethnography? Can it survive should it? Ethnography and Education 13 (1): 1–17.

Hammersley, Martyn, and Paul Atkinson. 2007. Ethnography. Principles in practice . London: Tavistock Publications.

Heidegger, Martin. [1927] 2001. Sein und Zeit . Tübingen: Max Niemeyer Verlag.

Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität. Gesamtausgabe II. Abteilung: Vorlesungen 1919–1944, Band 63. Frankfurt am Main: Vittorio Klostermann.

Hempel, Carl G. 1966. Philosophy of the natural sciences . Upper Saddle River: Prentice Hall.

Hood, Jane C. 2006. Teaching against the text. The case of qualitative methods. Teaching Sociology 34 (3): 207–223.

James, William. [1907] 1955. Pragmatism. New York: Meridian Books.

Jovanović, Gordana. 2011. Toward a social history of qualitative research. History of the Human Sciences 24 (2): 1–27.

Kalof, Linda, Amy Dan, and Thomas Dietz. 2008. Essentials of social research . London: Open University Press.

Katz, Jack. 2015. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research 44 (1): 108–144.

King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton: Princeton University Press.

Lamont, Michelle. 2004. Evaluating qualitative research: Some empirical findings and an agenda. In Report from workshop on interdisciplinary standards for systematic qualitative research , ed. M. Lamont and P. White, 91–95. Washington, DC: National Science Foundation.

Lamont, Michèle, and Ann Swidler. 2014. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology 37 (2): 153–171.

Lazarsfeld, Paul, and Alan Barton. 1982. Some functions of qualitative analysis in social research. In The varied sociology of Paul Lazarsfeld , ed. Patricia Kendall, 239–285. New York: Columbia University Press.

Lichterman, Paul, and Isaac Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods & Research. Prepublished 27 October 2014. https://doi.org/10.1177/0049124114554458

Lofland, John, and Lyn Lofland. 1995. Analyzing social settings. A guide to qualitative observation and analysis . 3rd ed. Belmont: Wadsworth.

Lofland, John, David A. Snow, Leon Anderson, and Lyn H. Lofland. 2006. Analyzing social settings. A guide to qualitative observation and analysis . 4th ed. Belmont: Wadsworth/Thomson Learning.

Long, Andrew F., and Mary Godfrey. 2004. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology 7 (2): 181–196.

Lundberg, George. 1951. Social research: A study in methods of gathering data . New York: Longmans, Green and Co..

Malinowski, Bronislaw. 1922. Argonauts of the Western Pacific: An account of native Enterprise and adventure in the archipelagoes of Melanesian New Guinea . London: Routledge.

Manicas, Peter. 2006. A realist philosophy of science: Explanation and understanding . Cambridge: Cambridge University Press.

Marchel, Carol, and Stephanie Owens. 2007. Qualitative research in psychology. Could William James get a job? History of Psychology 10 (4): 301–324.

McIntyre, Lisa J. 2005. Need to know. Social science research methods . Boston: McGraw-Hill.

Merton, Robert K., and Elinor Barber. 2004. The travels and adventures of serendipity. A Study in Sociological Semantics and the Sociology of Science . Princeton: Princeton University Press.

Mannay, Dawn, and Melanie Morgan. 2015. Doing ethnography or applying a qualitative technique? Reflections from the ‘waiting field’. Qualitative Research 15 (2): 166–182.

Neuman, Lawrence W. 2007. Basics of social research. Qualitative and quantitative approaches . 2nd ed. Boston: Pearson Education.

Ragin, Charles C. 1994. Constructing social research. The unity and diversity of method . Thousand Oaks: Pine Forge Press.

Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research, ed. Charles C. Ragin, Joane Nagel, and Patricia White, 22. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf

Rawls, Anne. 2018. The Wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of ‘science,’ European Journal of Social Theory (Online first).

Schütz, Alfred. 1962. Collected papers I: The problem of social reality . The Hague: Nijhoff.

Seiffert, Helmut. 1992. Einführung in die Hermeneutik . Tübingen: Franke.

Silverman, David. 2005. Doing qualitative research. A practical handbook . 2nd ed. London: SAGE Publications.

Silverman, David. 2009. A very short, fairly interesting and reasonably cheap book about qualitative research . London: SAGE Publications.

Silverman, David. 2013. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review 9 (2): 48–55.

Small, Mario L. 2009. “How many cases do I need?” on science and the logic of case selection in field-based research. Ethnography 10 (1): 5–38.

Small, Mario L. 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research, ed. Michèle Lamont and Patricia White, 165–171. Washington, DC: National Science Foundation.

Snow, David A., and Leon Anderson. 1993. Down on their luck: A study of homeless street people . Berkeley: University of California Press.

Snow, David A., and Calvin Morrill. 1995. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography 24 (3): 341–349.

Strauss, Anselm L. 2003. Qualitative analysis for social scientists. 14th ed. Cambridge: Cambridge University Press.

Strauss, Anselm L., and Juliette M. Corbin. 1998. Basics of qualitative research. Techniques and procedures for developing grounded theory . 2nd ed. Thousand Oaks: Sage Publications.

Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.

Swedberg, Richard. 1990. The new 'Battle of Methods'. Challenge January–February 3 (1): 33–38.

Timmermans, Stefan, and Iddo Tavory. 2012. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory 30 (3): 167–186.

Trier-Bieniek, Adrienne. 2012. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research 12 (6): 630–644.

Valsiner, Jaan. 2000. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information 39 (1): 99–113.

Weber, Max. [1904] 1949. ‘Objectivity’ in social science and social policy. In The methodology of the social sciences, ed. and trans. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.

Acknowledgements

Financial support for this research was provided by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from many useful comments by the three reviewers and the editor, from comments by members of the Uppsala Laboratory of Economic Sociology, as well as from Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.

Author information

Authors and Affiliations

Patrik Aspers: Department of Sociology, Uppsala University, Uppsala, Sweden; Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland

Ugo Corte: Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway

Corresponding author

Correspondence to Patrik Aspers.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Aspers, P., Corte, U. What is Qualitative in Qualitative Research. Qual Sociol 42, 139–160 (2019). https://doi.org/10.1007/s11133-019-9413-7

Published: 27 February 2019

Issue Date: 01 June 2019

DOI: https://doi.org/10.1007/s11133-019-9413-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Qualitative research
  • Epistemology
  • Philosophy of science
  • Phenomenology

  14. 3 Causes-of-Effects versus Effects-of-Causes

    The quantitative and qualitative cultures differ in the extent to which and the ways in which they address causes-of-effects and effects-of-causes questions. Quantitative scholars, who favor the effects-of-causes approach, focus on estimating the average effects of particular variables within populations or samples.

  15. The Importance of Qualitative Research for Causal Explanation in

    Abstract. The concept of causation has long been controversial in qualitative research, and many qualitative researchers have rejected causal explanation as incompatible with an interpretivist or constructivist approach. This rejection conflates causation with the positivist theory of causation, and ignores an alternative understanding of ...

  16. WRITING A RESEARCH STATEMENT FOR QUALITATIVE RESEARCH

    Which of the following can be used as a research question?, Compare the two statements and distinguish which best describes a research statement. Qualitative research statements imply cause and effect. Qualitative research statements seek to understand the behavior and underlying contexts of the observed subject. and more.

  17. Causality and Causal Inference in Social Work: Quantitative and

    The Nature of Causality and Causal Inference. The human sciences, including social work, place great emphasis on understanding the causes and effects of human behavior, yet there is a lack of consensus as to how cause and effect can and should be linked (Parascandola & Weed, 2001; Salmon, 1998; Susser, 1973).What little consensus exists seems to be that effects are assumed to be consequences ...

  18. 7.2 Causal relationships

    Explanatory research attempts to establish nomothetic causal relationships—an independent variable is demonstrated to cause changes a dependent variable. Exploratory and descriptive qualitative research contains some causal relationships, but they are actually descriptions of the causal relationships established by the participants in your study.

  19. Causal implicatures from correlational statements

    Correlation does not imply causation, but this does not necessarily stop people from drawing causal inferences from correlational statements. We show that people do in fact infer causality from statements of association, under minimal conditions. In Study 1, participants interpreted statements of the form "X is associated with Y" to imply that Y causes X. In Studies 2 and 3, participants ...

  20. PDF Chapter 4 Causation and Causal Complexity

    Mello, Patrick A. (2021) Qualitative Comparative Analysis: An Introduction to Research Design and Application, Washington, DC: Georgetown University Press, Chapter 4. Post-peer review, pre ...

  21. What is Qualitative in Qualitative Research

    What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being "qualitative," the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term "qualitative." Then, drawing on ideas we find scattered ...

  22. Methods for Evaluating Causality in Observational Studies

    Regression-discontinuity methods have been little used in medical research to date, but they can be helpful in the study of cause-and-effect relationships from observational data . Regression-discontinuity design is a quasi-experimental approach ( box 3 ) that was developed in educational psychology in the 1960s ( 18 ).

  23. WRITING A RESEARCH STATEMENT FOR QUALITATIVE RESEARCH

    Study with Quizlet and memorize flashcards containing terms like Research problem, Research questions, Qualitative research and more.