Causal Comparative Research: Definition, Types & Benefits


Within the field of research, there are multiple methodologies for finding answers to your questions. In this article, we cover everything you need to know about causal-comparative research, a methodology with many advantages and applications.

What Is Causal Comparative Research?

Causal-comparative research is a methodology used to identify cause-effect relationships between independent and dependent variables.

Researchers can study cause and effect in retrospect. This can help determine the consequences or causes of differences already existing among or between different groups of people.

Causal-comparative research almost always involves the following:

  • A method or set of methods to identify cause-and-effect relationships
  • A set of individuals (or entities) that are NOT selected randomly; they were intentionally chosen for this specific study
  • Variables represented in two or more groups (fewer than two would leave nothing to compare)
  • Non-manipulated independent variables; the relationship is typically a suggested one, since the independent variable cannot be fully controlled
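The comparison at the heart of this design can be sketched in a few lines of Python. The data and the `welch_t` helper below are hypothetical illustrations, not part of any survey platform: two pre-existing, non-randomized groups are compared on an outcome using Welch's two-sample t statistic.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic for two independent groups."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Two pre-existing groups (not randomly assigned), e.g. test scores
# of students who did vs. did not attend a tutoring program.
tutored = [78, 85, 90, 82, 88]
untutored = [70, 75, 72, 68, 80]

print(round(welch_t(tutored, untutored), 2))  # prints 3.88
```

A large |t| suggests the group difference is unlikely to be chance alone, but because the groups were not randomly assigned, it cannot by itself prove causation.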

Types of Causal Comparative Research

Causal-comparative research is broken down into two types:

  • Retrospective Comparative Research
  • Prospective Comparative Research

Retrospective Comparative Research: Involves investigating a particular question after the effects have occurred, in an attempt to determine whether a specific variable influences another variable.

Prospective Comparative Research: The researcher initiates the study, starting with the causes and aiming to analyze the effects of a given condition. This type of investigation is much less common than the retrospective type.


Causal Comparative Research vs Correlation Research

The universal rule of statistics… correlation is NOT causation! 

Causal-comparative research does not rely on relationships. Instead, it compares two groups to find out whether the independent variable affected the outcome of the dependent variable.

When running causal-comparative research, none of the variables can be influenced, and a cause-effect relationship has to be established with a persuasive, logical argument; otherwise, it’s a correlation.

Another significant difference between the two methodologies is how the collected data is analyzed. In causal-comparative research, results are usually analyzed with cross-break (contingency) tables and comparisons of group averages, while correlation research typically uses scatter charts and correlation coefficients.
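To make the contrast concrete, here is a minimal Python sketch of both analyses; all data is hypothetical. The causal-comparative side tallies a cross-break (contingency) table of group membership against a categorical outcome, while the correlation side computes Pearson's r between two numeric variables.

```python
from collections import Counter
from statistics import mean, pstdev

# Causal-comparative style: a cross-break (contingency) table of
# group membership vs. a categorical outcome (hypothetical data).
records = [("smoker", "ill"), ("smoker", "healthy"), ("smoker", "ill"),
           ("nonsmoker", "healthy"), ("nonsmoker", "healthy"), ("nonsmoker", "ill")]
crossbreak = Counter(records)
print(crossbreak[("smoker", "ill")])  # prints 2

# Correlation style: Pearson's r between two numeric variables.
def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

hours = [1, 2, 3, 4, 5]
score = [52, 55, 61, 64, 70]
print(round(pearson_r(hours, score), 2))  # prints 0.99
```

The cross-break table compares counts across discrete groups; the correlation coefficient summarizes a linear relationship between continuous variables.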

Advantages and Disadvantages of Causal Comparative Research

Like any research methodology, causal-comparative research has specific uses and limitations to consider for your next project. Below we list some of the main advantages and disadvantages.

Advantages

  • It is efficient: it saves human and economic resources and can be carried out relatively quickly.
  • It identifies the causes of certain occurrences (or non-occurrences).
  • It enables descriptive analysis where a true experiment is not possible.

Disadvantages

  • You cannot fully manipulate or control the independent variable, and there is no randomization.
  • Like other methodologies, it is prone to research bias; the most common type is subject-selection bias, which must be carefully avoided so as not to compromise the study’s validity.
  • Loss of subjects, location influences, poor subject attitudes and testing threats are always a possibility.

Finally, remember that the results of this type of causal research should be interpreted with caution: a common mistake is to assume that a relationship between two variables guarantees that one variable influences, or is the main factor influencing, the other.


QuestionPro can be your ally in your next causal-comparative research project

QuestionPro is one of the platforms most used by the world’s leading research agencies, thanks to its diverse functions and versatility when collecting and analyzing data.

With QuestionPro you will not only be able to collect the data needed for your causal-comparative research, but you will also have access to a series of advanced reports and analyses to obtain valuable insights for your research project.

We invite you to learn more about our Research Suite, schedule a free demo of our main features today, and clarify all your doubts about our solutions.


Author: John Oppenhimer


Causal Comparative Research: Methods And Examples


Ritu was in charge of marketing a new protein drink about to be launched. The client wanted a causal-comparative study highlighting the drink’s benefits. They demanded that comparative analysis be made the main campaign design strategy. After carefully analyzing the project requirements, Ritu decided to follow a causal-comparative research design. She realized that causal-comparative research emphasizing physical development in different groups of people would lay a good foundation to establish the product.

What Is Causal Comparative Research?

Examples of causal comparative research variables.

Causal-comparative research is a method used to identify the cause–effect relationship between a dependent and independent variable. This relationship is usually a suggested relationship because we can’t control an independent variable completely. Unlike correlation research, this doesn’t rely on relationships. In a causal-comparative research design, the researcher compares two groups to find out whether the independent variable affected the outcome or the dependent variable.

A causal-comparative method determines whether one variable has a direct influence on the other and why. It identifies the causes of certain occurrences (or non-occurrences). It makes a study descriptive rather than experimental by scrutinizing the relationships among different variables in which the independent variable has already occurred. Variables can’t be manipulated sometimes, but a link between dependent and independent variables is established and the implications of possible causes are used to draw conclusions.

In a causal-comparative design, researchers study cause and effect in retrospect and determine consequences or causes of differences already existing among or between groups of people.

Let’s look at some characteristics of causal-comparative research:

  • This method tries to identify cause and effect relationships.
  • Two or more groups are included as variables.
  • Individuals aren’t selected randomly.
  • Independent variables can’t be manipulated.
  • It helps save time and money.

The main purpose of a causal-comparative study is to explore effects, consequences and causes. There are two types of causal-comparative research design. They are:

Retrospective Causal Comparative Research

For this type of research, a researcher has to investigate a particular question after the effects have occurred. They attempt to determine whether or not a variable influences another variable.

Prospective Causal Comparative Research

The researcher initiates a study, beginning with the causes and aiming to analyze the effects of a given condition. This is not as common as retrospective causal-comparative research.

Usually, it’s easier to compare a variable with the known than the unknown.

Researchers use causal-comparative research to achieve research goals by comparing two variables that represent two groups. This data can include differences in opportunities, privileges exclusive to certain groups or developments with respect to gender, race, nationality or ability.

For example, to find out the difference in wages between men and women, researchers have to make a comparative study of wages earned by both genders across various professions, hierarchies and locations. None of the variables can be influenced, and a cause-effect relationship has to be established with a persuasive, logical argument. Some common variables investigated in this type of research are:

  • Achievement and other ability variables
  • Family-related variables
  • Organismic variables such as age, sex and ethnicity
  • Variables related to schools
  • Personality variables

While raw test scores, assessments and other measures (such as grade point averages) are used as data in this research, standardized tests, structured interviews and surveys are popular research tools.
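The wage comparison described above can be sketched as follows (all figures hypothetical): group averages are compared within each profession, so the mix of professions can't masquerade as a gender effect.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical wage records: (profession, group, wage in thousands).
records = [
    ("engineer", "men", 95), ("engineer", "women", 88),
    ("engineer", "men", 102), ("engineer", "women", 91),
    ("teacher", "men", 54), ("teacher", "women", 52),
    ("teacher", "men", 50), ("teacher", "women", 49),
]

wages = defaultdict(list)
for profession, group, wage in records:
    wages[(profession, group)].append(wage)

# Compare group means within each profession, not across the whole
# pool, so the profession mix doesn't distort the comparison.
for profession in ("engineer", "teacher"):
    gap = mean(wages[(profession, "men")]) - mean(wages[(profession, "women")])
    print(profession, round(gap, 1))  # prints: engineer 9.0, then teacher 1.5
```

Any such gap is still only a suggested relationship: the groups were not randomly formed, so a persuasive logical argument is needed before calling it a cause.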

However, there are drawbacks of causal-comparative research too, such as its inability to manipulate or control an independent variable and the lack of randomization. Subject-selection bias always remains a possibility and poses a threat to the internal validity of a study. Researchers can control it with statistical matching or by creating identical subgroups. Executives have to look out for loss of subjects, location influences, poor attitude of subjects and testing threats to produce a valid research study.
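Statistical matching, mentioned above as a control for subject-selection bias, can be illustrated with a toy greedy nearest-neighbour matcher on a single covariate such as age. The function name and data are illustrative only, not a standard library API.

```python
def match_on_covariate(treated, comparison):
    """Greedy nearest-neighbour matching on a single covariate.

    treated / comparison: lists of (subject_id, covariate) pairs.
    Returns matched (treated_id, comparison_id) pairs; each
    comparison subject is used at most once.
    """
    pool = list(comparison)
    pairs = []
    for tid, tcov in treated:
        if not pool:
            break
        best = min(pool, key=lambda c: abs(c[1] - tcov))
        pool.remove(best)
        pairs.append((tid, best[0]))
    return pairs

# Hypothetical subjects with ages as the covariate to balance.
treated = [("t1", 25), ("t2", 40)]
comparison = [("c1", 39), ("c2", 24), ("c3", 60)]
print(match_on_covariate(treated, comparison))
```

Matching on age builds comparison subgroups that resemble the treated group, reducing (though never eliminating) the selection-bias threat to internal validity.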

Harappa’s Thinking Critically program is for managers who want to learn how to think effectively before making critical decisions. Learn how leaders articulate the reasons behind and implications of their decisions. Become a growth-driven manager looking to select the right strategies to outperform targets. It’s packed with problem-solving and effective-thinking tools that are essential for skill development. What’s more? It offers live learning support and the opportunity to progress at your own pace. Ask for your free demo today!

Explore Harappa Diaries to learn more about topics such as Objectives Of Research Methodology , Types Of Thinking , What Is Visualisation and Effective Learning Methods to upgrade your knowledge and skills.


Causal research: definition, examples and how to use it.

16 min read Causal research enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. Discover how this research can reduce employee turnover and increase customer success for your business.

What is causal research?

Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables.

It’s often used by companies to determine the impact of changes in products, features or service processes on critical company metrics. Some examples:

  • How does rebranding of a product influence intent to purchase?
  • How would expansion to a new market segment affect projected sales?
  • What would be the impact of a price increase or decrease on customer loyalty?

To maintain the accuracy of causal research, ‘confounding variables’ — outside influences that could distort the results — are controlled. This is done either by keeping them constant during data collection or by using statistical methods. These variables are identified before the start of the research experiment.
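One common statistical method for controlling a confounder is stratification: compare exposed and unexposed cases within each level of the confounder, then average the within-stratum effects. A minimal Python sketch with hypothetical data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical observations: (confounder_stratum, exposed, outcome).
data = [
    ("young", True, 8), ("young", False, 7),
    ("young", True, 9), ("young", False, 6),
    ("old", True, 4), ("old", False, 3),
    ("old", True, 5), ("old", False, 2),
]

by_stratum = defaultdict(lambda: {True: [], False: []})
for stratum, exposed, outcome in data:
    by_stratum[stratum][exposed].append(outcome)

# Effect estimate within each stratum, then averaged: the confounder
# is held constant inside each comparison.
effects = [mean(groups[True]) - mean(groups[False])
           for groups in by_stratum.values()]
print(round(mean(effects), 2))  # prints 2.0
```

Because the confounder is fixed within each stratum, it cannot drive the within-stratum difference; the averaged estimate is therefore adjusted for it.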

As well as the above, research teams will outline several other variables and principles in causal research:

  • Independent variables

The variables that may cause direct changes in another variable. For example, when studying the effect of class attendance on a student’s grade point average, the independent variable is class attendance.

  • Control variables

These are the components that remain unchanged during the experiment so researchers can better understand what conditions create a cause-and-effect relationship.  

  • Causation

This describes the cause-and-effect relationship. When researchers find causation (or the cause), they’ve conducted all the processes necessary to prove it exists.

  • Correlation

Any relationship between two variables in the experiment. It’s important to note that correlation doesn’t automatically mean causation. Researchers will typically establish correlation before proving cause-and-effect.

  • Experimental design

Researchers use experimental design to define the parameters of the experiment — e.g. categorizing participants into different groups.

  • Dependent variables

These are measurable variables that may change or are influenced by the independent variable. For example, in an experiment about whether or not terrain influences running speed, your dependent variable is running speed.

Why is causal research useful?

It’s useful because it enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. This allows businesses to create plans that benefit the company. It’s also a great research method because researchers can immediately see how variables affect each other and under what circumstances.

Also, once the first experiment has been completed, researchers can use the learnings from the analysis to repeat the experiment or apply the findings to other scenarios. Because of this, it’s widely used to help understand the impact of changes in internal or commercial strategy to the business bottom line.

Some examples include:

  • Understanding how overall training levels are improved by introducing new courses
  • Examining which variations in wording make potential customers more interested in buying a product
  • Testing a market’s response to a brand-new line of products and/or services

So, how does causal research compare and differ from other research types?

Well, there are a few research types that are used to find answers to some of the examples above:

1. Exploratory research

As its name suggests, exploratory research involves assessing a situation (or situations) where the problem isn’t clear. Through this approach, researchers can test different avenues and ideas to establish facts and gain a better understanding.

Researchers can also use it to first navigate a topic and identify which variables are important. Because no area is off-limits, the research is flexible and adapts as the investigation progresses.

Finally, this approach is unstructured and often involves gathering qualitative data, giving the researcher freedom to progress the research according to their thoughts and assessment. However, this may make results susceptible to researcher bias and may limit the extent to which a topic is explored.

2. Descriptive research

Descriptive research is all about describing the characteristics of the population, phenomenon or scenario studied. It focuses more on the “what” of the research subject than the “why”.

For example, a clothing brand wants to understand the fashion purchasing trends amongst buyers in California — so they conduct a demographic survey of the region, gather population data and then run descriptive research. The study will help them to uncover purchasing patterns amongst fashion buyers in California, but not necessarily why those patterns exist.

As the research happens in a natural setting, variables can cross-contaminate other variables, making it harder to isolate cause and effect relationships. Therefore, further research will be required if more causal information is needed.


How is causal research different from the other two methods above?

Well, causal research looks at what variables are involved in a problem and ‘why’ they act a certain way. As the experiment takes place in a controlled setting (thanks to controlled variables) it’s easier to identify cause-and-effect amongst variables.

Furthermore, researchers can carry out causal research at any stage in the process, though it’s usually carried out in the later stages once more is known about a particular topic or situation.

Finally, compared to the other two methods, causal research is more structured, and researchers can combine it with exploratory and descriptive research to assist with research goals.

Summary of three research types

  • Exploratory research: unstructured and flexible; gathers mostly qualitative data to establish facts and identify important variables when the problem isn’t clear.
  • Descriptive research: describes the “what” of a population, phenomenon or scenario in its natural setting, without isolating cause and effect.
  • Causal research: structured and controlled; identifies which variables are involved in a problem and why, usually at a later stage of research.

What are the advantages of causal research?

  • Improve experiences

By understanding which variables have positive impacts on target variables (like sales revenue or customer loyalty), businesses can improve their processes, return on investment, and the experiences they offer customers and employees.

  • Help companies improve internally

By conducting causal research, management can make informed decisions about improving their employee experience and internal operations. For example, understanding which variables led to an increase in staff turnover.

  • Repeat experiments to enhance reliability and accuracy of results

When variables are identified, researchers can replicate cause-and-effect with ease, providing them with reliable data and results to draw insights from.

  • Test out new theories or ideas

If causal research is able to pinpoint the exact outcome of mixing together different variables, research teams have the ability to test out ideas in the same way to create viable proof of concepts.

  • Fix issues quickly

Once an undesirable effect’s cause is identified, researchers and management can take action to reduce the impact of it or remove it entirely, resulting in better outcomes.

What are the disadvantages of causal research?

  • Provides information to competitors

If you plan to publish your research, it provides information about your plans to your competitors. For example, they might use your research outcomes to identify what you are up to and enter the market before you.

  • Difficult to administer

Causal research is often difficult to administer because it’s not possible to control the effects of extraneous variables.

  • Time and money constraints

Budgetary and time constraints can make this type of research expensive to conduct and repeat. Also, if an initial attempt doesn’t provide a cause and effect relationship, the ROI is wasted and could impact the appetite for future repeat experiments.

  • Requires additional research to ensure validity

You can’t rely on the outcomes of causal research alone, as they can be inaccurate. It’s best to conduct other types of research alongside it to confirm its output.

  • Trouble establishing cause and effect

Researchers might identify that two variables are connected, but struggle to determine which is the cause and which variable is the effect.

  • Risk of contamination

There’s always the risk that people outside your market or area of study could affect the results of your research. For example, if you’re conducting a retail store study, shoppers outside your ‘test parameters’ could shop at your store and skew the results.

How can you use causal research effectively?

To better highlight how you can use causal research across functions or markets, here are a few examples:

Market and advertising research

A company might want to know if their new advertising campaign or marketing campaign is having a positive impact. So, their research team can carry out a causal research project to see which variables cause a positive or negative effect on the campaign.

For example, a cold-weather apparel company in a winter ski-resort town may see an increase in sales after a campaign targeted at skiers. To see if one caused the other, the research team could set up a duplicate experiment to see if the same campaign would generate sales from non-skiers. If sales drop or differ among non-skiers, it’s likely that the campaign had a direct effect on skiers, encouraging them to purchase products.
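Comparing the two campaign audiences amounts to comparing two conversion rates. Here is a hedged sketch with hypothetical counts, using the standard two-proportion z statistic:

```python
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    return (p_a - p_b) / se

# Hypothetical: 120 of 1000 skiers vs. 70 of 1000 non-skiers bought
# after seeing the same campaign.
print(round(two_proportion_z(120, 1000, 70, 1000), 2))  # prints 3.81
```

A z value this far from zero suggests the gap between the two groups is unlikely to be sampling noise, supporting (though not proving) a campaign effect on skiers.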

Improving customer experiences and loyalty levels

Customers enjoy shopping with brands that align with their own values, and they’re more likely to buy and present the brand positively to other potential shoppers as a result. So, it’s in your best interest to deliver great experiences and retain your customers.

For example, the Harvard Business Review found that increasing customer retention rates by 5% increased profits by 25% to 95%. But let’s say you want to increase your own retention: how can you identify which variables contribute to it? Using causal research, you can test hypotheses about which processes, strategies or changes influence customer retention. For example, is it the streamlined checkout? The personalized product suggestions? Or a new solution that solved their problem? Causal research will help you find out.


Improving problematic employee turnover rates

If your company has a high attrition rate, causal research can help you narrow down the variables or reasons which have the greatest impact on people leaving. This allows you to prioritize your efforts on tackling the issues in the right order, for the best positive outcomes.

For example, through causal research, you might find that employee dissatisfaction due to a lack of communication and transparency from upper management leads to poor morale, which in turn influences employee retention.

To rectify the problem, you could implement a routine feedback loop or session that enables your people to talk to your company’s C-level executives so that they feel heard and understood.

How to conduct causal research: first steps

1. Define the purpose of your research

What questions do you have? What do you expect to come out of your research? Think about which variables you need to test out the theory.

2. Pick a random sampling if participants are needed

Using a technology solution to support your sampling, like a database, can help you define who you want your target audience to be, and how random or representative they should be.

3. Set up the controlled experiment

Once you’ve defined which variables you’d like to measure to see if they interact, think about how best to set up the experiment. This could be in-person or in-house via interviews, or it could be done remotely using online surveys.

4. Carry out the experiment

Make sure to keep all irrelevant variables the same, and only change the causal variable (the one that causes the effect) to gather the correct data. Depending on your method, you could be collecting qualitative or quantitative data, so make sure you record your findings regularly.

5. Analyze your findings

Either manually or using technology, analyze your data to see if any trends, patterns or correlations emerge. By looking at the data, you’ll be able to see what changes you might need to make next time, or whether there are questions that require further research.

6. Verify your findings

Your first attempt gives you the baseline figures to compare the new results to. You can then run another experiment to verify your findings.

7. Do follow-up or supplemental research

You can supplement your original findings by carrying out research that goes deeper into causes or explores the topic in more detail. One of the best ways to do this is to use a survey. See ‘Use surveys to help your experiment’.
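Steps 4 to 6 above boil down to measuring a difference and checking it isn't chance. A permutation test is one simple, assumption-light way to do that; the data and function below are a hypothetical sketch, not a prescribed method.

```python
import random
from statistics import mean

def permutation_p_value(a, b, n_iter=2000, seed=0):
    """Share of random relabelings whose mean gap is at least as
    large as the observed one (two-sided permutation test)."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical baseline vs. follow-up measurements.
control = [5.1, 4.8, 5.0, 5.2, 4.9]
variant = [5.9, 6.1, 5.8, 6.0, 6.2]
print(permutation_p_value(control, variant) < 0.05)  # prints True
```

A small p-value says random relabeling rarely reproduces a gap this large, which supports (but, in an observational design, does not prove) a real effect.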

Identifying causal relationships between variables

To verify if a causal relationship exists, you have to satisfy the following criteria:

  • Nonspurious association

A clear correlation exists between the cause and the effect. In other words, no ‘third’ variable that relates to both cause and effect should exist.

  • Temporal sequence

The cause occurs before the effect. For example, increased ad spend on product marketing would contribute to higher product sales.

  • Concomitant variation

The variation between the two variables is systematic. For example, if a company doesn’t change its IT policies and technology stack, then changes in employee productivity were not caused by IT policies or technology.
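The nonspurious-association criterion above can be probed with a partial correlation: if the x–y correlation vanishes once a third variable z is held fixed, the association was spurious. Below is a sketch with hypothetical numbers deliberately constructed so that z drives both x and y.

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = mean((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def partial_r(xs, ys, zs):
    """Correlation of x and y with a third variable z held fixed."""
    rxy, rxz, ryz = pearson_r(xs, ys), pearson_r(xs, zs), pearson_r(ys, zs)
    return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

# Hypothetical: z (e.g. season) drives both ad impressions x and sales y.
z = [1, 2, 3, 4, 5]
x = [2.5, 3.0, 6.0, 9.0, 9.5]
y = [4.0, 5.5, 8.0, 11.5, 16.0]

print(round(pearson_r(x, y), 2))        # strong raw correlation
print(abs(partial_r(x, y, z)) < 1e-6)   # vanishes once z is held fixed
```

Here x and y look strongly related, yet the partial correlation given z is essentially zero: the raw association is spurious, failing the first criterion.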

How can surveys help your causal research experiments?

There are some surveys that are perfect for assisting researchers with understanding cause and effect. These include:

  • Employee Satisfaction Survey – An introductory employee satisfaction survey that provides you with an overview of your current employee experience.
  • Manager Feedback Survey – An introductory manager feedback survey geared toward improving your skills as a leader with valuable feedback from your team.
  • Net Promoter Score (NPS) Survey – Measure customer loyalty and understand how your customers feel about your product or service using one of the world’s best-recognized metrics.
  • Employee Engagement Survey – An entry-level employee engagement survey that provides you with an overview of your current employee experience.
  • Customer Satisfaction Survey – Evaluate how satisfied your customers are with your company, including the products and services you provide and how they are treated when they buy from you.
  • Employee Exit Interview Survey – Understand why your employees are leaving and how they’ll speak about your company once they’re gone.
  • Product Research Survey – Evaluate your consumers’ reaction to a new product or product feature across every stage of the product development journey.
  • Brand Awareness Survey – Track the level of brand awareness in your target market, including current and potential future customers.
  • Online Purchase Feedback Survey – Find out how well your online shopping experience performs against customer needs and expectations.

That covers the fundamentals of causal research and should give you a foundation for ongoing studies to assess opportunities, problems, and risks across your market, product, customer, and employee segments.

If you want to transform your research, empower your teams and get insights on tap to get ahead of the competition, maybe it’s time to leverage Qualtrics CoreXM.

Qualtrics CoreXM provides a single platform for data collection and analysis across every part of your business — from customer feedback to product concept testing. What’s more, you can integrate it with your existing tools and services thanks to a flexible API.

Qualtrics CoreXM offers you as much or as little power and complexity as you need, so whether you’re running simple surveys or more advanced forms of research, it can deliver every time.



What is causal-comparative research: Definition, types & methods

Defne Çobanoğlu

Like most people, you probably learned about World War I and its short-term and long-term causes. But no nation issued a declaration stating its reasons. Researchers analyzed the events and drew cause-and-effect relationships between variables. Thanks to their efforts, the war's main causes and starting points are now clearly defined.

The research method used to investigate the causes of WWI is causal-comparative research. In this article, we explain causal-comparative research, its types, its advantages and disadvantages, and some examples. Let us get started with the definition!

  • What is causal-comparative research?

Causal-comparative research is a research method in which the researcher tries to find out whether there is a cause-and-effect relationship between independent and dependent variables. In other words, the researcher wants to know whether a change in one thing affects another, and if so, why.

The researcher can look at past events and try to draw conclusions and cause-and-effect relationships from them. When that is not possible, they can instead collect information about a group of participants and observe the changes over the long run. Let us get into the types of causal-comparative research in detail:

  • Types of causal-comparative research

Causal-comparative research types

Even though the main objective of a causal-comparative study is to draw cause-and-effect relationships, how the researcher does so can vary, since practical limitations may constrain the study. Causal-comparative research design is divided into two main groups. These research designs are:

1 - Retrospective comparative research

A retrospective comparative study examines and compares existing data to learn about the relations, patterns, or outcomes of past events and historical periods. In this approach, researchers collect data on past events and look for results and patterns. The method is mainly used when a prospective comparative study is impossible for practical, ethical, or logistical reasons.

2 - Prospective comparative research

A prospective comparative study collects information from a group of participants over a long period. The researchers make predictions about the future, then follow the participants and observe the changes, outcomes, or developments. The main goal of this study is to see how the initial conditions change and affect each other.

  • Causal-comparative research examples

The nature of causal-comparative study design makes it possible to study and make a hypothesis on all kinds of past events and occurrences. When there are multiple variables, researchers can try to make sense of how different variables affect the outcome of situations. Now let us see some causal-comparative study method examples:

Causal-comparative research example #1

For example, let’s imagine that a researcher wants to figure out whether classroom sizes affect students' exam results. In this case, the classroom size is the independent variable, and the effect on academic performance is the dependent variable. The researcher can compare the exam results of students from classes of varying sizes to see if there is a correlation between the two.
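
To make this concrete, here is a minimal Python sketch of how such a comparison might look. The exam scores, the class-size cutoffs, and the use of Welch's t statistic are all illustrative assumptions, not data from a real study:

```python
from statistics import mean, variance

# Hypothetical exam scores (out of 100), gathered retrospectively:
# the researcher did not assign students to class sizes.
small_classes = [78, 85, 90, 72, 88, 81, 79, 92]  # e.g., <= 15 students
large_classes = [70, 65, 74, 80, 68, 72, 77, 66]  # e.g., >= 30 students

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5

t = welch_t(small_classes, large_classes)
print(f"mean(small)={mean(small_classes):.1f}  "
      f"mean(large)={mean(large_classes):.1f}  t={t:.2f}")
```

A large |t| suggests the difference between groups is unlikely to be chance alone, but because class size was not randomly assigned, it still only hints at a cause rather than proving one.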

Causal-comparative research example #2

There may or may not be a difference in leadership styles between men and women, and it is possible to explore the question by looking at various examples. The researcher can collect data on leadership methods from both female and male leaders and then compare the information between the two groups.

  • Advantages and disadvantages of causal-comparative research

A causal-comparative study design may be the perfect method for one researcher yet unsuitable for another; which method to use depends on the aim of the study. To make an informed decision, the researcher should be aware of the advantages and disadvantages of causal-comparative design.

Advantages of causal-comparative research

  • This type of study helps identify the causes of occurrences.
  • It is useful when experimentation is not possible.
  • As this research type relies on existing data or natural occurrences, there is no need for experimentation. Therefore, it is cost-effective.
  • The findings of a causal-comparative research study are good for creating a hypothesis.
  • It is an effective method to make sense of past events to be prepared for the future.

Disadvantages of causal-comparative research

  • Randomization is not possible in this type of study.
  • There is a lack of control over independent variables.
  • As with other research methodologies, this type of research is also prone to researcher bias. Subject-selection bias may be unavoidable.
  • When preexisting characteristics and events are studied, ethical issues may arise, especially if the data is sensitive.

  • Frequently asked questions about causal-comparative research

Is causal-comparative research quantitative or qualitative?

Causal-comparative research is mostly quantitative, as it yields factual, numerical data. After all, its primary goal is to find out whether there is a statistically noticeable difference or relation between conditions based on naturally occurring independent variables. But the method also provides qualitative data, as it answers “why” questions.

Causal-comparative research vs. correlational research

The main difference between causal-comparative research and correlational research is that causal-comparative research studies two or more groups and one independent variable, whereas correlational research observes and studies two or more variables within one group.

Causal-comparative research vs. experimental research

The difference between experimental and causal-comparative study design is a big one. In an experimental study, participants are randomly assigned to groups. In a causal-comparative study design, however, the participants are already in different groups because the events have already happened. In other instances, natural events are studied without human intervention.

Causal-comparative research vs. quasi-experimental research

Both causal-comparative studies and quasi-experimental studies are used to explore and identify cause-and-effect relationships, and both are non-experimental methods. Causal-comparative research design aims to find causal connections between groups based on naturally occurring independent factors, while quasi-experimental research has more experimental elements, such as partial control over subjects and the use of comparison groups.

What is the sample size for causal-comparative research?

The best sample size for causal-comparative research depends on a number of factors, such as the purpose of the research, the research design, and practical limitations. There is no universally right or wrong sample size for a causal-comparative study; it changes according to the nature of the study.
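
As an illustration of how the numbers can be reasoned about, the sketch below applies the standard normal-approximation formula for comparing two group means. The chosen effect size, alpha, and power are arbitrary example inputs, not recommendations:

```python
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided comparison
    of two group means (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = 0.80
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A "medium" standardized difference (Cohen's d = 0.5):
print(round(n_per_group(0.5)))  # roughly 63 participants per group
```

Note how the required sample size shrinks as the expected effect grows: detecting a subtle difference between pre-existing groups demands far more participants than detecting a large one.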

What are the limitations of causal-comparative research?

As effective as it is for identifying cause-and-effect relationships between variables, causal-comparative research also has its limits. For example, randomization cannot be done, and there is a lack of control over independent variables.

  • Final words

There are times when experimentation can be used, and there are times when it is not possible for ethical or practical reasons. In that case, analyzing events and groups of people is a good way to define the cause-and-effect relationship between two different variables. The researcher can look into the details of past events to draw conclusions, or they can find a defined group to observe and study them long-term.

In this article, we have gathered information on causal-comparative research to give a good idea of the research method. For further information on different research types and for all your research needs, don’t forget to visit our other articles!

Defne is a content writer at forms.app. She is also a translator specializing in literary translation. Defne loves reading, writing, and translating professionally and as a hobby. Her expertise lies in survey research, research methodologies, content writing, and translation.



Causal Comparative Research: Insights and Implications

David Costello

Diving into the realm of research methodologies, one encounters a variety of approaches tailored for specific inquiries. Causal Comparative Research, at its core, refers to a research design aimed at identifying and analyzing causal relationships between variables when the researcher cannot actively manipulate them. Instead of manipulating variables as in experimental research, this method examines existing differences between or among groups to derive potential causes.

Its significance in the academic and research arena is multifaceted. For scenarios where experimental designs are either infeasible or unethical, Causal Comparative Research provides an alternative pathway to glean insights. This approach bridges the gap between mere observational studies and those requiring strict control, offering researchers a valuable tool to unearth potential causal links in a myriad of contexts. By understanding these causal links, scholars, policymakers, and professionals can make more informed decisions and build stronger theories, further enriching our collective knowledge base.

Background and evolution

Causal Comparative Research, while not as old as some other research methodologies, has roots deeply embedded in the quest for understanding relationships without direct manipulation. The method blossomed in fields such as education, sociology, and psychology during times when researchers confronted questions of causality. Over time, as the academic community acknowledged the need to investigate causal relationships in naturally occurring group differences, this method gained traction.

What sets Causal Comparative Research apart from other methodologies is its unique stance on causality without direct interference. Experimental research, often hailed as the gold standard for identifying causal relationships, involves deliberate manipulation of independent variables to gauge their effect on dependent variables. This controlled setting allows for clearer cause-and-effect assertions. On the other hand, observational studies, which are purely descriptive, steer clear of making any causal inferences and focus primarily on recording and understanding patterns or phenomena as they naturally occur.

Yet, nestled between these two methodologies, Causal Comparative Research carves its niche. It aims to identify potential causes by examining existing differences between or among groups. While it doesn't offer the direct control of an experiment, it delves deeper than a mere observational approach by trying to understand the "why" behind observed differences. In doing so, it offers a unique blend of retrospective investigation with a pursuit for causality, providing researchers with a versatile tool in their investigative arsenal.

Key characteristics

Causal Comparative Research is distinguished by a unique set of features that demarcate its approach from other research methodologies. These characteristics not only define its operational dynamics but also guide its potential applications and insights. By understanding these foundational traits, researchers can effectively harness the method's strengths and navigate its nuances.

Non-manipulation of variables

One of the foundational attributes of Causal Comparative Research is the non-manipulation of variables. Rather than actively intervening or changing conditions, researchers in this paradigm focus on studying groups as they naturally present themselves. This means the intrinsic differences between groups, which have already emerged, become the central focus.

Such a non-interventionist approach allows for real-world applicability and reduces the artificiality sometimes present in controlled experiments. However, this comes at the cost of being less definitive about causal relationships since the conditions aren't being manipulated directly by the researcher.

By studying pre-existing conditions and group differences, researchers aim to unearth potential causative factors or trends that may have otherwise gone unnoticed in a more controlled setting.

Retrospective in nature

Causal Comparative Research is inherently retrospective. Instead of setting up conditions and predicting future outcomes, researchers using this method look backward, aiming to identify what might have caused the current differences between groups.

This backward-looking approach offers a distinct vantage point. It allows researchers to harness historical data, past events, and already established patterns to discern potential causal relationships. While this method doesn't provide as concrete causative conclusions as prospective studies, it provides crucial insights into historical causative factors.

Understanding the past is vital in many academic fields. This retrospective nature provides a pathway to delve into historical causality, offering insights that can guide future investigations and decisions.

Relies on existing differences between or among groups

The very essence of Causal Comparative Research is rooted in the examination of existing differences. Instead of creating distinct groups through manipulation, researchers study naturally occurring group differences.

These existing distinctions can arise from a multitude of factors, be it cultural, environmental, socio-economic, or even genetic. The goal is to discern whether these differences can hint at underlying causal relationships or if they are mere coincidences.

The reliance on pre-existing differences is both a strength and a limitation. It ensures genuine applicability to real-world scenarios but also introduces potential confounding variables that researchers must be cautious of while interpreting results.

Advantages of causal comparative research

Offering a unique blend of observational and experimental techniques, Causal Comparative Research is tailored for situations demanding flexibility without compromising on the search for causal insights. Here is why many researchers consider it a crucial tool in their investigative arsenal.

Useful when experimental research is not feasible

Causal Comparative Research emerges as a strong alternative in scenarios where experimental research is unfeasible. Experimental research, while robust, often requires conditions or manipulations that might not be viable or ethical, especially in fields like psychology, sociology, or education.

In such situations, relying on naturally occurring differences provides researchers a viable avenue to still investigate potential causal relationships without directly intervening or risking harm. Thus, it offers a middle ground between pure observation and controlled experimentation, allowing for causal inquiries in challenging contexts.

Provides valuable insights in a short amount of time

One of the standout attributes of Causal Comparative Research is its efficiency. Given that it focuses on pre-existing differences, there's no need to wait for conditions to develop or results to manifest over extended periods.

This means that researchers can glean valuable insights in a relatively shorter time frame compared to longitudinal or prospective experimental designs. For pressing questions or time-sensitive scenarios, this method offers timely data and conclusions. Its swiftness does not compromise depth, ensuring that the insights derived are both timely and profound.

Can offer preliminary evidence before experimental designs are implemented

Before diving into a full-fledged experimental design, researchers often seek preliminary evidence or hints to justify their hypotheses or the feasibility of the experiment. Causal Comparative Research serves this purpose aptly.

By examining existing differences and drawing potential causal links, it provides an initial layer of evidence. This preliminary data can guide the structuring of more elaborate, controlled experiments, ensuring they're grounded in prior findings. Thus, it acts as a stepping stone, paving the way for more rigorous research designs by providing an initial overview of potential causal links.

Limitations and challenges

Every research methodology, regardless of its strengths, comes with its set of limitations and challenges. Causal Comparative Research, while flexible and versatile, is no exception. Before embracing its advantages, it's imperative for researchers to be acutely aware of its potential pitfalls and the nuances that might influence their findings.

Cannot definitively establish cause-and-effect relationships

While Causal Comparative Research offers valuable insights into potential causal relationships, it does not provide definitive cause-and-effect conclusions. Without direct manipulation of variables, it's challenging to ascertain a clear causative link. This inherent limitation means that, at best, findings can suggest probable causes but cannot confirm them with the same certainty as experimental research.

Potential for confounding variables

Given the reliance on naturally occurring group differences, there's a heightened risk of confounding variables influencing the outcomes. These are external factors that might affect the dependent variable, clouding the clarity of potential causal links. Researchers must remain vigilant, identifying and accounting for these variables to ensure the study's findings remain as untainted as possible by external influences.
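
One common safeguard, sketched below in Python, is to stratify the comparison on a suspected confounder so that groups are compared within similar subpopulations. The records, group labels, and age bands are wholly invented for illustration:

```python
# Hypothetical records: two pre-existing groups, with age as a
# suspected confounder.
records = [
    {"group": "A", "age": "young", "score": 70},
    {"group": "A", "age": "young", "score": 74},
    {"group": "A", "age": "older", "score": 60},
    {"group": "B", "age": "young", "score": 78},
    {"group": "B", "age": "older", "score": 64},
    {"group": "B", "age": "older", "score": 62},
]

# Compare groups within each age stratum rather than across the whole
# pool, so that age differences cannot masquerade as a group effect.
for band in ("young", "older"):
    means = {}
    for g in ("A", "B"):
        scores = [r["score"] for r in records
                  if r["group"] == g and r["age"] == band]
        means[g] = sum(scores) / len(scores)
    print(band, means)
```

Stratification does not eliminate confounding from unmeasured factors, but it prevents the measured confounder from driving the apparent group difference.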

Difficulty in ensuring group equivalency

Ensuring that the groups under study are equivalent is paramount in Causal Comparative Research. Any intrinsic group differences, other than the ones being studied, can skew results and interpretations. This challenge underscores the importance of careful selection and meticulous analysis to minimize the impact of non-equivalent groups on the research findings.

Steps in conducting causal comparative research

The process of Causal Comparative Research demands a systematic progression through specific stages to ensure that the research is comprehensive, accurate, and valid. Below is a step-by-step breakdown of this research methodology:

  • Identification of the Research Problem: This initial stage involves recognizing and defining the specific research problem or research question. It forms the foundation upon which the entire research process will be built, making it crucial to be clear, concise, and relevant.
  • Selection of Groups: Once the problem is identified, researchers need to select the groups they wish to compare. These groups should have existing differences relevant to the research question. The accuracy and relevance of group selection directly influence the research's validity.
  • Measurement of the Dependent Variable(s): In this phase, researchers decide on the dependent variables they'll measure. These are the outcomes or effects potentially influenced by the groups' differences. Proper operationalization and measurement scales are essential to ensure that the data collected is accurate and meaningful.
  • Data Collection and Analysis: With everything set up, the actual data collection begins. This could involve surveys, observations, or any other relevant data collection method. After collection, the data undergoes rigorous analysis to identify patterns, differences, or potential causal links.
  • Interpretation and Reporting of Results: Once the analysis is complete, researchers need to interpret the results in the context of the research problem. This interpretation forms the basis of the research's conclusions. Finally, findings are reported, often in the form of academic papers or reports, ensuring that the insights can be shared and critiqued by the broader academic community.

By meticulously following these steps, researchers can navigate the complexities of Causal Comparative Research, ensuring that their investigations are both methodologically sound and academically valuable.
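
The five steps can be condensed into a minimal end-to-end sketch. The records, group labels, and scores below are hypothetical, and the analysis step is reduced to a comparison of group means:

```python
# Step 1: research problem -- do groups A and B differ on some outcome?
# Steps 2-3: hypothetical pre-existing groups and a measured outcome.
records = [
    {"group": "A", "score": 72}, {"group": "A", "score": 75},
    {"group": "A", "score": 70}, {"group": "B", "score": 81},
    {"group": "B", "score": 85}, {"group": "B", "score": 79},
]

# Step 4: collect and analyze -- here, simple group means.
by_group = {}
for r in records:
    by_group.setdefault(r["group"], []).append(r["score"])
means = {g: sum(v) / len(v) for g, v in by_group.items()}

# Step 5: interpret and report -- a difference suggests, but does not
# prove, a causal link, since the groups were not randomly assigned.
print(means, "difference:", round(means["B"] - means["A"], 1))
```

In a real study each step would be far richer (operationalized measures, confound adjustment, significance testing), but the overall flow is the same.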

Key considerations for validity

When conducting Causal Comparative Research, validity remains at the forefront. Ensuring that the research accurately captures and represents the phenomena under study is pivotal for its credibility and utility. Delving into the intricacies of validity, two primary considerations emerge: internal and external validity.

Internal validity concerns

Internal validity pertains to the degree to which the research accurately establishes a cause-and-effect relationship between variables. However, several threats can compromise it, especially in a causal-comparative setup, for instance:

  • Maturation: Refers to changes occurring naturally over time within participants, which could be misconstrued as effects of the studied variable.
  • Testing: Concerns the effects of taking a test multiple times. Participants might improve not because of the variable of interest, but due to familiarity with the test.
  • Instrumentation: Issues arise when the tools or methods used to collect data change or are inconsistent, potentially skewing results.

Addressing these concerns and others is crucial to maintain the research's integrity and ensure that the findings genuinely reflect the causal relationships under scrutiny.

External validity considerations

While internal validity focuses on the research's accuracy within its confines, external validity revolves around the generalizability of the findings. It assesses whether the study's conclusions can be applied to broader contexts, populations, or settings.

One major concern here is the representativeness of the groups studied. If they are too niche or specific, generalizing findings becomes problematic. Additionally, the conditions under which the research is conducted can influence its applicability elsewhere. If the environment, time, or setting is too unique, the findings might not hold true in different scenarios.

Ensuring robust external validity means that the research doesn't just hold academic value, but can also inform real-world practices, policies, and decisions, making its implications far-reaching and impactful.

Illustrative examples of causal comparative research

Across varied disciplines, Causal Comparative Research has been employed to address pressing questions, providing insights into causal factors without the need for direct manipulation. Let's explore a few examples that encapsulate its breadth and significance.

Comparing traditional and online learning outcomes

With the rise of digital platforms, online learning has rapidly grown as a popular alternative to traditional classroom settings. However, discerning the effectiveness of both mediums in terms of student performance and engagement is essential for educators and institutions. Causal Comparative Research provides an apt approach to explore this, not by altering the learning environments but by examining the existing outcomes.

  • Identification of the Research Problem: The primary concern here is understanding the potential causal factors behind the differing success rates or engagement levels of students in traditional classrooms versus online learning platforms.
  • Selection of Groups: Two primary groups would be selected for this study: students who have primarily undergone traditional classroom learning and those who have predominantly experienced online learning. It would be essential to ensure these groups are as comparable as possible in other aspects, such as age, educational level, and background.
  • Measurement of the Dependent Variable(s): The dependent variables might include academic performance (grades or test scores), engagement metrics (participation in class discussions or assignments turned in), and possibly even feedback or satisfaction surveys from students regarding their learning experience.
  • Data Collection and Analysis: Data would be gathered from institutional records, online learning platforms, and potentially direct surveys. Once collected, statistical analyses would be employed to compare the performance and engagement metrics between the two groups, adjusting for any potential confounding variables.
  • Interpretation and Reporting of Results: After analysis, researchers would interpret the data to understand any significant differences in outcomes between traditional and online learners. It's crucial to report findings with the acknowledgment that the research indicates correlation and not necessarily direct causation. Recommendations could be made for educators based on the insights gathered.

In conclusion, while both traditional and online learning environments offer unique benefits, utilizing Causal Comparative Research allows institutions and educators to glean vital insights into their relative effectiveness. Such understanding can guide curriculum development, teaching methodologies, and even future educational investments.
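
As one possible form the data-analysis step could take, the snippet below runs a simple permutation test on invented final grades for the two groups. The numbers, group sizes, and the choice of test are illustrative assumptions only:

```python
import random

random.seed(0)  # reproducible shuffles for the example

# Hypothetical final grades collected retrospectively.
traditional = [74, 81, 69, 77, 85, 72, 79, 70]
online      = [78, 84, 76, 88, 81, 90, 75, 83]

def perm_pvalue(a, b, trials=10_000):
    """Two-sided permutation test on the difference in group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        left, right = pooled[:len(a)], pooled[len(a):]
        if abs(sum(left) / len(left) - sum(right) / len(right)) >= observed:
            hits += 1
    return hits / trials

print(f"p ~= {perm_pvalue(traditional, online):.3f}")
```

A small p-value flags the grade gap as unlikely under random relabeling, but since students chose their learning mode themselves, the result remains correlational.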

Analysis of lifestyle factors in disease prevalence

In contemporary health studies, lifestyle factors like diet, exercise, and stress have often been cited as potential determinants of disease prevalence. With diverse populations adhering to varied lifestyles, understanding the potential influence of these factors on disease rates becomes pivotal for healthcare professionals and policymakers. Causal Comparative Research offers a path to delve into these influences by analyzing existing health outcomes against different lifestyle patterns.

  • Identification of the Research Problem: The primary goal is to determine whether specific lifestyle factors (e.g., sedentary behavior, dietary habits, tobacco use) have a significant influence on the prevalence of certain diseases, such as heart disease, diabetes, or hypertension.
  • Selection of Groups: Groups can be categorized based on distinct lifestyle patterns. For example, groups might consist of individuals who are sedentary versus those who exercise regularly, or those who adhere to a vegetarian diet versus those who consume meat regularly.
  • Measurement of the Dependent Variable(s): The dependent variable would be the prevalence or incidence of specific diseases in each group. This can be measured using health records, self-reported incidents, or clinical diagnoses.
  • Data Collection and Analysis: Data can be sourced from health databases, patient surveys, or direct health check-ups. Statistical tools can then be applied to identify any significant disparities in disease rates between the varied lifestyle groups, accounting for potential confounders like age, genetics, or socio-economic status.
  • Interpretation and Reporting of Results: After the data analysis, findings would elucidate any notable correlations between lifestyle factors and disease prevalence. It's vital to emphasize that this research would indicate associations, not direct causation. Still, such insights could be invaluable for health promotion campaigns and policy formulation.

To conclude, by leveraging Causal Comparative Research in analyzing lifestyle factors and their potential influence on disease rates, healthcare stakeholders can be better equipped with knowledge that informs public health strategies and individual lifestyle recommendations.
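
A minimal sketch of the analysis stage: compute the prevalence in each lifestyle group and the resulting relative risk. The counts and group labels are invented purely for illustration:

```python
# Hypothetical counts drawn from health records.
sedentary = {"cases": 48, "total": 400}
active    = {"cases": 21, "total": 400}

def prevalence(group):
    """Fraction of the group with the disease."""
    return group["cases"] / group["total"]

relative_risk = prevalence(sedentary) / prevalence(active)
print(f"sedentary: {prevalence(sedentary):.1%}  "
      f"active: {prevalence(active):.1%}  RR: {relative_risk:.2f}")
```

A relative risk well above 1 flags an association worth investigating, but, as the text stresses, it indicates an association rather than proof of causation.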

Resilience levels in trauma survivors vs. non-trauma individuals

Resilience—the capacity to recover quickly from difficulties and maintain mental health—has piqued the interest of psychologists, especially when comparing trauma survivors to those who haven't experienced trauma. The ability to understand the underlying factors contributing to resilience can pave the way for better therapeutic approaches and interventions.

  • Identification of the Research Problem: Determining whether individuals who have experienced trauma have different resilience levels compared to those who haven't.
  • Selection of Groups: One group would consist of individuals who have experienced significant traumatic events (such as natural disasters, personal assaults, or wartime experiences), and the second group would comprise individuals with no history of significant trauma.
  • Measurement of the Dependent Variable(s): Resilience levels would be the primary dependent variable, measured using standardized resilience scales like the Connor-Davidson Resilience Scale (CD-RISC).
  • Data Collection and Analysis: Participants from both groups would complete the chosen resilience scale. Data would then be analyzed to determine if there are significant differences in resilience scores between the two groups. Covariates like age, gender, socioeconomic status, and mental health history might be controlled for to enhance the study's validity.
  • Interpretation and Reporting of Results: The findings would indicate whether trauma survivors, on average, have higher, lower, or comparable resilience levels to their non-trauma counterparts. This would provide valuable insights into the potential protective factors or coping strategies that trauma survivors might develop.

The outcomes of this study can significantly influence therapeutic strategies and post-trauma interventions, ensuring that individuals who've faced traumatic events receive tailored care that acknowledges their unique psychological landscape.

Impact of family structure on child development outcomes

Family structures have undergone significant evolution over the decades. With varying family setups—from nuclear families to single-parent households to extended family living arrangements—the question arises: How do these different structures impact child development? Delving into this query provides insights crucial for educators, therapists, and policymakers.

  • Identification of the Research Problem: Investigate the potential differences in child development outcomes based on varying family structures.
  • Selection of Groups: Children would be categorized based on their family structure: nuclear families, single-parent households, extended family households, and other non-traditional structures.
  • Measurement of the Dependent Variable(s): Child development outcomes, which could include academic performance, socio-emotional development, and behavioral patterns. These would be measured using standardized tests, behavioral assessments, and teacher or caregiver reports.
  • Data Collection and Analysis: Data would be collected from schools, families, and relevant institutions. Statistical methods would then be used to determine significant differences in developmental outcomes across the different family structures, controlling for factors like socio-economic status, parental education, and location.
  • Interpretation and Reporting of Results: Findings would detail whether and how family structures play a pivotal role in shaping child development. Results could reveal, for instance, if children from extended family structures exhibit better socio-emotional skills due to increased interactions with varied age groups within the family.
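Because this design compares more than two groups, a one-way ANOVA is the natural first test in the analysis step. The sketch below computes the F statistic from scratch; the group labels and scores are invented for illustration.

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA across independent groups."""
    all_scores = [score for group in groups for score in group]
    grand_mean = mean(all_scores)
    k, n = len(groups), len(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((score - mean(g)) ** 2 for g in groups for score in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical academic scores by family structure (illustration only).
nuclear = [80, 82, 84]
single_parent = [70, 72, 74]
extended = [78, 80, 82]

f_stat = one_way_f([nuclear, single_parent, extended])
print(round(f_stat, 1))  # 21.0; compare against the critical F for (2, 6) df
```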

Understanding the nuances of how family structure affects child development can guide interventions, curricula designs, and policies to cater better to the diverse needs of children, ensuring every child receives the support they require to thrive.

Impact of organizational structures on employee productivity

As businesses evolve, they experiment with different organizational structures, from traditional hierarchies to flat structures to matrix setups. How do these varying structures influence employee productivity and satisfaction? Exploring this can provide businesses valuable insights to optimize performance and employee morale.

  • Identification of the Research Problem: Determine the effect of different organizational structures on employee productivity.
  • Selection of Groups: Employees from diverse firms, categorized based on their company's organizational structure: hierarchical, flat, matrix, and hybrid structures.
  • Measurement of the Dependent Variable(s): Employee productivity could be gauged through metrics like task completion rate, project delivery timelines, and output quality. Additionally, employee satisfaction surveys might be incorporated as secondary data.
  • Data Collection and Analysis: Data would be collected from employee performance metrics and satisfaction surveys across different companies. Advanced statistical methods would be employed to analyze potential variations in productivity and satisfaction across organizational structures, accounting for potential confounders.
  • Interpretation and Reporting of Results: Findings might indicate, for instance, that flat structures promote higher employee autonomy and satisfaction but might face challenges in larger teams due to potential communication breakdowns.

By discerning the relationship between organizational structure and employee productivity, businesses can make informed decisions on organizational design, ensuring optimal output while fostering a conducive work environment.

Best practices

Ensuring the validity and reliability of your Causal Comparative Research findings is paramount. Implementing best practices not only adds rigor to the research but also increases the trustworthiness of the results. Below are some practices to uphold when conducting Causal Comparative Research.

Ensuring representative samples

One of the primary pillars of credible research is the selection of a representative sample. A sample that genuinely mirrors the larger population ensures that findings can be more confidently generalized. In Causal Comparative Research, the groups being compared should ideally capture the broader dynamics and diversity of the populations they represent.

To ensure a representative sample, researchers should be wary of biases during selection. This includes avoiding convenience sampling unless it's justified. Stratified random sampling or quota sampling can help in ensuring that different subgroups within the population are adequately represented.

Furthermore, the size of the sample plays a crucial role. While a larger sample can often yield more reliable results, it's imperative to ensure that it remains manageable and aligns with the study's logistical and financial constraints.
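One common way to operationalise the stratified sampling mentioned above is to draw the same sampling fraction from each subgroup. A minimal sketch, where the population records and the `region` stratum are hypothetical:

```python
import random

def stratified_sample(population, stratum_of, fraction, seed=42):
    """Sample the same fraction from every stratum so each subgroup
    stays represented in proportion to its size."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 60 urban and 40 rural participants.
population = [{"id": i, "region": "urban" if i < 60 else "rural"}
              for i in range(100)]
sample = stratified_sample(population, lambda p: p["region"], 0.10)
print(len(sample))  # 6 urban + 4 rural = 10
```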

Controlling for extraneous variables

Extraneous variables can introduce noise into the research, obscuring the clarity of potential causal relationships. It's essential to identify potential confounders and control for them, ensuring that they don't unduly influence the outcome.

In Causal Comparative Research, since there's no direct manipulation of variables, the risk of uncontrolled extraneous variables affecting the outcome is heightened. One way to control for these variables is through matching, where participants in different groups are matched based on certain criteria, ensuring that these criteria do not interfere with the results.

Another technique involves statistical control, where advanced analytical methods, such as covariance analysis, are employed to account for the variance caused by extraneous variables.
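As a simplified, single-covariate stand-in for full covariance analysis, one can regress the outcome on the covariate and carry out group comparisons on the residuals. The variable names and numbers below are hypothetical.

```python
from statistics import mean

def residualize(outcome, covariate):
    """Strip the linear effect of one covariate from a list of outcome
    scores (a simplified stand-in for analysis of covariance)."""
    mx, my = mean(covariate), mean(outcome)
    slope = (sum((x - mx) * (y - my) for x, y in zip(covariate, outcome))
             / sum((x - mx) ** 2 for x in covariate))
    return [y - (my + slope * (x - mx)) for y, x in zip(outcome, covariate)]

# Hypothetical outcome scores and participant ages (the extraneous variable).
scores = [55, 62, 60, 70, 78, 77]
ages = [20, 25, 27, 35, 40, 45]

adjusted = residualize(scores, ages)
# 'adjusted' now holds the outcome variation not explained by age;
# group comparisons can proceed on these residuals.
```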

Choosing appropriate statistical tools and techniques

The analysis phase is the heart of the research, where data comes alive and starts narrating a story. Selecting the appropriate statistical tools and techniques is pivotal in ensuring that this story is accurate and meaningful.

In Causal Comparative Research, the choice of statistical analysis largely depends on the nature of the data and the research question. For instance, if you're comparing the means of two groups, a t-test might be appropriate. However, for more than two groups, an ANOVA could be the preferred choice.

Advanced statistical models, such as regression analysis or structural equation modeling, might be employed for more complex research questions. Regardless of the chosen method, it's crucial to ensure that the assumptions of the tests are met and the data are adequately prepared for analysis.

In the landscape of research methodologies, Causal Comparative Research stands out as a compelling blend of observational and quasi-experimental approaches. While it offers the advantage of examining naturally occurring differences without the need for direct manipulation, it comes with its own set of challenges and considerations. As with all research methods, its efficacy lies in the meticulous application of its principles, and the conscious effort to uphold best practices. When executed with rigor, this method provides invaluable insights, bridging the gap between observation and direct experimentation, and helping researchers navigate the complex webs of causality in varied fields.

Header image by Tom Wang.



Chapter 16 Causal Comparative Research How to Design and Evaluate Research in Education 8th


Commentary on Causal Prescriptive Statements

Causal prescriptive statements are necessary in the social sciences whenever there is a mission to help individuals, groups, or organizations improve. Researchers inquire whether some variable or intervention A causes an improvement in some mental, emotional, or behavioural variable B. If they are satisfied that A causes B, then they can take steps to manipulate A in the real world and thereby help people by enhancing B.

In Part 4, we begin a more detailed discussion of some of the methodologies that educational researchers use. We concentrate here on quantitative research, with a separate chapter devoted to group-comparison experimental research, single-subject experimental research, correlational research, causal-comparative research, and survey research. In each chapter, we not only discuss the method in some detail, but we also provide examples of published studies in which the researchers used one of these methods. We conclude each chapter with an analysis of a particular study's strengths and weaknesses.


PhD Thesis Bangalore


Guide to ‘causal-comparative’ research design: Identifying causative relationship between an independent & dependent variable


Most often, when a researcher wants to compare groups under controlled conditions, an experimental (causal) design is used. In a non-experimental setting, however, if a researcher wants to identify the causes or consequences of existing differences between groups of individuals, a causal-comparative design is typically deployed.

Causal-comparative, also known as ex post facto (after the fact) research design, is an approach that attempts to figure out a causative relationship between an independent variable and a dependent variable. It must be noted that the relationship between the independent variable and dependent variable is a suggested relationship, not a proven one, as the researcher does not have complete control over the independent variable.

This method seeks to establish causal relationships between events and circumstances; simply put, it tries to identify the reasons or causes behind specific occurrences or non-occurrences. Grounded in Mill's canons of agreement and difference, causal-comparative research involves comparison, in contrast to correlational studies, which look at relationships.

For example, you may wish to compare the body composition of individuals trained with exercise machines versus individuals trained only with free weights. Here you will not manipulate any variables, but only investigate the impact of exercise machines and free weights on body composition. However, since factors such as training programmes, diet, and aerobic conditioning also affect body composition, the study must be scrutinised to determine how these other factors were controlled.

This research design is further segregated into:

  • Retrospective causal-comparative research – The research question is investigated after the effects have already occurred. The researcher aims to determine how one variable may have affected another.
  • Prospective causal-comparative research – This method begins with the causes and progresses by investigating the possible effects of a condition.

How to conduct causal-comparative research? 

The basic outline for performing this type of research is similar to that of other research designs. The steps involved are:

  • Topic selection – Identify and define a specific phenomenon of interest and consider its possible causes or consequences. This method involves selecting two groups that differ on a certain variable of interest.
  • Review the literature – Assess the literature to identify the independent and dependent variables for the study. This process lets you spot extraneous variables that could contribute to a cause-effect relationship.
  • Develop a hypothesis – The hypothesis must define the expected effect of the independent variable on the dependent variable.
  • Selection of comparison groups – Choose groups that differ with regard to the independent variable. This enables you to control extraneous variables and reduce their impact. Here, you can use the matching technique to find groups that differ mainly in the presence of the independent variable.
  • Choosing a tool for variable measurement and data collection – In this type of research, the researcher does not administer a treatment. Instead, data are gathered from surveys, interviews, and similar sources in a way that allows comparisons to be made between the groups.
  • Data analysis – Data are reported as a frequency or mean for each group using descriptive statistics. This is followed by testing for a significant mean difference between the groups using inferential statistics (t-test, chi-square test).
  • Interpretation of results – Avoid stating outright that the independent variable causes the dependent variable. Because of the presence of extraneous variables and the lack of randomisation in participant selection, it is safer to state that the results show a possible cause or effect.
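For the data-analysis step, when the outcome is categorical rather than a mean score, a chi-square test of independence is a common choice. The sketch below computes the statistic for a hypothetical 2×2 table (group × improved/not improved); the counts are invented for illustration.

```python
def chi_square(table):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = group A / group B, columns = improved / not.
table = [[30, 20],
         [18, 32]]
stat = chi_square(table)
print(round(stat, 2))  # 5.77; exceeds 3.84, the .05 critical value for 1 df
```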


So, when should one consider using this research design? 

Typically, a causal-comparative design can be considered as an alternative to an experimental design because it is feasible, affordable, and easy to carry out.

However, in a causal-comparative design, the independent variables cannot be manipulated, unlike in experimental research. For example, if you want to investigate whether ethnicity affects self-esteem, you cannot manipulate the participants' ethnicity. The independent variable is already determined, and hence some other method must be used to investigate the cause.

Threats to the internal validity of the research 

In this type of research, since participants are not randomly selected and assigned to groups, there is a threat to internal validity. Another threat is the inability to manipulate the independent variable.

To counter these threats and strengthen the research, use selection strategies such as matching, comparison of homogeneous subgroups, or statistical control with ANCOVA.
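A matching strategy can be sketched as a greedy nearest-neighbour pairing on a covariate such as age. The participant records below are hypothetical, and real matching procedures are usually more sophisticated (e.g. matching on several covariates or propensity scores).

```python
def match_on_covariate(group_a, group_b, key, tolerance):
    """Greedily pair each member of group_a with the closest unused member
    of group_b whose covariate value lies within the tolerance."""
    unmatched = list(group_b)
    pairs = []
    for a in sorted(group_a, key=key):
        candidates = [b for b in unmatched if abs(key(a) - key(b)) <= tolerance]
        if candidates:
            best = min(candidates, key=lambda b: abs(key(a) - key(b)))
            unmatched.remove(best)
            pairs.append((a, best))
    return pairs

# Hypothetical participants described only by age.
group_a = [{"age": 25}, {"age": 30}, {"age": 40}]
group_b = [{"age": 26}, {"age": 31}, {"age": 50}, {"age": 41}]

pairs = match_on_covariate(group_a, group_b, key=lambda p: p["age"], tolerance=2)
print(len(pairs))  # 3 pairs; the 50-year-old has no match and is set aside
```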

Causal-comparative design includes basic features such as:

  • Involves the selection of two comparison groups to be studied
  • Includes making comparisons between pre-existing groups with regard to the variables of interest
  • Studies variables that cannot be manipulated for practical or ethical reasons
  • Requires less time and lower cost than experimental designs

Although this approach gives you latitude to interpret the data and draw what seems the best conclusion, you risk committing the post hoc fallacy when inferring the relationship. Therefore, pay extra attention when positing a causal link before arriving at a conclusion.


  • Open access
  • Published: 07 May 2021

The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions

  • Benjamin Hanckel 1 ,
  • Mark Petticrew 2 ,
  • James Thomas 3 &
  • Judith Green 4  

BMC Public Health volume  21 , Article number:  877 ( 2021 ) Cite this article


Qualitative Comparative Analysis (QCA) is a method for identifying the configurations of conditions that lead to specific outcomes. Given its potential for providing evidence of causality in complex systems, QCA is increasingly used in evaluative research to examine the uptake or impacts of public health interventions. We map this emerging field, assessing the strengths and weaknesses of QCA approaches identified in published studies, and identify implications for future research and reporting.

PubMed, Scopus and Web of Science were systematically searched for peer-reviewed studies published in English up to December 2019 that had used QCA methods to identify the conditions associated with the uptake and/or effectiveness of interventions for public health. Data relating to the interventions studied (settings/level of intervention/populations), methods (type of QCA, case level, source of data, other methods used) and reported strengths and weaknesses of QCA were extracted and synthesised narratively.

The search identified 1384 papers, of which 27 (describing 26 studies) met the inclusion criteria. Interventions evaluated ranged across: nutrition/obesity ( n  = 8); physical activity ( n  = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation ( n  = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2), and general population health ( n  = 1). The majority of studies ( n  = 24) were of interventions solely or predominantly in high income countries. Key strengths reported were that QCA provides a method for addressing causal complexity; and that it provides a systematic approach for understanding the mechanisms at work in implementation across contexts. Weaknesses reported related to data availability limitations, especially on ineffective interventions. The majority of papers demonstrated good knowledge of cases, and justification of case selection, but other criteria of methodological quality were less comprehensively met.

QCA is a promising approach for addressing the role of context in complex interventions, and for identifying causal configurations of conditions that predict implementation and/or outcomes when there is sufficiently detailed understanding of a series of comparable cases. As the use of QCA in evaluative health research increases, there may be a need to develop advice for public health researchers and journals on minimum criteria for quality and reporting.


Interest in the use of Qualitative Comparative Analysis (QCA) arises in part from growing recognition of the need to broaden methodological capacity to address causality in complex systems [ 1 , 2 , 3 ]. Guidance for researchers for evaluating complex interventions suggests process evaluations [ 4 , 5 ] can provide evidence on the mechanisms of change, and the ways in which context affects outcomes. However, this does not address the more fundamental problems with trial and quasi-experimental designs arising from system complexity [ 6 ]. As Byrne notes, the key characteristic of complex systems is ‘emergence’ [ 7 ]: that is, effects may accrue from combinations of components, in contingent ways, which cannot be reduced to any one level. Asking about ‘what works’ in complex systems is not to ask a simple question about whether an intervention has particular effects, but rather to ask: “how the intervention works in relation to all existing components of the system and to other systems and their sub-systems that intersect with the system of interest” [ 7 ]. Public health interventions are typically attempts to effect change in systems that are themselves dynamic; approaches to evaluation are needed that can deal with emergence [ 8 ]. In short, understanding the uptake and impact of interventions requires methods that can account for the complex interplay of intervention conditions and system contexts.

To build a useful evidence base for public health, evaluations thus need to assess not just whether a particular intervention (or component) causes specific change in one variable, in controlled circumstances, but whether those interventions shift systems, and how specific conditions of interventions and setting contexts interact to lead to anticipated outcomes. There have been a number of calls for the development of methods in intervention research to address these issues of complex causation [ 9 , 10 , 11 ], including calls for the greater use of case studies to provide evidence on the important elements of context [ 12 , 13 ]. One approach for addressing causality in complex systems is Qualitative Comparative Analysis (QCA): a systematic way of comparing the outcomes of different combinations of system components and elements of context (‘conditions’) across a series of cases.

The potential of qualitative comparative analysis

QCA is an approach developed by Charles Ragin [ 14 , 15 ], originating in comparative politics and macrosociology to address questions of comparative historical development. Using set theory, QCA methods explore the relationships between ‘conditions’ and ‘outcomes’ by identifying configurations of necessary and sufficient conditions for an outcome. The underlying logic is different from probabilistic reasoning, as the causal relationships identified are not inferred from the (statistical) likelihood of them being found by chance, but rather from comparing sets of conditions and their relationship to outcomes. It is thus more akin to the generative conceptualisations of causality in realist evaluation approaches [ 16 ]. QCA is a non-additive and non-linear method that emphasises diversity, acknowledging that different paths can lead to the same outcome. For evaluative research in complex systems [ 17 ], QCA therefore offers a number of benefits, including: that QCA can identify more than one causal pathway to an outcome (equifinality); that it accounts for conjectural causation (where the presence or absence of conditions in relation to other conditions might be key); and that it is asymmetric with respect to the success or failure of outcomes. That is, that specific factors explain success does not imply that their absence leads to failure (causal asymmetry).

QCA was designed, and is typically used, to compare data from a medium N (10–50) series of cases that include those with and those without the (dichotomised) outcome. Conditions can be dichotomised in ‘crisp sets’ (csQCA) or represented in ‘fuzzy sets’ (fsQCA), where set membership is calibrated (either continuously or with cut offs) between two extremes representing fully in (1) or fully out (0) of the set. A third version, multi-value QCA (mvQCA), infrequently used, represents conditions as ‘multi-value sets’, with multinomial membership [ 18 ]. In calibrating set membership, the researcher specifies the critical qualitative anchors that capture differences in kind (full membership and full non-membership), as well as differences in degree in fuzzy sets (partial membership) [ 15 , 19 ]. Data on outcomes and conditions can come from primary or secondary qualitative and/or quantitative sources. Once data are assembled and coded, truth tables are constructed which “list the logically possible combinations of causal conditions” [ 15 ], collating the number of cases where those configurations occur to see if they share the same outcome. Analysis of these truth tables assesses first whether any conditions are individually necessary or sufficient to predict the outcome, and then whether any configurations of conditions are necessary or sufficient. Necessary conditions are assessed by examining causal conditions shared by cases with the same outcome, whilst identifying sufficient conditions (or combinations of conditions) requires examining cases with the same causal conditions to identify if they have the same outcome [ 15 ]. However, as Legewie argues, the presence of a condition, or a combination of conditions in actual datasets, are likely to be “‘quasi-necessary’ or ‘quasi-sufficient’ in that the causal relation holds in a great majority of cases, but some cases deviate from this pattern” [ 20 ]. 
Following reduction of the complexity of the model, the final model is tested for coverage (the degree to which a configuration accounts for instances of an outcome in the empirical cases; the proportion of cases belonging to a particular configuration) and consistency (the degree to which the cases sharing a combination of conditions align with a proposed subset relation). The result is an analysis of complex causation, “defined as a situation in which an outcome may follow from several different combinations of causal conditions” [ 15 ] illuminating the ‘causal recipes’, the causally relevant conditions or configuration of conditions that produce the outcome of interest.
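The consistency and coverage measures described above can be computed directly for a crisp-set configuration. A minimal sketch, with invented cases and condition names:

```python
def consistency_and_coverage(cases, configuration):
    """Crisp-set QCA measures for one configuration of conditions.

    consistency: share of cases matching the configuration that show the outcome.
    coverage: share of all outcome-positive cases the configuration accounts for.
    """
    matching = [c for c in cases
                if all(c[cond] == val for cond, val in configuration.items())]
    outcome_cases = [c for c in cases if c["outcome"] == 1]
    overlap = [c for c in matching if c["outcome"] == 1]
    consistency = len(overlap) / len(matching) if matching else 0.0
    coverage = len(overlap) / len(outcome_cases) if outcome_cases else 0.0
    return consistency, coverage

# Invented cases: two dichotomised conditions and a dichotomised outcome.
cases = [
    {"funding": 1, "engagement": 1, "outcome": 1},
    {"funding": 1, "engagement": 1, "outcome": 1},
    {"funding": 1, "engagement": 0, "outcome": 0},
    {"funding": 0, "engagement": 1, "outcome": 1},
    {"funding": 0, "engagement": 0, "outcome": 0},
]

cons, cov = consistency_and_coverage(cases, {"funding": 1, "engagement": 1})
print(cons, round(cov, 2))  # 1.0 0.67
```

Here the configuration funding AND engagement is fully consistent with the outcome but covers only two of the three outcome-positive cases, illustrating equifinality: a different path (case 4) also reaches the outcome.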

QCA, then, has promise for addressing questions of complex causation, and recent calls for the greater use of QCA methods have come from a range of fields related to public health, including health research [ 17 ], studies of social interventions [ 7 ], and policy evaluation [ 21 , 22 ]. In making arguments for the use of QCA across these fields, researchers have also indicated some of the considerations that must be taken into account to ensure robust and credible analyses. There is a need, for instance, to ensure that ‘contradictions’, where cases with the same configurations show different outcomes, are resolved and reported [ 15 , 23 , 24 ]. Additionally, researchers must consider the ratio of cases to conditions, and limit the number of conditions to cases to ensure the validity of models [ 25 ]. Marx and Dusa, examining crisp set QCA, have provided some guidance to the ‘ceiling’ number of conditions which can be included relative to the number of cases to increase the probability of models being valid (that is, with a low probability of being generated through random data) [ 26 ].

There is now a growing body of published research in public health and related fields drawing on QCA methods. This is therefore a timely point to map the field and assess the potential of QCA as a method for contributing to the evidence base for what works in improving public health. To inform future methodological development of robust methods for addressing complexity in the evaluation of public health interventions, we undertook a systematic review to map existing evidence, identify gaps in, and strengths and weakness of, the QCA literature to date, and identify the implications of these for conducting and reporting future QCA studies for public health evaluation. We aimed to address the following specific questions [ 27 ]:

1. How is QCA used for public health evaluation? What populations, settings, methods used in source case studies, unit/s and level of analysis (‘cases’), and ‘conditions’ have been included in QCA studies?

2. What strengths and weaknesses have been identified by researchers who have used QCA to understand complex causation in public health evaluation research?

3. What are the existing gaps in, and strengths and weaknesses of, the QCA literature in public health evaluation, and what implications do these have for future research and reporting of QCA studies for public health?

This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 29 April 2019 ( CRD42019131910 ). A protocol was prepared in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) 2015 statement [ 28 ], and published in 2019 [ 27 ], where the methods are explained in detail. EPPI-Reviewer 4 was used to manage the process and undertake screening of abstracts [ 29 ].

Search strategy

We searched for peer-reviewed published papers in English, which used QCA methods to examine causal complexity in evaluating the implementation, uptake and/or effects of a public health intervention, in any region of the world, for any population. ‘Public health interventions’ were defined as those which aim to promote or protect health, or prevent ill health, in the population. No date exclusions were made, and papers published up to December 2019 were included.

Search strategies used the following phrases “Qualitative Comparative Analysis” and “QCA”, which were combined with the keywords “health”, “public health”, “intervention”, and “wellbeing”. See Additional file  1 for an example. Searches were undertaken on the following databases: PubMed, Web of Science, and Scopus. Additional searches were undertaken on Microsoft Academic and Google Scholar in December 2019, where the first pages of results were checked for studies that may have been missed in the initial search. No additional studies were identified. The list of included studies was sent to experts in QCA methods in health and related fields, including authors of included studies and/or those who had published on QCA methodology. This generated no additional studies within scope, but a suggestion to check the COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) database; this was searched, identifying one further study that met the inclusion criteria [ 30 ]. COMPASSS ( https://compasss.org/ ) collates publications of studies using comparative case analysis.

We excluded studies where no intervention was evaluated, which included studies that used QCA to examine public health infrastructure (e.g. staff training) without a specific health outcome, and papers that report on the prevalence of health issues (e.g. prevalence of child mortality). We also excluded studies of health systems or services interventions where there was no public health outcome.

After retrieval, and removal of duplicates, titles and abstracts were screened by one of two authors (BH or JG). Double screening of all records was assisted by EPPI Reviewer 4’s machine learning function. Of the 1384 papers identified after duplicates were removed, we excluded 820 after review of titles and abstracts (Fig.  1 ). The excluded studies included: a large number of papers relating to ‘quantitative coronary angioplasty’ and some which referred to the Queensland Criminal Code (both of which are also abbreviated to ‘QCA’); papers that reported methodological issues but not empirical studies; protocols; and papers that used the phrase ‘qualitative comparative analysis’ to refer to qualitative studies that compared different sub-populations or cases within the study, but did not include formal QCA methods.

Fig. 1 Flow diagram

Full texts of the 51 remaining studies were screened by BH and JG for inclusion, with 10 papers double coded by both authors, with complete agreement. Uncertain inclusions were checked by the third author (MP). Of the full texts, 24 were excluded because: they did not report a public health intervention ( n  = 18); had used a methodology inspired by QCA, but had not undertaken a QCA ( n  = 2); were protocols or methodological papers only ( n  = 2); or were not published in peer-reviewed journals ( n  = 2) (see Fig.  1 ).

Data were extracted manually from the 27 remaining full texts by BH and JG. Two papers relating to the same research question and dataset were combined, such that analysis was by study ( n  = 26) not by paper. We retrieved data relating to: publication (journal, first author country affiliation, funding reported); the study setting (country/region setting, population targeted by the intervention(s)); intervention(s) studied; methods (aims, rationale for using QCA, crisp or fuzzy set QCA, other analysis methods used); data sources drawn on for cases (source [primary data, secondary data, published analyses], qualitative/quantitative data, level of analysis, number of cases, final causal conditions included in the analysis); outcome explained; and claims made about strengths and weaknesses of using QCA (see Table  1 ). Data were synthesised narratively, using thematic synthesis methods [ 31 , 32 ], with interventions categorised by public health domain and level of intervention.

Quality assessment

There are no reporting guidelines for QCA studies in public health, but there are a number of discussions of best practice in the methodological literature [ 25 , 26 , 33 , 34 ]. These discussions suggest several criteria for strengthening QCA methods that we used as indicators of methodological and/or reporting quality: evidence of familiarity with cases; justification for selection of cases; discussion and justification of set membership score calibration; reporting of truth tables; reporting and justification of solution formula; and reporting of consistency and coverage measures. For studies using csQCA, and claiming an explanatory analysis, we additionally identified whether the number of cases was sufficient for the number of conditions included in the model, using a pragmatic cut-off in line with Marx & Dusa’s guideline thresholds, which indicate how many cases are sufficient for given numbers of conditions to reject a 10% probability that models could be generated with random data [ 26 ].
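The consistency and coverage measures used as quality indicators here can be sketched briefly (the membership scores below are hypothetical; the formulas are Ragin's standard set-theoretic definitions):

```python
# Sketch of set-theoretic consistency and coverage, the measures the
# quality criteria above ask authors to report. Membership scores are
# in [0, 1]; crisp sets are the special case {0, 1}.

def consistency(x, y):
    """Degree to which membership in configuration X is a subset of
    membership in outcome Y: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Degree to which outcome Y is accounted for by configuration X:
    sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical data: membership of five cases in a causal configuration (x)
# and in the outcome set (y).
x = [0.9, 0.8, 0.6, 0.2, 0.1]
y = [1.0, 0.7, 0.6, 0.4, 0.3]
print(consistency(x, y))  # ≈ 0.96: X is largely a subset of Y
print(coverage(x, y))     # ≈ 0.83: X accounts for much of Y
```

High consistency supports a claim of sufficiency; coverage indicates how much of the outcome the configuration explains.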

Overview of scope of QCA research in public health

Twenty-seven papers reporting 26 studies were included in the review (Table  1 ). The earliest was published in 2005, and 17 were published after 2015. The majority ( n  = 19) were published in public health/health promotion journals, with the remainder published in other health science ( n  = 3) or in social science/management journals ( n  = 4). The public health domain(s) addressed by each study were broadly coded by the main area of focus. They included nutrition/obesity ( n  = 8); physical activity (PA) (n = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation (n = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2); or general population health ( n  = 1). The majority ( n  = 24) of studies were conducted solely or predominantly in high-income countries (systematic reviews in general searched global sources, but commented that the overwhelming majority of studies were from high-income countries). Country settings included: any ( n  = 6); OECD countries ( n  = 3); USA ( n  = 6); UK ( n  = 6) and one each from Nepal, Austria, Belgium, Netherlands and Africa. These largely reflected the first author’s country affiliations in the UK ( n  = 13); USA ( n  = 9); and one each from South Africa, Austria, Belgium, and the Netherlands. All three studies primarily addressing health inequalities [ 35 , 36 , 37 ] were from the UK.

Eight of the interventions evaluated were individual-level behaviour change interventions (e.g. weight management interventions, case management, self-management for chronic conditions); eight evaluated policy/funding interventions; five explored settings-based health promotion/behaviour change interventions (e.g. schools-based physical activity intervention, store-based food choice interventions); three evaluated community empowerment/engagement interventions, and two studies evaluated networks and their impact on health outcomes.

Methods and data sets used

Fifteen studies used crisp sets (csQCA), 11 used fuzzy sets (fsQCA). No study used mvQCA. Eleven studies included additional analyses of the datasets drawn on for the QCA, including six that used qualitative approaches (narrative synthesis, case comparisons), typically to identify cases or conditions for populating the QCA; and four reporting additional statistical analyses (meta-regression, linear regression) to either identify differences overall between cases prior to conducting a QCA (e.g. [ 38 ]) or to explore correlations in more detail (e.g. [ 39 ]). One study used an additional Boolean configurational technique to reduce the number of conditions in the QCA analysis [ 40 ]. No studies reported aiming to compare the findings from the QCA with those from other techniques for evaluating the uptake or effectiveness of interventions, although some [ 41 , 42 ] were explicitly using the study to showcase the possibilities of QCA compared with other approaches in general. Twelve studies drew on primary data collected specifically for the study, with five of those additionally drawing on secondary data sets; five drew only on secondary data sets, and nine used data from systematic reviews of published research. Seven studies drew primarily on qualitative data, generally derived from interviews or observations.

Many studies were undertaken in the context of one or more trials, which provided evidence of effect. Within single trials, this was generally for a process evaluation, with cases being trial sites. Fernald et al.’s study, for instance, was in the context of a trial of a programme to support primary care teams in identifying and implementing self-management support tools for their patients, which measured patient and health care provider level outcomes [ 43 ]. The QCA reported here used qualitative data from the trial to identify a set of necessary conditions for health care provider practices to implement the tools successfully. In studies drawing on data from systematic reviews, cases were always at the level of intervention or intervention component, with data included from multiple trials. Harris et al., for instance, undertook a mixed-methods systematic review of school-based self-management interventions for asthma, using meta-analysis methods to identify effective interventions and QCA methods to identify which intervention features were aligned with success [ 44 ].

The largest number of studies ( n  = 10), including all the systematic reviews, analysed cases at the level of the intervention, or a component of the intervention; seven analysed organisational level cases (e.g. school class, network, primary care practice); five analysed sub-national region level cases (e.g. state, local authority area), and two each analysed country or individual level cases. Sample sizes ranged from 10 to 131, with no study having small N (< 10) sample sizes, four having large N (> 50) sample sizes, and the majority (22) being medium N studies (in the range 10–50).

Rationale for using QCA

Most papers reported a rationale for using QCA that mentioned ‘complexity’ or ‘context’, including: noting that QCA is appropriate for addressing causal complexity or multiple pathways to outcome [ 37 , 43 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]; noting the appropriateness of the method for providing evidence on how context impacts on interventions [ 41 , 50 ]; or the need for a method that addressed causal asymmetry [ 52 ]. Three stated that the QCA was an ‘exploratory’ analysis [ 53 , 54 , 55 ]. In addition to the empirical aims, several papers (e.g. [ 42 , 48 ]) sought to demonstrate the utility of QCA, or to develop QCA methods for health research (e.g. [ 47 ]).

Reported strengths and weaknesses of approach

There was general agreement about the strengths of QCA: specifically, that it was a useful tool for addressing complex causality, providing a systematic approach to understanding the mechanisms at work in implementation across contexts [ 38 , 39 , 43 , 45 , 46 , 47 , 55 , 56 , 57 ], particularly as they relate to (in)effective intervention implementation [ 44 , 51 ] and the evaluation of interventions [ 58 ], or “where it is not possible to identify linearity between variables of interest and outcomes” [ 49 ]. Authors highlighted the strengths of QCA as providing possibilities for examining complex policy problems [ 37 , 59 ]; for testing existing as well as new theory [ 52 ]; and for identifying aspects of interventions which had not previously been perceived as critical [ 41 ] or which may have been missed when drawing on statistical methods that use, for instance, linear additive models [ 42 ]. The strengths of QCA in terms of providing useful evidence for policy were flagged in a number of studies, particularly where the causal recipes suggested that conventional assumptions about effectiveness were not confirmed. Blackman et al., for instance, in a series of studies exploring why unequal health outcomes had narrowed in some areas of the UK and not others, identified poorer outcomes in settings with ‘better’ contracting [ 35 , 36 , 37 ]; Harting found, contrary to theoretical assumptions about the necessary conditions for successful implementation of public health interventions, that a multisectoral network was not a necessary condition [ 30 ].

Weaknesses reported included the limitations of QCA in general for addressing complexity, as well as specific limitations with either the csQCA or the fsQCA methods employed. One general concern discussed across a number of studies was the problem of limited empirical diversity, which resulted in: limitations in the possible number of conditions included in each study, particularly with small N studies [ 58 ]; missing data on important conditions [ 43 ]; or limited reported diversity (where, for instance, data were drawn from systematic reviews, reflecting publication biases which limit reporting of ineffective interventions) [ 41 ]. Reported methodological limitations in small and intermediate N studies included concerns about the potential that case selection could bias findings [ 37 ].

In terms of potential for addressing causal complexity, the limitations of QCA for identifying unintended consequences, tipping points, and/or feedback loops in complex adaptive systems were noted [ 60 ], as were the potential limitations (especially in csQCA studies) of reducing complex conditions, drawn from detailed qualitative understanding, to binary conditions [ 35 ]. The impossibility of doing this was a rationale for using fsQCA in one study [ 57 ], where detailed knowledge of conditions is needed to make theoretically justified calibration decisions. However, others [ 47 ] make the case that csQCA provides more appropriate findings for policy: dichotomisation forces a focus on meaningful distinctions, including those related to decisions that practitioners/policy makers can action. There is, then, a potential trade-off in providing ‘interpretable results’, but ones which preclude potential for utilising more detailed information [ 45 ]. That QCA does not deal with probabilistic causation was noted [ 47 ].
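The calibration decisions discussed above can be made concrete with a sketch of the ‘direct method’ of fuzzy-set calibration (the condition and its three anchors are hypothetical; the log-odds scaling follows Ragin's direct method):

```python
import math

# Raw values are mapped to log-odds (+3 at the full-membership anchor,
# 0 at the crossover, -3 at the full non-membership anchor) and then to
# membership scores via the logistic function.

def calibrate(value, full_non, crossover, full_mem):
    """Fuzzy-set membership score for a raw value, given three anchors."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_mem - crossover)
    else:
        log_odds = -3.0 * (crossover - value) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical condition: % of eligible population vaccinated, with anchors
# chosen by the researcher (20% = fully out, 50% = crossover, 80% = fully in).
print(calibrate(80, 20, 50, 80))  # ≈ 0.95
print(calibrate(50, 20, 50, 80))  # = 0.5
print(calibrate(20, 20, 50, 80))  # ≈ 0.05
```

The substantive work lies in justifying the anchors theoretically, which is why deep case knowledge is needed for credible calibration.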

Quality of published studies

Assessment of ‘familiarity with cases’ was made subjectively on the basis of study authors’ reports of their knowledge of the settings (empirical or theoretical) and the descriptions they provided in the published paper: overall, 14 were judged as sufficient, and 12 as less than sufficient. Studies which included primary data were more likely to be judged as demonstrating familiarity ( n  = 10) than those drawing on secondary sources or systematic reviews, of which only two were judged as demonstrating familiarity. All studies justified how the selection of cases had been made; for those not using the full available population of cases, this was in general (appropriately) done theoretically: following previous research [ 52 ]; purposively, to include a range of positive and negative outcomes [ 41 ]; or to include a diversity of cases [ 58 ]. In identifying conditions leading to effective/not effective interventions, one purposive strategy was to include a specified percentage or number of the most effective and least effective interventions (e.g. [ 36 , 40 , 51 , 52 ]). Discussion of calibration of set membership scores was judged adequate in 15 cases and inadequate in 11; 10 reported raw data matrices in the paper or supplementary material, and 21 reported truth tables (or explicitly provided details of how to obtain them). The majority ( n  = 21) reported at least some detail on coverage (the number of cases with a particular configuration) and consistency (the percentage of similar causal configurations which result in the same outcome). Only five studies met all six of these quality criteria (evidence of familiarity with cases, justification of case selection, discussion of calibration, reporting truth tables, reporting raw data matrices, reporting coverage and consistency); a further six met at least five of them.

Of the csQCA studies which were not reporting an exploratory analysis, four appeared to have insufficient cases for the large number of conditions entered into at least one of the models reported, with a consequent risk to the validity of the QCA models [ 26 ].

QCA has been widely used in public health research over the last decade to advance understanding of causal inference in complex systems. In this review of published evidence to date, we have identified studies using QCA to examine the configurations of conditions that lead to particular outcomes across contexts. As noted by most study authors, QCA methods promise advantages over probabilistic statistical techniques for examining causation where systems and/or interventions are complex, providing public health researchers with a method to test the multiple pathways (configurations of conditions), and the necessary and sufficient conditions, that lead to desired health outcomes.

The origins of QCA approaches are in comparative policy studies. Rihoux et al.’s review of peer-reviewed journal articles using QCA methods published up to 2011 found that the majority of published examples were from political science and sociology, with fewer than 5% of the 313 studies they identified coming from health sciences [ 61 ]. They also reported few examples of the method being used in policy evaluation and implementation studies [ 62 ]. In the decade since their review of the field [ 61 ], an evaluative body of work has emerged in health: we identified 26 studies in the field of public health alone, with the majority published in public health journals. Across these studies, QCA has been used for evaluative questions in a range of settings and public health domains to identify the conditions under which interventions are implemented and/or have evidence of effect for improving population health. All studies included a series of cases, some with and some without the outcome of interest (such as behaviour change, successful programme implementation, or good vaccination uptake). The dominance of high-income countries in both intervention settings and author affiliations is disappointing, but reflects the disproportionate location of public health research in the global north more generally [ 63 ].

The largest single group of studies included were systematic reviews, using QCA to compare interventions (or intervention components) to identify successful (and non-successful) configurations of conditions across contexts. Here, the value of QCA lies in its potential for synthesis with quantitative meta-synthesis methods to identify the particular conditions or contexts in which interventions or components are effective. As Parrott et al. note, for instance, their meta-analysis could identify probabilistic effects of weight management programmes, and the QCA analysis enabled them to address the “role that the context of the [paediatric weight management] intervention has in influencing how, when, and for whom an intervention mix will be successful” [ 50 ]. However, using QCA to identify configurations of conditions that lead to effective or non-effective interventions across particular areas of population health is an application that does move away in some significant respects from the origins of the method. First, researchers drawing on evidence from systematic reviews for their data are reliant largely on published evidence for information on conditions (such as the organisational contexts in which interventions were implemented, or the types of behaviour change theory utilised). Although guidance for describing interventions [ 64 ] advises key aspects of context are included in reports, this may not include data on the full range of conditions that might be causally important, and review research teams may have limited knowledge of these ‘cases’ themselves. Second, less successful interventions are less likely to be published, potentially limiting the diversity of cases, particularly of cases with unsuccessful outcomes. A strength of QCA is the separate analysis of conditions leading to positive and negative outcomes: this is precluded where there is insufficient evidence on negative outcomes [ 50 ].
Third, when including a range of types of intervention, it can be unclear whether the cases included are truly comparable. A QCA study requires a high degree of theoretical and pragmatic case knowledge on the part of the researcher to calibrate conditions to qualitative anchors: it is reliant on deep understanding of complex contexts, and a familiarity with how conditions interact within and across contexts. Perhaps surprising is that only seven of the studies included here clearly drew on qualitative data, given that QCA is primarily seen as a method that requires thick, detailed knowledge of cases, particularly when the aim is to understand complex causation [ 8 ]. Whilst research teams conducting QCA in the context of systematic reviews may have detailed understanding in general of interventions within their spheres of expertise, they are unlikely to have this for the whole range of cases, particularly where a diverse set of contexts (countries, organisational settings) are included. Making a theoretical case for the valid comparability of such a case series is crucial. There may, then, be limitations in the portability of QCA methods for conducting studies entirely reliant on data from published evidence.

QCA was developed for small and medium N series of cases, and (as in the field more broadly, [ 61 ]), the samples in our studies predominantly had between 10 and 50 cases. However, there is increasing interest in the method as an alternative or complementary technique to regression-oriented statistical methods for larger samples [ 65 ], such as from surveys, where detailed knowledge of cases is likely to be replaced by theoretical knowledge of relationships between conditions (see [ 23 ]). The two larger N (> 100 cases) studies in our sample were an individual level analysis of survey data [ 46 , 47 ] and an analysis of intervention arms from a systematic review [ 50 ]. Larger sample sizes allow more conditions to be included in the analysis [ 23 , 26 ], although for evaluative research, where the aim is developing a causal explanation, rather than simply exploring patterns, there remains a limit to the number of conditions that can be included. As the number of conditions included increases, so too does the number of possible configurations, increasing the chance of unique combinations and of generating spurious solutions with a high level of consistency. As a rule of thumb, once the number of conditions exceeds 6–8 (with up to 50 cases) or 10 (for larger samples), the credibility of solutions may be severely compromised [ 23 ].
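The limited-diversity problem described here can be quantified simply: with k crisp conditions the truth table has 2^k rows, so even a medium-N study leaves most configurations unobserved (a minimal sketch; the specific thresholds cited from Marx and Dusa are not reproduced here):

```python
# With k binary conditions the truth table has 2**k possible configurations;
# when the number of cases N is far below 2**k, most rows are unobserved
# ('logical remainders'), increasing the chance of spurious solutions.

def truth_table_size(k: int) -> int:
    """Number of possible configurations for k crisp conditions."""
    return 2 ** k

def remainder_fraction(n_cases: int, k: int) -> float:
    """Lower bound on the fraction of unobserved configurations,
    even if every case had a unique configuration."""
    size = truth_table_size(k)
    return max(0.0, (size - n_cases) / size)

# A medium-N study of 30 cases with 8 conditions leaves at least
# (256 - 30) / 256, i.e. roughly 88%, of the truth table empty.
print(remainder_fraction(30, 8))
```

This is why, for explanatory analyses, the number of conditions must be kept well below what the sample of cases can credibly cover.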

Strengths and weaknesses of the study

A systematic review has the potential advantages of transparency and rigour; although our search was not exhaustive, it is likely to be representative of the body of research using QCA for evaluative public health research up to 2020. However, a limitation is the inevitable difficulty in operationalising a ‘public health’ intervention. Exclusions on scope are not straightforward, given that most social, environmental and political conditions impact on public health, and arguably a greater range of policy and social interventions (such as fiscal or trade policies) that have been the subject of QCA analyses could have been included, or a greater range of more clinical interventions. However, to enable a manageable number of papers to review, and to restrict our focus to those papers that were most directly applicable to (and likely to be read by) those in public health policy and practice, we operationalised ‘public health interventions’ as those which were likely to be directly impacting on population health outcomes, or on behaviours (such as increased physical activity) where there was good evidence for causal relationships with public health outcomes, and where the primary research question of the study examined the conditions leading to those outcomes. This review has, of necessity, therefore excluded a considerable body of evidence likely to be useful for public health practice in terms of planning interventions, such as studies on how to better target smoking cessation [ 66 ] or foster social networks [ 67 ], where the primary research question was on conditions leading to these outcomes rather than on conditions for outcomes of specific interventions.
Similarly, there is a growing number of descriptive epidemiological studies using QCA to explore factors predicting outcomes across areas as diverse as lupus and quality of life [ 68 ]; length of hospital stay [ 69 ]; constellations of factors predicting injury [ 70 ]; and the role of austerity, crisis and recession in predicting public health outcomes [ 71 ]. Whilst there is undoubtedly useful information to be derived from studying the conditions that lead to particular public health problems, these studies were not directly evaluating interventions, so they were also excluded.

Restricting our search to publications in English and to peer reviewed publications may have missed bodies of work from many regions, and has excluded research from non-governmental organisations using QCA methods in evaluation. As this is a rapidly evolving field, with relatively recent uptake in public health (all our included studies were after 2005), our studies may not reflect the most recent advances in the area.

Implications for conducting and reporting QCA studies

This systematic review has reviewed studies that deployed an emergent methodology, which has no reporting guidelines and has had, to date, a relatively low level of awareness among many potential evidence users in public health. For this reason, many of the studies reviewed were relatively detailed on the methods used, and the rationale for utilising QCA.

We did not assess quality directly, but used indicators of good practice discussed in the QCA methodological literature, largely written for policy studies scholars, and often post-dating the publication dates of studies included in this review. It is also worth noting that, given the relatively recent development of QCA methods, methodological debate is still thriving on issues such as the reliability of causal inferences [ 72 ], alongside more general critiques of the usefulness of the method for policy decisions (see, for instance, [ 73 ]). The authors of studies included in this review also commented directly on methodological development: for instance, Thomas et al. suggest that QCA may benefit from methods development for sensitivity analyses around calibration decisions [ 42 ].

However, we selected quality criteria that, we argue, are relevant for public health research. Justifying the selection of cases, discussing and justifying the calibration of set membership, making data sets available, and reporting truth tables, consistency and coverage are all good practice, in line with the usual requirements of transparency and credibility in methods. When QCA studies aim to provide explanations of outcomes (rather than exploring configurations), it is also vital that they are reported in ways that enhance the credibility of claims made, including justifying the number of conditions included relative to cases. Few of the studies published to date met all these criteria, at least in the papers included here (although additional material may have been provided in other publications). To improve the future discoverability and uptake of QCA methods in public health, and to strengthen the credibility of findings from these methods, we therefore suggest the following criteria should be considered by authors and reviewers for reporting QCA studies which aim to provide causal evidence about the configurations of conditions that lead to implementation or outcomes:

The paper title and abstract state the QCA design;

The sampling unit for the ‘case’ is clearly defined (e.g.: patient, specified geographical population, ward, hospital, network, policy, country);

The population from which the cases have been selected is defined (e.g.: all patients in a country with X condition, districts in X country, tertiary hospitals, all hospitals in X country, all health promotion networks in X province, European policies on smoking in outdoor places, OECD countries);

The rationale for selection of cases from the population is justified (e.g.: whole population, random selection, purposive sample);

There are sufficient cases to provide credible coverage across the number of conditions included in the model, and the rationale for the number of conditions included is stated;

Cases are comparable;

There is a clear justification for how choices of relevant conditions (or ‘aspects of context’) have been made;

There is sufficient transparency for replicability: in line with open science expectations, datasets should be available where possible; truth tables should be reported in publications, and reports of coverage and consistency provided.

Implications for future research

In reviewing methods for evaluating natural experiments, Craig et al. focus on statistical techniques for enhancing causal inference, noting only that what they call ‘qualitative’ techniques (the cited references for these are all QCA studies) require “further studies … to establish their validity and usefulness” [ 2 ]. The studies included in this review have demonstrated that QCA is a feasible method when there are sufficient (comparable) cases for identifying configurations of conditions under which interventions are effective (or not), or are implemented (or not). Given ongoing concerns in public health about how best to evaluate interventions across complex contexts and systems, this is promising. This review has also demonstrated the value of adding QCA methods to the toolbox of techniques for evaluating interventions such as public policies, health promotion programmes, and organisational changes, whether they are implemented in a randomised way or not. Many of the studies in this review have clearly generated useful evidence: whether this evidence has had more or less impact, in terms of influencing practice and policy, or is more valid, than evidence generated by other methods is not known. Validating the findings of a QCA study is perhaps as challenging as validating the findings from any other design, given the absence of any gold standard comparators. Comparisons of the findings of QCA with those from other methods are also typically constrained by the rather different research questions asked, and the different purposes of the analysis. In our review, QCA was typically used alongside other methods to address different questions, rather than to compare methods. However, as the field develops, follow-up studies, which evaluate outcomes of interventions designed in line with conditions identified as causal in prior QCAs, might be useful for contributing to validation.

This review was limited to public health evaluation research: other domains that would be useful to map include health systems/services interventions and studies used to design or target interventions. There is also an opportunity to broaden the scope of the field, particularly for addressing some of the more intractable challenges for public health research. Given the limitations in the evidence base on what works to address inequalities in health, for instance [ 74 ], QCA has potential here, to help identify the conditions under which interventions do or do not exacerbate unequal outcomes, or the conditions that lead to differential uptake or impacts across sub-population groups. It is perhaps surprising that relatively few of the studies in this review included cases at the level of country or region, the traditional level for QCA studies. There may be scope for developing international comparisons for public health policy, and using QCA methods at the case level (nation, sub-national region) of classic policy studies in the field. In the light of debate around COVID-19 pandemic response effectiveness, comparative studies across jurisdictions might shed light on issues such as differential population responses to vaccine uptake or mask use, for example, and these might in turn be considered as conditions in causal configurations leading to differential morbidity or mortality outcomes.

When should QCA be considered?

Public health evaluations typically assess the efficacy, effectiveness or cost-effectiveness of interventions and the processes and mechanisms through which they effect change. There is no perfect evaluation design for achieving these aims. As in other fields, the choice of design will in part depend on the availability of counterfactuals, the extent to which the investigator can control the intervention, and the range of potential cases and contexts [ 75 ], as well as political considerations, such as the credibility of the approach with key stakeholders [ 76 ]. There are inevitably ‘horses for courses’ [ 77 ]. The evidence from this review suggests that QCA evaluation approaches are feasible when there is a sufficient number of comparable cases with and without the outcome of interest, and when the investigators have, or can generate, sufficiently in-depth understanding of those cases to make sense of connections between conditions, and to make credible decisions about the calibration of set membership. QCA may be particularly relevant for understanding multiple causation (that is, where different configurations might lead to the same outcome), and for understanding the conditions associated with both lack of effect and effect. As a stand-alone approach, QCA might be particularly valuable for national and regional comparative studies of the impact of policies on public health outcomes. Alongside cluster randomised trials of interventions, or alongside systematic reviews, QCA approaches are especially useful for identifying core combinations of causal conditions for success and lack of success in implementation and outcome.

Conclusions

QCA is a relatively new approach for public health research, with promise for contributing to much-needed methodological development for addressing causation in complex systems. This review has demonstrated the large range of evaluation questions that have been addressed to date using QCA, including contributions to process evaluations of trials and for exploring the conditions leading to effectiveness (or not) in systematic reviews of interventions. There is potential for QCA to be more widely used in evaluative research, to identify the conditions under which interventions across contexts are implemented or not, and the configurations of conditions associated with effect or lack of evidence of effect. However, QCA will not be appropriate for all evaluations, and cannot be the only answer to addressing complex causality. For explanatory questions, the approach is most appropriate when there is a series of enough comparable cases with and without the outcome of interest, and where the researchers have detailed understanding of those cases, and conditions. To improve the credibility of findings from QCA for public health evidence users, we recommend that studies are reported with the usual attention to methodological transparency and data availability, with key details that allow readers to judge the credibility of causal configurations reported. If the use of QCA continues to expand, it may be useful to develop more comprehensive consensus guidelines for conduct and reporting.

Availability of data and materials

Full search strategies and extraction forms are available by request from the first author.

Abbreviations

COMPASSS: Comparative Methods for Systematic Cross-Case Analysis
csQCA: crisp set QCA
fsQCA: fuzzy set QCA
mvQCA: multi-value QCA
MRC: Medical Research Council
QCA: Qualitative Comparative Analysis
RCT: randomised control trial
PA: Physical Activity

References

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406. https://doi.org/10.1177/1356389015605205 .

Craig P, Katikireddi SV, Leyland A, Popham F. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38(1):39–56. https://doi.org/10.1146/annurev-publhealth-031816-044327 .

Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3. https://doi.org/10.1136/bmj.39569.510521.AD .

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350(mar19 6):h1258. https://doi.org/10.1136/bmj.h1258 .

Pattyn V, Álamos-Concha P, Cambré B, Rihoux B, Schalembier B. Policy effectiveness through configurational and mechanistic lenses: lessons for concept development. J Comp Policy Anal Res Pract. 2020;0:1–18.

Byrne D. Evaluating complex social interventions in a complex world. Evaluation. 2013;19(3):217–28. https://doi.org/10.1177/1356389013495617 .

Gerrits L, Pagliarin S. Social and causal complexity in qualitative comparative analysis (QCA): strategies to account for emergence. Int J Soc Res Methodol 2020;0:1–14, doi: https://doi.org/10.1080/13645579.2020.1799636 .

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32. https://doi.org/10.1080/09581596.2017.1282603 .

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4. https://doi.org/10.1016/S0140-6736(17)31267-9 .

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95. https://doi.org/10.1186/s12916-018-1089-4 .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E and White M, on behalf of the Canadian Institutes of Health Research (CIHR)–National Institute for Health Research (NIHR) Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301. https://doi.org/10.1186/s12916-020-01777-6 .

Ragin. The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press; 1987.

Ragin CC. Redesigning social inquiry: fuzzy sets and beyond. Chicago: University of Chicago Press; 2008. https://doi.org/10.7208/chicago/9780226702797.001.0001 .

Befani B, Ledermann S, Sager F. Realistic evaluation and QCA: conceptual parallels and an empirical application. Evaluation. 2007;13(2):171–92. https://doi.org/10.1177/1356389007075222 .

Kane H, Lewis MA, Williams PA, Kahwati LC. Using qualitative comparative analysis to understand and quantify translation and implementation. Transl Behav Med. 2014;4(2):201–8. https://doi.org/10.1007/s13142-014-0251-6 .

Cronqvist L, Berg-Schlosser D. Chapter 4: Multi-Value QCA (mvQCA). In: Rihoux B, Ragin C, editors. Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Thousand Oaks: SAGE Publications, Inc.; 2009. p. 69–86. https://doi.org/10.4135/9781452226569 .

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225–39.

Legewie N. An introduction to applied data analysis with qualitative comparative analysis (QCA). Forum Qual Soc Res. 2013;14.  https://doi.org/10.17169/fqs-14.3.1961 .

Varone F, Rihoux B, Marx A. A new method for policy evaluation? In: Rihoux B, Grimm H, editors. Innovative comparative methods for policy analysis: beyond the quantitative-qualitative divide. Boston: Springer US; 2006. p. 213–36. https://doi.org/10.1007/0-387-28829-5_10 .

Gerrits L, Verweij S. The evaluation of complex infrastructure projects: a guide to qualitative comparative analysis. Cheltenham: Edward Elgar Pub; 2018. https://doi.org/10.4337/9781783478422 .

Greckhamer T, Misangyi VF, Fiss PC. The two QCAs: from a small-N to a large-N set theoretic approach. In: Configurational Theory and Methods in Organizational Research. Emerald Group Publishing Ltd.; 2013. p. 49–75. https://pennstate.pure.elsevier.com/en/publications/the-two-qcas-from-a-small-n-to-a-large-n-set-theoretic-approach . Accessed 16 Apr 2021.

Rihoux B, Ragin CC. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. SAGE; 2009, doi: https://doi.org/10.4135/9781452226569 .

Marx A. Crisp-set qualitative comparative analysis (csQCA) and model specification: benchmarks for future csQCA applications. Int J Mult Res Approaches. 2010;4(2):138–58. https://doi.org/10.5172/mra.2010.4.2.138 .

Marx A, Dusa A. Crisp-set qualitative comparative analysis (csQCA), contradictions and consistency benchmarks for model specification. Methodol Innov Online. 2011;6(2):103–48. https://doi.org/10.4256/mio.2010.0037 .

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252. https://doi.org/10.1186/s13643-019-1159-5 .

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349(1):g7647. https://doi.org/10.1136/bmj.g7647 .

EPPI-Reviewer 4.0: Software for research synthesis. UK: University College London; 2010.

Harting J, Peters D, Grêaux K, van Assema P, Verweij S, Stronks K, et al. Implementing multiple intervention strategies in Dutch public health-related policy networks. Health Promot Int. 2019;34(2):193–203. https://doi.org/10.1093/heapro/dax067 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1):45. https://doi.org/10.1186/1471-2288-8-45 .

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC methods Programme. 2006.

Wagemann C, Schneider CQ. Qualitative comparative analysis (QCA) and fuzzy-sets: agenda for a research approach and a data analysis technique. Comp Sociol. 2010;9:376–96.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. https://doi.org/10.1017/CBO9781139004244 .

Blackman T, Dunstan K. Qualitative comparative analysis and health inequalities: investigating reasons for differential Progress with narrowing local gaps in mortality. J Soc Policy. 2010;39(3):359–73. https://doi.org/10.1017/S0047279409990675 .

Blackman T, Wistow J, Byrne D. A Qualitative Comparative Analysis of factors associated with trends in narrowing health inequalities in England. Soc Sci Med. 2011;72:1965–74.

Blackman T, Wistow J, Byrne D. Using qualitative comparative analysis to understand complex policy problems. Evaluation. 2013;19(2):126–40. https://doi.org/10.1177/1356389013484203 .

Glatman-Freedman A, Cohen M-L, Nichols KA, Porges RF, Saludes IR, Steffens K, et al. Factors affecting the introduction of new vaccines to poor nations: a comparative study of the haemophilus influenzae type B and hepatitis B vaccines. PLoS One. 2010;5(11):e13802. https://doi.org/10.1371/journal.pone.0013802 .

Ford EW, Duncan WJ, Ginter PM. Health departments’ implementation of public health’s core functions: an assessment of health impacts. Public Health. 2005;119(1):11–21. https://doi.org/10.1016/j.puhe.2004.03.002 .

Lucidarme S, Cardon G, Willem A. A comparative study of health promotion networks: configurations of determinants for network effectiveness. Public Manag Rev. 2016;18(8):1163–217. https://doi.org/10.1080/14719037.2015.1088567 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: re-analysis of a systematic review to identify pathways to effectiveness. Health Expect Int J Public Particip Health Care Health Policy. 2018;21:574–84.

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3(1):67. https://doi.org/10.1186/2046-4053-3-67 .

Fernald DH, Simpson MJ, Nease DE, Hahn DL, Hoffmann AE, Michaels LC, et al. Implementing community-created self-management support tools in primary care practices: multimethod analysis from the INSTTEPP study. J Patient-Centered Res Rev. 2018;5(4):267–75. https://doi.org/10.17294/2330-0698.1634 .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, et al. Best practices in the veterans health Administration’s MOVE! Weight management program. Am J Prev Med. 2011;41(5):457–64. https://doi.org/10.1016/j.amepre.2011.06.047 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) to evaluate a public health policy initiative in the north east of England. Polic Soc. 2013;32(4):289–301. https://doi.org/10.1016/j.polsoc.2013.10.002 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) in public health: a case study of a health improvement service for long-term incapacity benefit recipients. J Public Health. 2014;36(1):126–33. https://doi.org/10.1093/pubmed/fdt047 .

Brunton G, O’Mara-Eves A, Thomas J. The “active ingredients” for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis. J Adv Nurs. 2014;70(12):2847–60. https://doi.org/10.1111/jan.12441 .

McGowan VJ, Wistow J, Lewis SJ, Popay J, Bambra C. Pathways to mental health improvement in a community-led area-based empowerment initiative: evidence from the big local ‘communities in control’ study. England J Public Health. 2019;41(4):850–7. https://doi.org/10.1093/pubmed/fdy192 .

Parrott JS, Henry B, Thompson KL, Ziegler J, Handu D. Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management. J Acad Nutr Diet. 2018;118:1526–1542.e3.

Kien C, Grillich L, Nussbaumer-Streit B, Schoberberger R. Pathways leading to success and non-success: a process evaluation of a cluster randomized physical activity health promotion program applying fuzzy-set qualitative comparative analysis. BMC Public Health. 2018;18(1):1386. https://doi.org/10.1186/s12889-018-6284-x .

Lubold AM. The effect of family policies and public health initiatives on breastfeeding initiation among 18 high-income countries: a qualitative comparative analysis research design. Int Breastfeed J. 2017;12(1):34. https://doi.org/10.1186/s13006-017-0122-0 .

Bianchi F, Garnett E, Dorsel C, Aveyard P, Jebb SA. Restructuring physical micro-environments to reduce the demand for meat: a systematic review and qualitative comparative analysis. Lancet Planet Health. 2018;2(9):e384–97. https://doi.org/10.1016/S2542-5196(18)30188-8 .

Bianchi F, Dorsel C, Garnett E, Aveyard P, Jebb SA. Interventions targeting conscious determinants of human behaviour to reduce the demand for meat: a systematic review with qualitative comparative analysis. Int J Behav Nutr Phys Act. 2018;15(1):102. https://doi.org/10.1186/s12966-018-0729-6 .

Hartmann-Boyce J, Bianchi F, Piernas C, Payne Riches S, Frie K, Nourse R, et al. Grocery store interventions to change food purchasing behaviors: a systematic review of randomized controlled trials. Am J Clin Nutr. 2018;107(6):1004–16. https://doi.org/10.1093/ajcn/nqy045 .

Burchett HED, Sutcliffe K, Melendez-Torres GJ, Rees R, Thomas J. Lifestyle weight management programmes for children: a systematic review using qualitative comparative analysis to identify critical pathways to effectiveness. Prev Med. 2018;106:1–12. https://doi.org/10.1016/j.ypmed.2017.08.025 .

Chiappone A. Technical assistance and changes in nutrition and physical activity practices in the National Early Care and education learning Collaboratives project, 2015–2016. Prev Chronic Dis. 2018;15. https://doi.org/10.5888/pcd15.170239 .

Kane H, Hinnant L, Day K, Council M, Tzeng J, Soler R, et al. Pathways to program success: a qualitative comparative analysis (QCA) of communities putting prevention to work case study programs. J Public Health Manag Pract JPHMP. 2017;23(2):104–11. https://doi.org/10.1097/PHH.0000000000000449 .

Roberts MC, Murphy T, Moss JL, Wheldon CW, Psek W. A qualitative comparative analysis of combined state health policies related to human papillomavirus vaccine uptake in the United States. Am J Public Health. 2018;108(4):493–9. https://doi.org/10.2105/AJPH.2017.304263 .

Breuer E, Subba P, Luitel N, Jordans M, Silva MD, Marchal B, et al. Using qualitative comparative analysis and theory of change to unravel the effects of a mental health intervention on service utilisation in Nepal. BMJ Glob Health. 2018;3(6):e001023. https://doi.org/10.1136/bmjgh-2018-001023 .

Rihoux B, Álamos-Concha P, Bol D, Marx A, Rezsöhazy I. From niche to mainstream method? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011. Polit Res Q. 2013;66:175–84.

Rihoux B, Rezsöhazy I, Bol D. Qualitative comparative analysis (QCA) in public policy analysis: an extensive review. Ger Policy Stud. 2011;7:9–82.

Plancikova D, Duric P, O’May F. High-income countries remain overrepresented in highly ranked public health journals: a descriptive analysis of research settings and authorship affiliations. Crit Public Health 2020;0:1–7, DOI: https://doi.org/10.1080/09581596.2020.1722313 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348(mar07 3):g1687. https://doi.org/10.1136/bmj.g1687 .

Fiss PC, Sharapov D, Cronqvist L. Opposites attract? Opportunities and challenges for integrating large-N QCA and econometric analysis. Polit Res Q. 2013;66:191–8.

Blackman T. Can smoking cessation services be better targeted to tackle health inequalities? Evidence from a cross-sectional study. Health Educ J. 2008;67(2):91–101. https://doi.org/10.1177/0017896908089388 .

Haynes P, Banks L, Hill M. Social networks amongst older people in OECD countries: a qualitative comparative analysis. J Int Comp Soc Policy. 2013;29(1):15–27. https://doi.org/10.1080/21699763.2013.802988 .

Rioja EC, Valero-Moreno S, Giménez-Espert M del C, Prado-Gascó V. The relations of quality of life in patients with lupus erythematosus: regression models versus qualitative comparative analysis. J Adv Nurs. 2019;75(7):1484–92. https://doi.org/10.1111/jan.13957 .

Dy SM, Garg P, Nyberg D, Dawson PB, Pronovost PJ, Morlock L, et al. Critical pathway effectiveness: assessing the impact of patient, hospital care, and pathway characteristics using qualitative comparative analysis. Health Serv Res. 2005;40(2):499–516. https://doi.org/10.1111/j.1475-6773.2005.0r370.x .

Melinder KA, Andersson R. The impact of structural factors on the injury rate in different European countries. Eur J Pub Health. 2001;11(3):301–8. https://doi.org/10.1093/eurpub/11.3.301 .

Saltkjel T, Holm Ingelsrud M, Dahl E, Halvorsen K. A fuzzy set approach to economic crisis, austerity and public health. Part II: How are configurations of crisis and austerity related to changes in population health across Europe? Scand J Public Health. 2017;45(18_suppl):48–55.

Baumgartner M, Thiem A. Often trusted but never (properly) tested: evaluating qualitative comparative analysis. Sociol Methods Res. 2020;49(2):279–311. https://doi.org/10.1177/0049124117701487 .

Tanner S. QCA is of questionable value for policy research. Polic Soc. 2014;33(3):287–98. https://doi.org/10.1016/j.polsoc.2014.08.003 .

Mackenbach JP. Tackling inequalities in health: the need for building a systematic evidence base. J Epidemiol Community Health. 2003;57(3):162. https://doi.org/10.1136/jech.57.3.162 .

Stern E, Stame N, Mayne J, Forss K, Davies R, Befani B. Broadening the range of designs and methods for impact evaluations. Technical report. London: DfiD; 2012.

Pattyn V. Towards appropriate impact evaluation methods. Eur J Dev Res. 2019;31(2):174–9. https://doi.org/10.1057/s41287-019-00202-w .

Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003;57(7):527–9. https://doi.org/10.1136/jech.57.7.527 .

Acknowledgements

The authors would like to thank and acknowledge the support of Sara Shaw, PI of MR/S014632/1 and the rest of the Triple C project team, the experts who were consulted on the final list of included studies, and the reviewers who provided helpful feedback on the original submission.

This study was funded by MRC: MR/S014632/1 ‘Case study, context and complex interventions (Triple C): development of guidance and publication standards to support case study research’. The funder played no part in the conduct or reporting of the study. JG is supported by a Wellcome Trust Centre grant 203109/Z/16/Z.

Author information

Authors and Affiliations

Institute for Culture and Society, Western Sydney University, Sydney, Australia

Benjamin Hanckel

Department of Public Health, Environments and Society, LSHTM, London, UK

Mark Petticrew

UCL Institute of Education, University College London, London, UK

James Thomas

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

Contributions

BH - research design, data acquisition, data extraction and coding, data interpretation, paper drafting; JT – research design, data interpretation, contributing to paper; MP – funding acquisition, research design, data interpretation, contributing to paper; JG – funding acquisition, research design, data extraction and coding, data interpretation, paper drafting. All authors approved the final version.

Corresponding author

Correspondence to Judith Green .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

All authors declare they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Example search strategy.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Hanckel, B., Petticrew, M., Thomas, J. et al. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health 21 , 877 (2021). https://doi.org/10.1186/s12889-021-10926-2

Received : 03 February 2021

Accepted : 22 April 2021

Published : 07 May 2021

DOI : https://doi.org/10.1186/s12889-021-10926-2

  • Public health
  • Intervention
  • Systematic review

BMC Public Health

ISSN: 1471-2458

Causal comparative effectiveness analysis of dynamic continuous-time treatment initiation rules with sparsely measured outcomes and death

Liangyuan Hu

1 Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York City, New York

Joseph W. Hogan

2 Department of Biostatistics, Brown University School of Public Health, Providence, Rhode Island

Abstract

Evidence supporting the current World Health Organization recommendations of early antiretroviral therapy (ART) initiation for adolescents is inconclusive. We leverage a large observational data set to compare, in terms of mortality and CD4 cell count, the dynamic treatment initiation rules for human immunodeficiency virus-infected adolescents. Our approaches extend the marginal structural model for estimating outcome distributions under dynamic treatment regimes, developed in Robins et al. (2008), to allow the causal comparisons of both specific regimes and regimes along a continuum. Furthermore, we propose strategies to address three challenges posed by the complex data set: continuous-time measurement of the treatment initiation process; sparse measurement of longitudinal outcomes of interest, leading to incomplete data; and censoring due to dropout and death. We derive a weighting strategy for continuous-time treatment initiation, use imputation to deal with missingness caused by sparse measurements and dropout, and define a composite outcome that incorporates both death and CD4 count as a basis for comparing treatment regimes. Our analysis suggests that immediate ART initiation leads to lower mortality and higher median values of the composite outcome, relative to other initiation rules.

1 | INTRODUCTION

1.1 | Dynamic treatment regimes and treatment of pediatric human immunodeficiency virus infection

Human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome continues to be one of the leading causes of burdensome disease in adolescents (10–19 years old). Globally, an estimated 2.1 million adolescents were living with HIV in 2013, with most living in sub-Saharan Africa (World Health Organization, 2015). Current World Health Organization (WHO) treatment recommendations for adolescents call for initiation of antiretroviral therapy (ART) upon diagnosis with HIV (World Health Organization, 2015). Previously, and particularly for resource-limited settings, WHO recommendations called for delaying treatment until a clinical benchmark signaling disease progression was reached. For example, the 2013 guidelines recommended initiating ART when CD4 cell count—a marker of immune system function—fell below 500.

For investigating the effectiveness of ART initiation rules, adolescents are a subpopulation of particular interest, notably because of issues related to drug adherence (Mark et al., 2017). For adolescents, early initiation of ART can potentially increase the risk of poor adherence, leading to development of drug resistance, while initiating too late increases mortality and morbidity associated with HIV. Evidence from both clinical trials (Luzuriaga et al., 2004; Violari et al., 2008) and observational studies (Berk et al., 2005; Schomaker et al., 2017) supports the immediate ART initiation rule recommended by the WHO for children under 10 years of age. Conclusive evidence is lacking for adolescents. The 2015 WHO guidelines did not identify any study investigating the clinical outcomes of adolescent-specific treatment initiation strategies (World Health Organization, 2015). A recent large-scale study (Schomaker et al., 2017) of HIV-infected children (1–9 years) and adolescents (10–16 years) found mortality benefit associated with immediate ART initiation among children, but inconclusive results for the adolescents, and recommended further study of this group. Evaluating ART initiation rules specific to adolescents therefore remains important.

Prior to 2015, WHO guidelines for treatment initiation were expressed in the form of a dynamic treatment regime (DTR), formulated as “initiate when a specific marker crosses threshold value q.” In a DTR, the decision to initiate treatment for an individual can depend on evolving treatment, covariate, and marker history (Chakraborty and Murphy, 2014).
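
As a concrete illustration, a threshold rule of this form can be written as a simple function mapping an individual's marker history to an initiation time. The data below are hypothetical; immediate initiation corresponds to a threshold of infinity, so that the rule triggers at the first visit.

```python
# Hedged sketch: a CD4-threshold dynamic treatment regime of the form
# "initiate ART the first time CD4 falls below q". Data are hypothetical.

def initiation_time(cd4_history, q):
    """Return the first measurement time at which the regime 'initiate
    when CD4 < q' triggers treatment, or None if it never triggers.

    cd4_history: list of (time, cd4) pairs in chronological order.
    """
    for time, cd4 in cd4_history:
        if cd4 < q:
            return time
    return None

history = [(0, 650), (3, 540), (6, 480), (9, 410)]  # months, cells/mm^3

print(initiation_time(history, q=500))           # 6: first visit with CD4 < 500
print(initiation_time(history, q=float("inf")))  # 0: immediate initiation
```

Varying q traces out the continuum of regimes that the marginal structural model approach compares.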

In this paper, we use observational data on 1962 HIV-infected adolescents, collected as part of the East Africa IeDEA Consortium (Egger et al., 2012), to compare the effectiveness of CD4-based DTR, with emphasis on comparisons to the strategy of immediate treatment initiation. Our approach is to emulate a clinical trial in which individuals are randomized at baseline and then followed for a fixed amount of time, at which point mortality status and, for those remaining alive, CD4 cell count are ascertained. Hence, the utility function for our comparison involves both mortality and CD4 count among survivors.

In addition to the usual complication of time-varying confounding caused by treatment not being randomly allocated, the structure of the dataset poses three specific challenges that we address here. First, unlike with many published analyses comparing DTR, treatment initiation is measured in continuous time; second, the outcome of interest, CD4, is measured infrequently and at irregularly spaced time intervals, leading to incomplete data at the target measurement time; third, some individuals may not complete follow-up, leading to censoring of both death time and CD4 count.

We use inverse probability weighting (IPW) to handle confounding, and imputation to address missingness due to sparse measurement and censoring. To deal with continuous-time measurement of treatment initiation, we derive continuous-time versions of the relevant probability weights. To deal with missingness, we rely on imputations from a model of the joint distribution of CD4 count and mortality fitted to the observed data. We take a two-step approach: first, the joint model is fitted to the observed data and used to generate (multiple) imputations of missing CD4 and mortality outcomes; second, we apply IPW to the filled-in datasets to generate causal comparisons between different DTR.

1.2 |. Comparing DTR using observational data

Randomized controlled trials can be used to evaluate a DTR of the form described above (see Violari et al ., 2008 for example). Observational data afford large sample sizes and rich information on treatment decisions, but the lack of randomization motivates the need to use specialized methods for drawing valid causal comparisons between regimes. Statistical methods for drawing causal inferences about DTR from observational data include the g-computation algorithm ( Robins, 1986 ), inverse probability weighted estimation of marginal structural models ( Robins et al ., 2008 ), and g-estimation of structural nested models ( Moodie et al ., 2007 ); see Daniel et al . (2013) for a comprehensive review and comparison.

The g-computation formula was first introduced by Robins (1986) and has been used to deal with time-dependent confounding when estimating the causal effect of a time-varying treatment. The unobserved potential outcomes and intermediate outcomes that would have been observed under different hypothetical treatments are predicted from models for potential outcomes and models for time-varying confounders. The predicted potential outcomes under different hypothetical DTR assignments are then contrasted for causal effect estimates. As the number of longitudinal time points increases, the method relies more heavily on the parametric models used to extrapolate covariates and outcomes, increasing the potential for bias from model misspecification.

The IPW approach reweights each individual inversely by the probability of following specific regimes so that, in the weighted population, treatment can be regarded as randomly allocated to these regimes. Time-varying weights are required for handling time-dependent confounding. This involves specifying a model for treatment trajectory over longitudinal follow-up that can include time-dependent covariates. The IPW approach does not require models for the distribution of outcomes and covariates, which in principle makes it less susceptible to model misspecification than the g-computation formula. The method can, however, generate unstable parameter estimates if there are extreme weights, raising the possibility of finite-sample bias, which can often be alleviated by using stabilized weights or truncation ( Cole and Hernán, 2008 ; Cain et al ., 2010 ).

1.3 |. IeDEA data

The IeDEA consortium, established in 2005, collects clinical and demographic data on HIV-infected individuals from seven global regions, four of which are in Africa. Data from African regions derive from 183 clinics providing ART ( Egger et al ., 2012 ). Our analysis makes use of clinical encounter data, drawn from the East Africa region, on 1962 HIV-infected and ART naive adolescents who were diagnosed with HIV between 20 February 2002 and 19 November 2012. The dataset contains individual-level information at diagnosis on the following variables: age, gender, clinic site, Centers for Disease Control and Prevention (CDC) class (a four-level ordinal diagnostic indicator of HIV severity), CD4 count, weight-for-age Z scores (WAZ), and height-for-age Z scores (HAZ). The dataset also includes longitudinal information on ART initiation status, death, CD4 count, WAZ, and HAZ. These data were generated before the 2015 WHO guidelines that recommend immediate ART initiation, which explains the considerable variability in ART initiation patterns observed in our data. The follow-up visits vary considerably from patient to patient, resulting in irregularly and sparsely measured CD4 cell count (1.71, 1.32, and 1.10 measurements/person/year within 1, 2, and 3 years of diagnosis) and various ART initiation patterns ( Figure 1 ). Kaplan-Meier estimates of mortality 1, 2, and 3 years postdiagnosis are 3.3%, 4.5%, and 5.6%, respectively.

FIGURE 1 CD4 and ART initiation status during follow-up for nine randomly selected individuals. Empty circles indicate no ART and filled circles represent on ART. Two gray lines denote 1 and 2 years postdiagnosis. The purple line corresponds to end of follow-up. ART, antiretroviral therapy

Our goal is to compare CD4 cell count and mortality rate at 1 and 2 years postenrollment under dynamic regimes defined in terms of initiating treatment at specific CD4 threshold values. In the next section, we define the randomized trial our analysis is designed to emulate, and the outcome measure (utility) used for the comparisons.

The remainder of the paper is organized as follows: Section 2 describes notation and the statistical problem. Section 3 delineates the approaches to estimating and comparing dynamic continuous-time treatment initiation rules with sparsely measured outcomes and death. Section 4 presents results from our analysis of IeDEA data and highlights new insights relative to previous studies. Section 5 provides a summary and directions for future research.

2 |. NOTATION AND DYNAMIC REGIMES

2.1 |. Randomized trial being emulated to compare dynamic regimes

Ideally, causal comparisons of dynamic regimes should be based on a hypothetical randomized trial ( Hernán et al ., 2006 ). In our setting, the trial we are emulating would randomize individuals at time t = 0 to regimes in a set 𝒬 = { 0 , 200 , 210 , 220 , … , 490 , 500 , ∞ } , where q = 0 corresponds to “never treat” and q = ∞ denotes “treat immediately,” and other regimes correspond to initiating treatment when CD4 falls below q . Each individual would be followed to a specific time point t *, at which point survival status would be ascertained and, for those surviving to t *, CD4 would be measured. For those who discontinue follow-up prior to t *, we assume treatment status (on or off) at the time of discontinuation would still apply at t *.

For each individual, let { D q : q ∈ 𝒬 } represent the set of potential outcomes, one for each regime, indicating death at t *, such that D q = 1 if dead and D q = 0 if alive. Similarly define { Y q : q ∈ 𝒬 } to be the set of potential CD4 counts for an individual who survives to t *. Now define, for q ∈ 𝒬 , the composite outcome X q = (1 − D q ) Y q , with X q = 0 for those who die prior to t * and X q = Y q > 0 for those who survive. We use both mortality rate P ( D q = 1) = P ( X q = 0) and quantiles of X q as a basis for comparing treatments. The cumulative distribution function of X q is a useful measure of treatment utility because it has point mass at zero corresponding to the mortality rate, and thereby reflects information about both mortality and CD4 cell count among survivors; for example, P ( X q > 0) is the survival fraction and P ( X q > x ), for x > 0, is the proportion of individuals who survive to t * and have CD4 count greater than x .
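To make the utility measure concrete, the composite outcome and the summaries built from its distribution can be computed as follows (a minimal Python sketch with hypothetical data; the authors' implementation is in R in the Supporting Information):

```python
import numpy as np

def composite_outcome(dead, cd4):
    # X_q = (1 - D_q) * Y_q: 0 for deaths, CD4 count for survivors
    return (1 - dead) * cd4

# Hypothetical potential outcomes under one regime q
dead = np.array([1, 0, 0, 1, 0])        # D_q: death indicator at t*
cd4  = np.array([0, 450, 320, 0, 610])  # Y_q: CD4 at t* for survivors

x = composite_outcome(dead, cd4)
mortality = np.mean(x == 0)             # P(X_q = 0), the mortality rate
surv_frac = np.mean(x > 0)              # P(X_q > 0), the survival fraction
mean_cd4_survivors = x[x > 0].mean()    # E(X_q | X_q > 0)
```

Because the point mass at zero and the positive part live in the same vector, any functional of the distribution of X q (mortality, quantiles, conditional means among survivors) can be estimated from the single composite outcome.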

2.2 |. Defining DTR

Let { Z ( t ): t ≥ 0}, where Z ( t ) > 0, represent CD4 cell count, which is defined for all t but measured only at discrete-time points for each individual (see below). Let T denote survival time, with { N T ( t ): t > 0} its associated zero-one counting process. Each individual has a p × 1 covariate process { L ( t ): t ≥ 0}, some elements of which may be time-varying. The time-varying covariates may be recorded at times other than those where Z is recorded. Finally let A denote the time of treatment initiation, with associated counting process { N A ( t ): t ≥ 0} and intensity function λ A ( t ). Adopting a convention in the DTR literature ( Robins et al ., 2008 ), we assume the decision to initiate ART at t is made after observing the covariates and CD4 cell count; that is, for a given t , N A ( t ) occurs after Z ( t ) and L ( t ). Finally let C be a censoring (dropout) time, with associated counting process N C ( t ).

At a fixed time t , let H ( t ) = { Z ( t ), N T ( t ), L ( t ), N A ( t ), N C ( t )} represent the most recent values of each process. We use overbar notation to denote the history of a process, so that, for example, L ¯ ( t ) = { L ( s ) : 0 ≤ s ≤ t } is the history of L ( t ) up to t . All individuals are observed at baseline and then at a discrete number of time points whose number, frequency, and spacing may vary. Hence the observed data process for individual i (=1,…, n ) is denoted by H ¯ i ( t i K i ) = { H i ( t ) : t = 0 , t i 1 , t i 2 , … , t i K i }

2.3 |. Mapping observed treatment to DTR

The DTR “initiate treatment when Z ( t j ) falls below threshold q ” (where t j is time at the j th visit) is a deterministic function r q ( H ¯ ( t j ) ) that depends on observed values of Z ¯ ( t j ) and treatment history N ¯ A ( t j ) ; for brevity we suppress subscript j and write r q ( t ), which applies to each individual’s actual visit times. As some patients have missing baseline CD4, let R Z ( t ) be a binary indicator with R Z ( t ) = 1 denoting that CD4 has not been observed by time t . At t = 0, the rule is r q (0) = I { R Z (0) = 1 or Z (0) < q }, indicating immediate initiation regardless of Z (0) or treat if Z (0) is below q . For t > 0, we define Z min ( t ) = min j : 0 ≤ t j < t Z ( t j ) to be the lowest previously recorded value of Z prior to t . Then,

r q ( t ) = 0 if N A ( t − ) = 0 and { Z ( t ) is unobserved or min{ Z ( t ), Z min ( t )} ≥ q }; r q ( t ) = 1 if N A ( t − ) = 0 and Z ( t ) < q ≤ Z min ( t ); r q ( t ) = 1 if N A ( t − ) = 1.

In words, the first line of the rule says not to treat if an individual has not yet initiated treatment and Z ( t ) has not fallen below q or has not been observed; the second line says to treat if time t represents the first time Z ( t ) has fallen below q ; the third line says to keep treating once ART has been initiated.

In addition to the observed data process, we define a regime-specific compliance process {Δ q ( t ): t ≥ 0}, where Δ q ( t ) = 1 if regime q is being followed at time t and Δ q ( t ) = 0 otherwise. Written in terms of H ¯ ( t ) and r q ( t ), we have Δ q ( t ) = N A ( t ) r q ( t ) + {1 − N A ( t )}{1 − r q ( t )}.

Hence, if an individual’s actual treatment status at time t agrees with the DTR q , then this individual is compliant with regime q at time t . Thus for each individual and for each q ∈ 𝒬 , we observe, in addition to H ¯ ( t ) , a regime compliance process { Δ q i ( t ) : t = 0 , t i 1 , … , t i K i } .
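As an illustration, the threshold rule and the compliance indicator can be coded directly. This is a Python sketch under the simplifying assumption that each visit supplies the current CD4, the lowest previously recorded CD4, and prior treatment status; the function names are ours, not from the paper:

```python
import math

def r_q(q, n_a_prev, z, z_min):
    """DTR rule 'initiate ART when CD4 falls below threshold q' at one visit.
    n_a_prev: 1 if ART was initiated before this visit, else 0.
    z: CD4 at this visit (None if not measured).
    z_min: lowest previously recorded CD4 (math.inf if none recorded)."""
    if n_a_prev == 1:
        return 1                          # keep treating once ART is initiated
    if z is not None and z < q <= z_min:
        return 1                          # first time CD4 falls below q
    return 0                              # otherwise, do not initiate

def compliance(n_a, r):
    # Delta_q(t) = N_A(t) * r_q(t) + (1 - N_A(t)) * (1 - r_q(t))
    return n_a * r + (1 - n_a) * (1 - r)
```

For example, with q = 350, a patient not yet on ART whose CD4 drops to 300 (previous minimum 400) is prescribed initiation by the rule; a patient who starts ART at that visit is compliant (Δ q = 1), while one who does not initiate is noncompliant.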

2.4 |. Missing outcomes due to sparse measurement times and censoring

For those who remain alive at t *, the observed X i corresponds to Z i ( t *). When measurement of Z i ( t ) is sparse and irregular, Z i ( t *) will not be directly observed unless t ik = t * for some k ∈ {1,…, K i }. In settings like this, it is common to define the observed outcome as the value of Z i ( t ) closest to t * and falling within a prespecified interval [ t a , t b ] containing t *. Specifically, X i is the value of Z ( t ik ) such that t ik ∈ [ t a , t b ] and | t ik − t *| is minimized over k . Even using this definition, the interval [ t a , t b ] still may not contain any of the measurement times for some individuals; hence X i can be missing even for those who remain in follow-up at t *. The other cause of missingness in X i is dropout, which occurs when t i K i < t a .
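The window-based outcome definition can be sketched as follows (Python, with hypothetical measurement times; not the authors' code):

```python
import numpy as np

def outcome_near_tstar(times, cd4, t_star, t_a, t_b):
    """Return the CD4 value measured within [t_a, t_b] closest to t_star,
    or None if no measurement falls in the window (X_i missing)."""
    times = np.asarray(times, dtype=float)
    cd4 = np.asarray(cd4, dtype=float)
    in_window = (times >= t_a) & (times <= t_b)
    if not in_window.any():
        return None
    idx = np.argmin(np.abs(times[in_window] - t_star))
    return cd4[in_window][idx]
```

With t* = 365 and window [185, 545], a patient measured on days 30, 200, 500, and 700 contributes the day-500 CD4 value (closest in-window visit), whereas a patient measured only on days 30 and 700 has a missing outcome despite remaining in follow-up.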

For both of these situations, we rely on multiple imputation based on a model for the joint distribution of the CD4 process Z ( t ) and the mortality process N T ( t ). The general strategy is as follows: first, we specify and fit a model for the joint distribution [ Z ( t ) , N T ( t ) ∣ H ¯ ( t ) ] of CD4 and mortality, conditional on observed history. For those who are known to be alive but do not have a CD4 measurement within the prespecified interval [ t a , t b ], we impute X ˜ i ~ [ Z ( t * ) ∣ H ¯ i ( t * ) ] from the fitted CD4 submodel. For those who are missing X i because of right censoring, we proceed as follows: (a) calculate P { N T ( t * ) = 1 ∣ H ¯ i ( t i K ) } from the fitted survival submodel, and impute D ˜ i from a Bernoulli distribution having this probability; (b) for those with, D ˜ i = 0 impute X ˜ i ~ [ Z ( t * ) ∣ H ¯ i ( t * ) ] from the fitted CD4 submodel; and (c) for those with D ˜ i = 1 , set X ˜ i = 0 . Further details are given in Section 3.5 .
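Schematically, one imputation pass per individual follows steps (a)-(c). In this Python sketch, `p_death` and `draw_cd4` stand in for predictions from the fitted survival and CD4 submodels and are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_outcome(alive_at_tstar, has_cd4, cd4_obs, p_death, draw_cd4):
    """One imputation of X_i at t* for a single individual.
    p_death: P{N_T(t*) = 1 | history} from the fitted survival submodel.
    draw_cd4: callable returning one draw from [Z(t*) | history]."""
    if alive_at_tstar and has_cd4:
        return cd4_obs                    # fully observed: nothing to impute
    if alive_at_tstar:
        return draw_cd4()                 # alive, but no CD4 in the window
    # censored before t*: (a) impute death status from the survival submodel
    if rng.random() < p_death:
        return 0.0                        # (c) imputed death: X = 0
    return draw_cd4()                     # (b) imputed survivor: draw CD4
```

Repeating this pass M times yields the multiply imputed datasets to which the IPW analysis is then applied.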

3 |. ESTIMATING AND COMPARING EFFECTIVENESS OF DYNAMIC REGIMES

3.1 |. Assumptions needed for inference about dynamic regimes

We are interested in parameters or functionals of the potential outcomes distribution F X q ( x ) = P ( X q ≤ x ) . Specific quantities of interest are the mortality rate θ q 1 = P ( X q = 0 ) = F X q ( 0 ) , the median of the distribution of the composite outcome θ q 2 = F X q − 1 ( 1 / 2 ) , and the mean CD4 count among survivors θ q 3 = E ( X q | X q > 0). We first consider inference in the case where there is no missingness in the observable outcomes X i . Estimates for each of these quantities can be obtained using weighted estimating equations under specific assumptions:

A1. Consistency assumption.

To connect observed data to potential outcomes, we use the consistency relation X i = X qi when Δ qi ( t *) = 1, for all q ∈ 𝒬 , which implies that the observed outcome X i corresponds to the potential outcome X qi when individual i actually follows regime q . Note that an individual can potentially follow more than one regime at any given time.

A2. Exchangeability assumption.

In observational studies, individuals are not randomly assigned to follow regimes. Decisions on when to start ART are often made based on clinical guidelines and observable patient characteristics. We make the following exchangeability assumption, also known as sequential randomization of treatment: λ A ( t ∣ H ¯ ( t ) , T > t , X q ) = λ A ( t ∣ H ¯ ( t ) , T > t ) for t < t *. This assumption states that initiation of treatment at t among those who are still alive is conditionally independent of the potential outcomes X q conditional on observed history H ¯ ( t ) .

A3. Positivity assumption.

Finally we assume that at any given time t , there is positive probability of initiating treatment, among those who have not yet initiated, for all configurations H ¯ ( t ) ( Robins et al ., 2008 ): P { λ A ( t ∣ H ¯ ( t ) , T > t ) > 0 } = 1 . This implicitly assumes a positive probability of visiting clinic in the interval [ t, t *], conditional on H ¯ ( t ) .

3.2 |. Weighted estimating equations for comparing specific regimes

For illustration, consider estimating the mortality rate θ q 1 = P ( X q = 0). If individuals are randomized to specific regimes, a consistent estimator of the death rate is the sample proportion among those who follow regime q , that is, θ ^ q 1 = ∑ i Δ q i ( t * ) I ( X i = 0 ) / ∑ i Δ q i ( t * ) . This estimator is the solution to ∑ i Δ q i ( t * ) { I ( X i = 0 ) − θ q 1 } = 0 , which is an unbiased estimating equation when evaluated at the true value θ q 1 * of θ q 1 . We can similarly construct unbiased estimating equations for other quantities of interest. For example, under randomization, a consistent estimator of the median of X q is the solution to ∑ i Δ q i ( t * ) { I ( X i ≤ θ q 2 ) − ( 1 / 2 ) } = 0 .

For observational data, relying on the assumptions of consistency, positivity, and exchangeability, we can obtain consistent estimates of quantities of interest using weighted estimating equations. Returning to mortality rate, a consistent estimator of θ q 1 can be obtained as the solution to the weighted estimating equation ∑ i = 1 n Δ q i ( t * ) W q i { I ( X i = 0 ) − θ q 1 } = 0 , where W q i = 1 / P { Δ q ( t * ) = 1 ∣ H ¯ i ( t * ) } is the inverse probability of following regime q through time t * ( Robins et al ., 2008 ; Cain et al ., 2010 ; Shen et al ., 2017 ).
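The weighted estimating equation for the mortality rate has a closed-form solution: the weighted proportion of deaths among individuals following regime q . A Python sketch with hypothetical data (not the authors' code):

```python
import numpy as np

def ipw_mortality(delta_q, w, x):
    """Solve sum_i Delta_qi(t*) W_qi { I(X_i = 0) - theta } = 0 for theta."""
    delta_q = np.asarray(delta_q, dtype=float)
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    return np.sum(delta_q * w * (x == 0)) / np.sum(delta_q * w)

delta = np.array([1, 1, 0, 1, 1])                  # Delta_qi(t*): follows regime q
w     = np.array([2.0, 1.0, 3.0, 1.0, 2.0])        # estimated inverse-probability weights
x     = np.array([0.0, 410.0, 300.0, 0.0, 520.0])  # composite outcomes X_i
theta1 = ipw_mortality(delta, w, x)                # weighted death proportion
```

Setting all weights to 1 recovers the sample proportion valid under randomization; the weights matter only because regime compliance depends on history in observational data.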

In practice the weights W qi must be estimated from data; some of the estimated weights can be large, leading to estimators with high variability ( Cain et al ., 2010 ). This problem can be ameliorated to some degree by using stabilized weights of the form

W q i s = P { Δ q ( t * ) = 1 } / P { Δ q ( t * ) = 1 ∣ H ¯ i ( t * ) } . (1)

In this case, the numerator of the weight function needs to be calculated directly from the regime indicator processes. Specifically, for each regime q , define a 0–1 counting process N q ( t ) = 1 − Δ q (t) that jumps when regime q is no longer being followed, and let Λ q ( t ) denote its associated cumulative hazard function. Then S q ( t ) = P { N q ( t ) = 0} = P {Δ q ( t ) = 1}; hence (an estimate of) S q ( t *) = exp{−Λ q ( t *)} can be used as the numerator weight.
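The numerator P {Δ q ( t *) = 1} = S q ( t *) can be estimated by applying the Nelson-Aalen estimator to the times at which individuals stop following regime q . A self-contained Python sketch (hypothetical data; not the authors' code):

```python
import numpy as np

def nelson_aalen_survival(event_times, censor_times, t_star):
    """Estimate S_q(t*) = exp(-Lambda_q(t*)) from the times at which N_q jumps
    (regime q abandoned; np.inf if never) and end-of-observation times."""
    event_times = np.asarray(event_times, dtype=float)
    censor_times = np.asarray(censor_times, dtype=float)
    obs = np.minimum(event_times, censor_times)   # observed time on study
    event = event_times <= censor_times           # jump actually observed
    cum_haz = 0.0
    for t in np.sort(np.unique(obs[event & (obs <= t_star)])):
        at_risk = np.sum(obs >= t)                # still following regime q at t-
        d = np.sum((obs == t) & event)            # departures from regime q at t
        cum_haz += d / at_risk
    return np.exp(-cum_haz)
```

For example, with three individuals who abandon regime q at times 2 and 4 (the third never does) and administrative censoring at 10, the estimate at t* = 5 is exp{−(1/3 + 1/2)}.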

3.3 |. Comparing regimes along a continuum

We can examine the effect of DTR q on X q at a higher resolution along a continuum such as 𝒬 = { 200 , 210 , … , 500 } (we use integers for 𝒬 , but theoretically it can include continuous values). When the number of regimes to be compared is large, some regimes may be followed by only a small number of individuals, and the sampling variability of regime effects estimated using the procedure for discrete regimes may be large ( Hernán et al ., 2006 ). A statistically more efficient approach is to formulate a causal model that captures the smoothed effect of q on a parameter of interest; we illustrate using the median θ q 2 = F X q − 1 ( 1 / 2 ) .

Let q l and q u denote the lower and upper bound of the regime continuum. Assume F X q − 1 ( τ ) , where τ is a fixed quantile, follows a structural model:

F X q − 1 ( τ ) = α 1 I ( q = ∞ ) + α 2 I ( q = 0 ) + d ( q ) I ( q l ≤ q ≤ q u ) , (2)

where d (·) is an unspecified function with smoothness constraints. In our application, we use natural cubic splines constructed from piecewise third-order polynomials that pass through a set of control points, or knots, placed at quantiles of q . This allows d ( q ) to flexibly capture the effect of q along the continuum and enables separate estimation of the discrete regimes q = ∞ and q = 0. Parameterizing our model in terms of the basis functions of a natural cubic spline with J knots ( Hastie et al ., 2009 ) yields F X q − 1 ( τ ) = α ⊤ V ( q ) , where

and d † ( q ) = [ d 1 † ( q ) , … , d J † ( q ) ] ⊤ are the J basis functions of d ( q ). The parameter α is a vector of J + 2 coefficients for I ( q = ∞), I ( q = 0) and the basis functions d † ( q ). The causal effect of regime q on the potential outcome X q is therefore encoded in the parameter α . A consistent estimator of α can be obtained by solving the estimating equation ( Leng and Zhang, 2014 ):

∑ i = 1 n ∑ q ∈ 𝒬 Δ q i ( t * ) W q i s V ( q ) { I ( X i ≤ α ⊤ V ( q ) ) − τ } = 0 .

Setting τ = 0.5 estimates the causal effect of q on the median of X q .

3.4 |. Derivation and estimation of continuous-time weights

3.4.1 |. Assuming no dropout or death prior to t *

The denominator of W q i s in Equation (1) is the probability of individual i following regime q through t *, conditional on observed history H i ( t *). As described in Cain et al . (2010) , Robins et al . (2008) , and Shen et al . (2017) , for discrete-time settings where the measurement times are common across individuals, this probability corresponds to the cumulative product of conditional probabilities of treatment indicators over a set of time intervals 0 = t 0 < t 1 < ⋯ < t K = t *. Specifically,

P { Δ q i ( t * ) = 1 ∣ H ¯ i ( t * ) } = ∏ k = 1 K P { N i A ( t k ) ∣ N i A ( t k − 1 ) , H ¯ i ( t k − 1 ) } , (3)

evaluated at the observed treatment history.
This establishes the connection between regime compliance and treatment history. Equation (3) represents the treatment history among those with Δ qi ( t *) = 1; therefore, to compute the probability of regime compliance for those with Δ qi ( t *) = 1, we just need to model their observed treatment initiation process, as described in Equation (4) .

This observation allows us to generalize the weights for the discrete-time setting to the continuous-time process. Let d N i A ( t ) be the increment of N i A over the small time interval [ t, t + dt ). Note that conditional on H ¯ ( t ) , the occurrence of treatment initiation for individual i in [ t, t + dt ) is a Bernoulli trial with outcomes d N i A ( t ) = 1 and d N i A ( t ) = 0 . Equation (3) can therefore be written as

∏ 0 ≤ t ≤ t * { λ A ( t ∣ H ¯ ( t ) ) d t } d N i A ( t ) { 1 − λ A ( t ∣ H ¯ ( t ) ) d t } 1 − d N i A ( t ) , (4)

which takes the form of the individual partial likelihood for the counting process { N i A ( t ) : 0 ≤ t ≤ t * } . When the number of time intervals between t 0 and t K increases, dt becomes smaller, and the finite product in (4) will approach a product integral ( Aalen et al ., 2008 ):

∏ 0 ≤ t ≤ t * { λ A ( t ∣ H ¯ ( t ) ) d t } d N i A ( t ) { 1 − λ A ( t ∣ H ¯ ( t ) ) d t } 1 − d N i A ( t ) (5)

= ∏ t ≤ t * λ A ( t ∣ H ¯ ( t ) ) Δ N i A ( t ) exp { − ∫ 0 t * λ A ( s ∣ H ¯ ( s ) ) d s } , (6)

where Δ N i A ( t ) = N i A ( t ) − N i A ( t − ) . The product integral of the first part in (5) is the finite product over the jump times of the counting process, hence the first factor in (6) . The second factor in (6) follows from properties of the product integral of an absolutely continuous function ( Aalen et al ., 2008 , Appendix A.1 ).

The individual counting process { N i A ( t ) , 0 ≤ t ≤ t * } will have at most one jump (at A i ), and in our case patients stay on ART once it is initiated. Hence, the product integral only needs to be evaluated up to the ART initiation time. Equation (6) therefore reduces to

f A ( A ∣ H ¯ ( A ) ) N A ( t * ) S A ( min ( A , t * ) ∣ H ¯ ( min ( A , t * ) ) ) 1 − N A ( t * ) , (7)

where S A ( t ∣ H ¯ ( t ) ) = exp { − Λ A ( t ∣ H ¯ ( t ) ) } is the survivor function associated with the ART initiation process.
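To illustrate the structure of the continuous-time denominator, the sketch below uses a piecewise-constant initiation hazard (a simplification; the paper fits a Cox model with time-varying covariates, and all numbers here are hypothetical). For an individual initiating at A ≤ t * the denominator is the density f A ( A ) = λ A ( A ) S A ( A ); for one who never initiates by t * it is the survivor function S A ( t *):

```python
import numpy as np

def cum_hazard(t, knots, rates):
    """Integrated hazard for a piecewise-constant intensity.
    knots: interval boundaries [0, k1, ..., inf); rates: hazard on each interval."""
    total = 0.0
    for lo, hi, lam in zip(knots[:-1], knots[1:], rates):
        total += lam * max(0.0, min(t, hi) - lo)
    return total

def weight_denominator(a_time, t_star, knots, rates):
    """f_A(A)^{N_A(t*)} * S_A(min(A, t*))^{1 - N_A(t*)} for one individual."""
    s = np.exp(-cum_hazard(min(a_time, t_star), knots, rates))
    if a_time <= t_star:                          # initiated by t*
        lam = rates[np.searchsorted(knots, a_time, side='right') - 1]
        return lam * s                            # density f_A(a_time)
    return s                                      # never initiated: S_A(t*)
```

For example, with hazard 0.5 on [0, 1) and 0.2 thereafter, an individual initiating at A = 0.5 contributes 0.5·exp(−0.25) to the denominator, while one still untreated at t* = 2 contributes exp(−0.7).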

For an alternate derivation of the continuous-time weights, see Johnson and Tsiatis (2005) , who use a Radon-Nikodym derivative of one integrated intensity process (under randomized treatment allocation) with respect to another (for the observational study), and arrive at the same weighting scheme as ours. Simulation studies by Hu et al . (2018) demonstrate consistency and stability of weighted estimators using continuous-time weights in empirical settings when assumptions A1 to A3 hold and the weight model is correctly specified.

Components of the denominator weights are estimated from a fitted hazard model for treatment initiation. Specifically we assume λ A ( t ∣ H ¯ ( t ) ) follows a Cox proportional hazards model λ A ( t ∣ H ¯ ( t ) ) = λ 0 A ( t ) u ( H ¯ ( t ) ; ϕ ) , where u is a strictly positive function capturing the effect of covariates and ϕ is a finite-dimensional parameter vector. Details of the model specification used in our application are given in Section 4 . The parameter ϕ is estimated using maximum partial likelihood estimation, and the baseline hazard function λ 0 A ( t ) is estimated using the Nelson-Aalen estimator. The functions f A and S A are estimated via

f ^ A ( t ∣ H ¯ ( t ) ) = u ( H ¯ ( t ) ; ϕ ^ ) λ ^ 0 A ( t ) exp { − Λ ^ A ( t ∣ H ¯ ( t ) ) } (8)

S ^ A ( t ∣ H ¯ ( t ) ) = exp { − Λ ^ A ( t ∣ H ¯ ( t ) ) } , with Λ ^ A ( t ∣ H ¯ ( t ) ) = ∫ 0 t u ( H ¯ ( s ) ; ϕ ^ ) d Λ ^ 0 A ( s ) . (9)

To estimate the stabilizing numerator weight P {Δ q ( t *) = 1}, we use the q -specific survivor function associated with the counting process N q ( t ), estimated using the Nelson-Aalen estimator S ^ q ( t * ) = exp { − Λ ^ q ( t * ) } .

3.4.2 |. Considering dropout or death prior to t *

In the IeDEA data, some participants drop out prior to t *, which requires modifications to the weight specification. We make an additional assumption:

A4. Conditional constancy assumption.

Once lost to follow-up at C i < t *, treatment and regime status remain constant, that is, N A ( t ) = N A ( C i ) and Δ qi ( t ) = Δ qi ( C i ) for all t ∈ [ C i , t *].

Under this assumption, both regime adherence and treatment initiation status are deterministic after C i . Hence, the stabilized weight is S q ( C i ) / f A ( A i ∣ H ¯ ( A i ) ) for those who initiated treatment prior to C i and S q ( C i ) / S A ( C i ∣ H ¯ ( C i ) ) for those who have not. If death occurs at T i < t *, both compliance and treatment initiation processes only need to be evaluated up to time T i , and estimation of the stabilized weights is the same as described above, with T i replacing C i . Let U i = min ( T i , C i , t *) denote the duration of follow-up for individual i . The modified stabilized weight can be written as

W q i s = S q ( U i ) / [ f A ( A i ∣ H ¯ ( A i ) ) N A ( U i ) S A ( U i ∣ H ¯ ( U i ) ) 1 − N A ( U i ) ] .

Estimation follows by Equations (8) and (9) .

3.5 |. Imputation strategy for missing and censored outcomes

Imputations of missing CD4 counts and mortality status are generated from a joint model of CD4 and survival. The two processes are linked via subject-specific random effects that characterize the true CD4 trajectory ( Rizopoulos, 2012 ). Hazard of mortality is assumed to depend on the true, underlying CD4 count as described below.

Observed CD4 counts as a function of time are specified with a two-level model. At the first level, Z i ( t ) = m i ( t ) + e i ( t ), where m i ( t ) is the true, underlying CD4 cell count and e i ( t ) ~ N (0, σ ( t )) is within-subject variation of the observed counts around the truth. The second level specifies the trajectory in terms of baseline covariates L i (0), treatment initiation time A i , follow-up time t , and subject-specific random effects b i :

m i ( t ) = h 1 ( L i ( 0 ) , A i , t ; β ) + h 2 ( A i , t ; b i ) .

In the model for m i ( t ), h 1 ( L i (0), A i , t ; β ) models the effect of L (0), A , and t in terms of a population-level parameter β and h 2 ( A i, t ; b i ) captures individual-specific time trajectories relative to treatment initiation in terms of random effects b i , where b i ~ N (0, Ω).

The hazard model for death uses true CD4 count m i ( t ) as a covariate, in addition to components of L i (0) and treatment timing. The specification we use in our analysis is

λ i T ( t ) = λ 0 T ( t ) exp { g 1 ( m i ( t ) ; γ 1 ) + g 2 ( L i ( 0 ) , N i A ( t ) ; γ 2 ) } ,

where λ 0 T ( t ) is an unspecified baseline hazard function, g 1 (·; γ 1 ) is a smooth, twice-differentiable function indexed by a finite-dimensional parameter γ 1 , and g 2 (·; γ 2 ) captures the main effect of baseline covariates, the instantaneous effect of treatment initiation, and potential interactions between them. In our application, we use cubic smoothing splines to model the effects of m i ( t ) and of continuous baseline covariates. This model has fewer covariates than the CD4 model because of relatively low mortality rates.

The joint model is used to generate imputations where CD4 count and mortality information are missing at time t *. The variance of our target parameters θ q = ( θ q 1 , θ q 2 , θ q 3 ) is based on Rubin’s variance estimator ( Rubin, 1987 ); full details of model specifications and variance calculations used in the data analysis in Section 4 appear in Supporting Information .

4 |. APPLICATION TO IEDEA DATA

Our analysis uses longitudinal data on 1962 adolescents with at least 2 years of follow-up time. Time is measured in days. We evaluate effectiveness of the regimes at times t * = 365 and 730 days (1 and 2 years, respectively) after diagnosis. To capture the CD4 observed at t *, we set [ t a , t b ] = [ t * − 180, t * + 180]; hence Y is the CD4 count measured at a time falling within [ t a , t b ] and closest to t *. If no CD4 is captured within [ t a , t b ], then Y is missing. The percentage of missing data for Y is 29.1% at 1 year and 43.4% at 2 years. Among those with missing 1-year outcome, 41.2% were lost to follow-up prior to t a ; for those with missing 2-year outcome the proportion is 42.5%. Table 1 describes summary statistics for baseline variables and follow-up, the observed outcome pair ( Y , D ) (CD4 and deaths), and ART initiation.

Summary statistics

Abbreviation: ART, antiretroviral therapy; HAZ, height-for-age Z scores; WAZ, weight-for-age Z scores.

Missing outcomes are imputed following the strategies described in Section 3.5 , and the complete datasets are analyzed using IPW methods for the causal comparative analysis. The fit of the CD4 submodel was examined using residual plots and examination of individual-specific fitted curves; for the mortality submodel, we tested the proportional hazards assumption for each term included in the model. These model checks indicated no evidence of lack of fit. Details appear in Supporting Information .

Following the deterministic rule r q ( H ¯ ( t ) ) described in Section 3 , we create the regime-specific indicators Δ qi ( t *) for q ∈ 𝒬 for each patient based on the concordance between their ART initiation history { N A ( t ): 0 ≤ t ≤ t *} and r q ( H ¯ ( t * ) ) . To estimate regime weights, we fit the model λ A ( t ∣ H ¯ ( t ) ) = λ 0 A ( t ) u ( H ¯ ( t ) ; ϕ ) to individuals’ treatment and covariate histories observed in the original data to estimate the denominator of W q i s in (1) . For the time-varying component of H ¯ ( t ) , we include the most recently observed values of CD4, WAZ, and HAZ as main effects, modeled using cubic splines. For baseline covariates, we include age at diagnosis (modeled using a cubic spline) and the categorical variables gender and CDC symptom classification (mild, moderate, severe, asymptomatic, and missing). To estimate the numerator of the stabilized weights, we use the Nelson-Aalen estimator of the survival function for each regime-specific compliance process, as described in Section 3.4.1 . We truncated the weights at 5% and 95% quantiles to improve stability. We conducted a sensitivity analysis to assess the impact of weight truncation. The point estimates and the confidence intervals for treatment effect on mortality were unchanged with different weighting schemes. Point estimates and variation associated with treatment effect on the composite outcome increased with less truncation; the confidence intervals indicated greater variability but no change in substantive conclusion about treatment effect. For the denominator weight model, we tested the proportional hazards assumption for each term included in the model and found no violations of the assumption. Details appear in Supporting Information .
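The weight truncation described above amounts to clipping the estimated stabilized weights at their 5th and 95th percentiles (a minimal Python sketch; the weight values are hypothetical):

```python
import numpy as np

def truncate_weights(w, lower=0.05, upper=0.95):
    """Truncate weights at the given lower/upper quantiles to improve stability."""
    lo, hi = np.quantile(w, [lower, upper])
    return np.clip(w, lo, hi)

w = np.arange(1.0, 101.0)   # hypothetical stabilized weights 1, 2, ..., 100
wt = truncate_weights(w)    # extremes pulled in to the 5%/95% quantiles
```

Less truncation (e.g., 1%/99%) leaves more extreme weights in place, trading lower bias for higher variance; the sensitivity analysis varies these cutoffs.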

We summarize the comparative effectiveness for specific regimes q ∈ {0, 200, 350, 500, ∞} in Table 2 in terms of mortality proportion θ q 1 = P ( X q = 0), median of the distribution of the composite outcome θ q 2 = F X q − 1 ( 1 / 2 ) , and mean CD4 count among survivors, θ q 3 = E ( X q | X q > 0). (The quantity θ q 3 is not a causal effect because it conditions on having survived to time t *.) Confidence intervals are constructed using the normal approximation to the sampling distribution, derived from bootstrap resampling, as described in Supporting Information .

Comparing effectiveness of specific regimes q ∈ {0, 200, 350, 500, ∞} for t * = 1 year and t * = 2 years

θ q 1 = P ( X q = 0 ) = F X q ( 0 ) ; θ q 2 = F X q − 1 ( 1 / 2 ) ; θ q 3 = E ( X q ∣ X q > 0 ) . Ninety-five percent confidence intervals are shown below the point estimates.

Immediate ART initiation yields significantly lower mortality rate and higher medians of the composite outcome at both years than delayed initiation. The “never treat” regime leads to significantly higher mortality rate; among the patients who survive to one year, however, CD4 is higher (resulting in higher θ q 2 and θ q 3 ), indicating that those who survive without treatment may be relatively healthier at the beginning of follow-up.

Figure 2 shows the effect of weighting on estimated medians of X q for q = 0, 200, 350, 500, ∞. We compare weighted and unweighted estimates using imputed data; the weighted estimates suggest that immediate ART initiation leads to the highest θ ^ q 2 , whereas the unweighted estimates, which ignore the nonrandom allocation of DTRs, identify “never treat” as the optimal regime. The difference could be attributable to differences in baseline covariates (see Table 8 in Supporting Information ). Not surprisingly, the weighted estimates have higher variability.

FIGURE 2 Comparing the median values of X q under regime q ∈ {0, 200, 350, 500, ∞}. Weighted (W) and unweighted (UW) estimates are compared side-by-side

Finally, we estimate the causal effect of the DTR on the median of X_q using the smoothed relationship between F_{X_q}^{-1}(1/2) and q from Model (2). The estimated “dose response” curves of θ̂_{q,2} vs. q appear in the top panel of Figure 3. The bottom panel describes the difference in θ̂_{q,2} between dynamic regimes q = ∞ and q ∈ {0, 200, 210, …, 500}. Our results indicate that immediate ART initiation leads to significantly higher median values of the composite outcome X_q than delayed ART initiation. Furthermore, as an illustration of increased efficiency, the variance of the 1-year outcome associated with q = 350 estimated from the structural model is 180, compared to 209 for the regime-specific estimate, a 13.9% reduction. The R code used to implement our approaches is available in Supporting Information.

Figure 3: The effectiveness of continuous regimes. The upper panel presents the median of X_q, θ̂_{q,2}, at 1 and 2 years; the bottom panel displays the difference in θ̂_{q,2} at 1 and 2 years between regimes q = ∞ and q ∈ {0, 200, 210, …, 500}. The triangles represent θ̂_{q,2} corresponding to regime q = 0 (upper panel), and the difference in θ̂_{q,2} between regimes q = ∞ and q = 0 (bottom panel). Similarly, the diamonds correspond to θ̂_{q,2} under regime q = ∞. The filled symbols are the mean values, and the empty symbols are the upper and lower bounds of the 95% confidence intervals.

5 |. SUMMARY AND DISCUSSION

Motivated by inconclusive evidence supporting the current WHO guidelines promoting immediate ART initiation in adolescents, we have conducted an analysis comparing dynamic treatment initiation rules. Our approach utilizes the theory of causal inference for DTRs. We extend the framework to allow causal comparisons of both specific regimes and regimes along a continuum. Additionally, we propose strategies to address sparse outcomes and death, using a composite outcome that can be used to draw causal comparisons between DTRs.

Our analysis suggests that immediate ART initiation leads to mortality benefit and higher median values of the composite outcome, relative to delayed ART initiation. The “never treat” regime yields significantly higher mortality than other initiation rules.

The data from IeDEA pose several challenges that we addressed within our analysis. First, treatment initiation times are recorded on a continuous-time scale. Existing approaches have relied primarily on discretization of the time axis to construct inverse probability weights. We have derived a method to construct weights that uses the continuous-time information. Similar strategies have been employed in Hu et al . (2018) and Johnson and Tsiatis (2005) ; see also Lok (2008) for related work in the context of structural nested mean models.

Second, CD4 counts are measured at irregularly spaced times. This creates challenges when the goal is to compare treatment regimes at a specific follow-up time, as would be the case with a randomized trial. Moreover, even though our sample comprises those who would be scheduled to have at least 2 years of follow-up, some individuals discontinue follow-up prior to that time. These features of the data lead to incomplete observation of CD4 count at the target analysis time and to censoring of death times. To address this issue, we have relied on a parametric model for the joint distribution of observed CD4 counts and death times. The CD4 submodel is flexible enough to capture important features of the longitudinal trajectory of CD4 counts, and is used to impute missing observations at the target follow-up time. The mortality submodel, which depends explicitly on the CD4 trajectory, is used to impute mortality status at the target estimation time. A limitation of the imputation model is that death and CD4 may depend on HIV viral load, but availability of this variable is limited in our data and therefore not included in the model.

The primary strength of this approach is its ability to handle a complex data set on its own terms, without artificially aligning measurement times. Although imputation-based analyses rely on extrapolating missing outcomes, and both the weight model and imputation model must be correctly specified, a potential advantage of our approach over g-computation is reduced dependence on data extrapolation. There are several possible extensions as well. First, largely due to limitations related to computing, we used a two-step approach to fit our observed data imputation model rather than a joint likelihood approach. There may be some small biases (Rizopoulos, 2012) introduced by using a two-step rather than a fully joint model. Second, the imputation model may not be fully compatible with the weighting model in the sense that we are not constructing a joint distribution of all observed data. Our approach emulates a setting whereby the data imputer and the data analyst are separate: the imputed dataset can be turned over for whatever kind of analysis would be applied to a complete dataset. Empirical checks of our joint model for CD4 and mortality showed no evidence of lack of fit to the observed data (see Supporting Information). To make the models more flexible, it may be possible to employ machine learning methods as in Shen et al. (2017). Finally, developing sensitivity analyses to capture the effects of unmeasured confounding for our model would be a worthwhile and important contribution.

Supplementary Material

ACKNOWLEDGMENTS

The authors are grateful to Michael Daniels for helpful comments and to Beverly Musick for constructing the analysis dataset. This work was funded by grants R01-AI-108441, R01-CA-183854, U01-AI-069911, and P30-AI-42853 from the U.S. National Institutes of Health.

SUPPORTING INFORMATION

Additional supporting information may be found online in the Supporting Information section.

  • Aalen O, Borgan O and Gjessing H (2008). Survival and Event History Analysis: A Process Point of View. New York: Springer Science & Business Media.
  • Berk DR, Falkovitz-Halpern MS, Hill DW, Albin C, Arrieta A, Bork JM et al. (2005). Temporal trends in early clinical manifestations of perinatal HIV infection in a population-based cohort. The Journal of the American Medical Association, 293, 2221–2231.
  • Cain LE, Robins JM, Lanoy E, Logan R, Costagliola D and Hernán MA (2010). When to start treatment? A systematic approach to the comparison of dynamic regimes using observational data. The International Journal of Biostatistics, 6(2). http://www.bepress.com/ijb/vol6/iss2/18
  • Chakraborty B and Murphy SA (2014). Dynamic treatment regimes. Annual Review of Statistics and Its Application, 1, 447–464.
  • Cole SR and Hernán MA (2008). Constructing inverse probability weights for marginal structural models. American Journal of Epidemiology, 168, 656–664.
  • Daniel R, Cousens S, DeStavola B, Kenward M and Sterne J (2013). Methods for dealing with time-dependent confounding. Statistics in Medicine, 32, 1584–1618.
  • Egger M, Ekouevi DK, Williams C, Lyamuya RE, Mukumbi H, Braitstein P et al. (2012). Cohort profile: the international epidemiological databases to evaluate AIDS (IeDEA) in sub-Saharan Africa. International Journal of Epidemiology, 41, 1256–1264.
  • Hastie T, Tibshirani R and Friedman J (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer.
  • Hernán M, Lanoy E, Costagliola D and Robins J (2006). Comparison of dynamic treatment regimes via inverse probability weighting. Basic and Clinical Pharmacology and Toxicology, 98, 237–242.
  • Hu L, Hogan JW, Mwangi AW and Siika A (2018). Modeling the causal effect of treatment initiation time on survival: application to HIV/TB co-infection. Biometrics, 74, 703–713.
  • Johnson BA and Tsiatis AA (2005). Semiparametric inference in observational duration-response studies, with duration possibly right-censored. Biometrika, 92, 605–618.
  • Leng C and Zhang W (2014). Smoothing combined estimating equations in quantile regression for longitudinal data. Statistics and Computing, 24, 123–136.
  • Lok JJ (2008). Statistical modeling of causal effects in continuous time. The Annals of Statistics, 36, 1464–1507.
  • Luzuriaga K, McManus M, Mofenson L, Britto P, Graham B and Sullivan JL (2004). A trial of three antiretroviral regimens in HIV-1-infected children. New England Journal of Medicine, 350, 2471–2480.
  • Mark D, Armstrong A, Andrade C, Penazzato M, Hatane L, Taing L et al. (2017). HIV treatment and care services for adolescents: a situational analysis of 218 facilities in 23 sub-Saharan African countries. Journal of the International AIDS Society, 20, 21591.
  • Moodie E, Richardson T and Stephens D (2007). Demystifying optimal dynamic treatment regimes. Biometrics, 63, 447–455.
  • Rizopoulos D (2012). Joint Models for Longitudinal and Time-to-Event Data: With Applications in R. Boca Raton, FL: CRC Press.
  • Robins J (1986). A new approach to causal inference in mortality studies with a sustained exposure period: application to control of the healthy worker survivor effect. Mathematical Modelling, 7, 1393–1512.
  • Robins J, Orellana L and Rotnitzky A (2008). Estimation and extrapolation of optimal treatment and testing strategies. Statistics in Medicine, 27, 4678–4721.
  • Rubin DB (1987). Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons.
  • Schomaker M, Leroy V, Wolfs T, Technau KG, Renner L, Judd A et al. (2017). Optimal timing of antiretroviral treatment initiation in HIV-positive children and adolescents: a multiregional analysis from Southern Africa, West Africa and Europe. International Journal of Epidemiology, 46, 453–465.
  • Shen J, Wang L and Taylor JM (2017). Estimation of the optimal regime in treatment of prostate cancer recurrence from observational data using flexible weighting models. Biometrics, 73, 635–645.
  • Violari A, Cotton MF, Gibb DM, Babiker AG, Steyn J, Madhi SA et al. (2008). Early antiretroviral therapy and mortality among HIV-infected infants. New England Journal of Medicine, 359, 2233–2244.
  • World Health Organization (2015). Guideline on When to Start Antiretroviral Therapy and on Pre-exposure Prophylaxis for HIV. Geneva, Switzerland: World Health Organization.

Causal-comparative Research

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

Causal-comparative research is an attempt to identify a causative relationship between an independent variable and a dependent variable. The relationship between the independent variable and the dependent variable is usually a suggested relationship (not proven) because the researcher does not have complete control over the independent variable.

The causal-comparative method seeks to establish causal relationships between events and circumstances. In other words, it finds out the causes of certain occurrences or non-occurrences. This is achieved by comparing the circumstances associated with observed effects and by noting the factors present in the instances where a given effect occurs and where it does not occur. This method is based on Mill's canons of agreement and difference, which state that the causes of a given observed effect may be ascertained by noting elements which are invariably present when the result is present and which are invariably absent when the result is absent.

Causal-comparative research scrutinizes the relationship among variables in studies in which the independent variable has already occurred, thus making the study descriptive rather than experimental in nature. Because the independent variable (the variable for which the researcher wants to suggest causation) has already been completed (e.g., two reading methods used by a school ), the researcher has no control over it. That is, the researcher cannot assign subjects or teachers or determine the means of implementation or even verify proper implementation.

Sometimes the variable either cannot be manipulated (e.g., gender) or should not be manipulated (e.g., who smokes cigarettes or how many they smoke). Still, the relationship of the independent variable on one or more dependent variables is measured and implications of possible causation are used to draw conclusions about the results.

Causal-comparative research is also known as “ex post facto” research (Latin for “after the fact”), since both the effect and the alleged cause have already occurred and must be studied in retrospect. In this type of research, investigators attempt to determine the cause or consequences of differences that already exist between or among groups of individuals.

This method is used particularly in the behavioral sciences. In education, because it is impossible, impracticable, or unthinkable to manipulate such variables as aptitude, intelligence, personality traits, cultural deprivation, teacher competence, and other variables that might present an unacceptable threat to human beings, this method will continue to be used.

Causal-Comparative Research Facts

  • The independent variable is not manipulated by the researcher.
  • Does not definitively establish cause-effect relationships.
  • Generally includes two or more groups and at least one dependent variable.
  • The independent variable in causal-comparative studies is often referred to as the grouping variable.
  • The independent variable has already occurred or is already formed.

The Nature of Causal-Comparative Research

Causal-comparative research, a common design in educational research studies, seeks to identify associations among variables. Relationships can be identified in a causal-comparative study, but causation cannot be fully established.

Causal-comparative research attempts to determine cause and effect, although it is not as powerful as experimental designs. It attempts to determine the cause or consequences of differences that already exist between or among groups of individuals.

The alleged cause and effect have already occurred and are being examined after the fact. The basic causal-comparative approach is to begin with a noted difference between two groups and then to look for possible causes for, or consequences of, this difference.

It is used when independent variables cannot or should not be examined using controlled experiments. When an experiment would take a considerable length of time and be quite costly to conduct, a causal-comparative study is sometimes used as an alternative.

Main purposes of causal-comparative research:

  • Exploration of Effects
  • Exploration of Causes
  • Exploration of Consequences

Basic Characteristics of Causal-comparative research

In short, the basic characteristics of causal-comparative research can be summarized as follows:

  • Causal-comparative research attempts to determine reasons, or causes, for an existing condition.
  • Causal-comparative studies are also called ex post facto because the investigator has no control over the exogenous variable; whatever happened occurred before the researcher arrived.
  • Causal-comparative research is sometimes treated as a type of descriptive research, since it describes conditions that already exist.
  • Causal-comparative studies attempt to identify cause-effect relationships; correlational studies do not.
  • Causal-comparative studies involve comparison; correlational studies involve relationship.
  • Causal-comparative studies typically involve two (or more) groups and one independent variable, whereas correlational studies typically involve two or more variables and one group.
  • In causal-comparative research the researcher attempts to determine the cause, or reason, for preexisting differences in groups of individuals.
  • It involves comparison of two or more groups on a single endogenous variable.
  • Retrospective causal-comparative studies are far more common in educational research.
  • The basic approach is sometimes referred to as retrospective causal-comparative research, since it starts with effects and investigates causes.
  • The basic causal-comparative approach involves starting with an effect and seeking possible causes.
  • The characteristic that differentiates the groups is the exogenous variable.
  • The variation is known as prospective causal-comparative research, since it starts with causes and investigates effects.
  • We can never know with certainty that the two groups were exactly equal before the difference occurred.

Three important aspects of Causal Comparative method are:

1- Gathering data on factors invariably present in cases where the given result occurs, and discarding those elements which are not universally present.

2- Gathering data on factors invariably present in cases where the given effect does not occur.

3- Comparing the two sets of data, or in effect subtracting one from the other, to get at the causes responsible for the occurrence or non-occurrence of the effect.
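These three steps amount to simple set operations on the factors observed in each kind of case. A toy sketch (the factor names are invented purely for illustration):

```python
# Factors observed in cases where the effect occurs vs. where it does not
# (factor names are invented for illustration).
cases_with_effect = [
    {"low_income", "large_class", "no_library"},
    {"low_income", "small_class", "no_library"},
]
cases_without_effect = [
    {"low_income", "large_class", "library"},
]

# Step 1: keep only factors invariably present when the effect occurs
always_present = set.intersection(*cases_with_effect)
# Step 2: factors present in cases where the effect is absent
present_without = set.intersection(*cases_without_effect)
# Step 3: "subtract" one set from the other to isolate candidate causes
candidate_causes = always_present - present_without
```

Here only the factor that is invariably present with the effect and absent without it survives the subtraction.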

Examples of variables investigated in Causal-Comparative Research

  • -Ability variables (achievement)
  • -Family-related variables (SES)
  • -Organismic variables (age, ethnicity, sex)
  • -Personality variables (self-concept)
  • -School related variables (type of school, size of school)

Causal Comparative Research Procedure

Experimental, quasi-experimental, and causal-comparative research methods are frequently studied together because they all try to show cause-and-effect relationships among two or more variables. To conduct cause-and-effect research, one variable (or set of variables) is considered the causal or independent variable and the other the effect or dependent variable.

Causal-comparative research attempts to attribute a change in the effect variable(s) when the causal variable(s) cannot be manipulated.

For example, if you wanted to study the effect of socioeconomic variables such as sex, race, ethnicity, or income on academic achievement, you might identify two existing groups of students: one group of high achievers and a second group of low achievers. You would then study the differences between the two groups as related to socioeconomic variables that already occurred or exist as the reason for the difference in achievement between the two groups. To establish a cause-effect relationship in this type of research, you have to build a strongly persuasive logical argument. Because it deals with variables that have already occurred or exist, causal-comparative research is also referred to as ex post facto research.

The most common statistical techniques used in causal-comparative research are analysis of variance and t-tests, wherein significant differences in the means of some measure (e.g., achievement) are compared between or among two or more groups.
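These two statistics can be computed from first principles. The sketch below uses invented achievement scores for two comparison groups; in practice a statistics package would also report p-values.

```python
import statistics
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for the difference between two group means."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / sqrt(va / len(a) + vb / len(b))

def oneway_f(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    all_scores = [x for g in groups for x in g]
    grand = statistics.mean(all_scores)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented achievement scores for high- and low-achiever comparison groups
high = [85, 90, 78, 92, 88]
low = [70, 65, 72, 68, 74]
t = welch_t(high, low)
f = oneway_f(high, low)
```

A large t (or F) relative to its reference distribution indicates the group means differ by more than chance would suggest.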

Data Sources

  • Raw scores such as test scores
  • Measures such as grade point averages
  • Judgments and other assessments made of the subjects involved

Research Tools

  • Standardized tests
  • Structured interviews

Procedural Considerations

  • The most important procedural consideration in doing causal-comparative research is to identify two or more groups which are demonstrably different in an educationally important way, such as high academic achievement versus low academic achievement. An attempt is then made to identify the cause which resulted in the differences in the effect (i.e., academic achievement). The cause (e.g., race, sex, income) has already had its effect and cannot be manipulated, changed, or altered. In selecting subjects for causal-comparative research, it is most important that they be as identical as possible except for the difference (i.e., the independent variable: race, sex, income) which may have caused the demonstrated effect (i.e., the dependent variable: academic achievement).
  • Hypotheses are generally used.
  • Statistics are extensively used in causal-comparative research and include measures of spread or dispersion, such as chi-square and analysis of variance, as well as measures of relationship, such as the Pearson product-moment coefficient, the Spearman rank-order coefficient, the phi correlation coefficient, and regression.
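For illustration, the relationship measures named above can be computed directly from their definitions. The study-hours, test-score, and 2x2 table data below are invented; a statistics package would normally be used instead.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order coefficient: Pearson applied to ranks (no ties)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(x), rank(y))

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 frequency table [[a, b], [c, d]]."""
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Invented data: study hours vs. test scores for five students
hours = [2, 4, 6, 8, 10]
scores = [55, 60, 72, 78, 90]
r = pearson(hours, scores)
rho = spearman(hours, scores)
# Invented 2x2 table: pass/fail counts in two comparison groups
coef = phi(20, 10, 5, 25)
```

Pearson measures linear association on raw scores, Spearman on ranks, and phi applies to two dichotomous variables.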


Report Presentation

  • Reports tend to rely on both quantitative and qualitative presentations.
  • Statistical data are almost always provided and support the overall cause-effect argument.

CONDUCTING A CAUSAL-COMPARATIVE STUDY

  • Although the independent variable is not manipulated, there are control procedures that can be exercised to improve interpretation of results.

Design & Procedure

The researcher selects two groups of participants, sometimes called the experimental and control groups but more accurately referred to as comparison groups.

Groups may differ in two ways:

  • One group possesses a characteristic that the other does not.
  • Each group has the characteristic, but to differing degrees or amounts.

Definition and selection of the comparison groups are very important parts of the causal-comparative procedure.

  • The independent variable differentiating the groups must be clearly and operationally defined, since each group represents a different population.
  • In causal-comparative research the random sample is selected from two already existing populations, not from a single population as in experimental research.
  • As in experimental studies, the goal is to have groups that are as similar as possible on all relevant variables except the independent variable.

The more similar the two groups are on such variables, the more homogeneous they are on everything but the independent variable.

CONTROL PROCEDURES

Lack of randomization, manipulation, and control are all sources of weakness in a causal-comparative study.

Random assignment is probably the single best way to try to ensure equality of the groups.

A problem is the possibility that the groups are different on some other important variable (e.g., gender, experience, or age) besides the identified independent variable.

  • Matching is another control technique.
  • If a researcher has identified a variable likely to influence performance on the dependent variable, the researcher may control for that variable by pair-wise matching of participants.
  • For each participant in one group, the researcher finds a participant in the other group with the same or a very similar score on the control variable.
  • If a participant in either group does not have a suitable match, the participant is eliminated from the study.
  • The resulting matched groups are identical or very similar with respect to the identified extraneous variable.
  • The problem becomes serious when the researcher attempts to simultaneously match participants on two or more variables.
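The pair-wise matching procedure just described can be sketched in a few lines. The participant IDs, IQ scores, and tolerance below are invented for illustration.

```python
def match_pairs(group_a, group_b, tolerance=3):
    """Pair each participant in group A with the closest unmatched participant
    in group B on the control variable; drop anyone without a close match."""
    available = list(group_b)
    pairs, dropped = [], []
    for pid, score in group_a:
        best = min(available, key=lambda p: abs(p[1] - score), default=None)
        if best is not None and abs(best[1] - score) <= tolerance:
            pairs.append((pid, best[0]))
            available.remove(best)  # each participant can be matched only once
        else:
            dropped.append(pid)  # no suitable match: eliminated from the study
    return pairs, dropped

# Invented participants: (id, IQ score on the control variable)
group_a = [("a1", 100), ("a2", 118), ("a3", 95)]
group_b = [("b1", 101), ("b2", 96), ("b3", 130)]
pairs, dropped = match_pairs(group_a, group_b)
```

Note how the participant without a close counterpart is dropped, which is exactly how matching shrinks the sample.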

COMPARING HOMOGENEOUS GROUPS OR SUBGROUPS

  • To control extraneous variables, compare groups that are homogeneous with respect to the extraneous variable.
  • This procedure may lower the number of participants and limit the generalizability of the findings.
  • A similar but more satisfactory approach is to form subgroups within each group that represent all levels of the control variable.
  • For example, each group might be divided into high, average, and low IQ subgroups.
  • The existence of comparable subgroups in each group controls for IQ.
  • In addition to controlling for the variable, this approach also permits the researcher to determine whether the independent variable affects the dependent variable differently at different levels of the control variable.
  • The best approach is to build the control variable right into the research design and analyze the results with a statistical technique called factorial analysis of variance.
  • A factorial analysis allows the researcher to determine the effect of the independent variable and the control variable on the dependent variable both separately and in combination.
  • It permits determination of whether there is an interaction between the independent variable and the control variable, such that the independent variable operates differently at different levels of the control variable.
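As a toy illustration of the interaction idea (with invented cell data, not a full factorial ANOVA), compare the simple effects of the independent variable at each level of the control variable:

```python
import statistics

# Invented 2x2 factorial cell data: teaching method (independent variable)
# crossed with IQ level (control variable); values are achievement scores.
cells = {
    ("method_x", "high_iq"): [88, 92, 90],
    ("method_x", "low_iq"):  [70, 74, 72],
    ("method_y", "high_iq"): [80, 84, 82],
    ("method_y", "low_iq"):  [78, 82, 80],
}
mean = {k: statistics.mean(v) for k, v in cells.items()}

# Simple effect of method at each IQ level
effect_high = mean[("method_x", "high_iq")] - mean[("method_y", "high_iq")]
effect_low = mean[("method_x", "low_iq")] - mean[("method_y", "low_iq")]

# A nonzero difference between simple effects signals an interaction:
# the method operates differently at different levels of the control variable.
interaction = effect_high - effect_low
```

Here method X helps high-IQ students but hurts low-IQ students, so averaging over IQ would hide the effect; the factorial design makes it visible.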

ANALYSIS OF COVARIANCE

  • Analysis of covariance is used to adjust for initial group differences on variables used in causal-comparative and experimental research studies.
  • It adjusts scores on a dependent variable for initial differences on some other variable related to performance on the dependent variable.
  • Suppose we were doing a study to compare two methods, X and Y, of teaching fifth graders to solve math problems, and the group taught by method Y began with an initial advantage on a pretest.
  • Covariance analysis statistically adjusts the scores of method Y to remove the initial advantage so that the results at the end of the study can be fairly compared as if the two groups started equally.
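A minimal sketch of the adjustment idea, assuming a common within-group regression slope; the pretest/posttest numbers are invented. Each group's posttest mean is shifted along the pooled slope to the grand pretest mean:

```python
import statistics

def adjusted_means(groups):
    """ANCOVA-style adjustment: shift each group's outcome mean along the
    pooled within-group regression slope to the grand covariate mean."""
    # Pooled within-group slope of outcome y on covariate x
    sxy = sum((x - statistics.mean(xs)) * (y - statistics.mean(ys))
              for xs, ys in groups for x, y in zip(xs, ys))
    sxx = sum((x - statistics.mean(xs)) ** 2 for xs, ys in groups for x in xs)
    b = sxy / sxx
    grand_x = statistics.mean([x for xs, _ in groups for x in xs])
    return [statistics.mean(ys) - b * (statistics.mean(xs) - grand_x)
            for xs, ys in groups]

# Invented (pretest, posttest) scores: method Y starts with a pretest advantage
method_x = ([50, 60, 70], [65, 75, 85])
method_y = ([70, 80, 90], [80, 90, 100])
adj_x, adj_y = adjusted_means([method_x, method_y])
```

Although method Y has the higher raw posttest mean (90 vs. 75), after removing its pretest advantage the adjusted means favor method X.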

DATA ANALYSIS AND INTERPRETATION

  • Analysis of data involves a variety of descriptive and inferential statistics.

The most commonly used descriptive statistics are

(a)  the mean, which indicates the average performance of a group on some measure of a variable, and

(b)  the standard deviation, which indicates how spread out a set of scores is around the mean, that is, whether the scores are relatively homogeneous or heterogeneous around the mean.

The most commonly used inferential statistics are

(a) the t test, used to determine whether the means of two groups are statistically different from one another;

(b) analysis of variance, used to determine if there is a significant difference among the means of three or more groups; and

(c)  chi square, used to compare group frequencies, or to see if an event occurs more frequently in one group than another.
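The chi-square comparison of group frequencies can be computed directly from observed and expected counts. The pass/fail table below is invented for illustration.

```python
def chi_square(observed):
    """Chi-square statistic for an r x c table of group frequencies."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / total  # expected count under independence
            stat += (o - e) ** 2 / e
    return stat

# Invented counts: pass/fail frequencies in two comparison groups
table = [[30, 10],   # group 1
         [20, 20]]   # group 2
stat = chi_square(table)
```

A larger statistic means the observed frequencies depart more from what independence between group and outcome would predict.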

Lack of randomization, manipulation, and control factors make it difficult to establish cause-effect relationships with any degree of confidence.

  • However, reverse causality is also plausible and should be investigated.
  • It is equally plausible that achievement affects self-concept as it is that self-concept affects achievement.

One way to determine the correct order of causality (which variable caused which) is to determine which one occurred first.

  • The possibility of a third, common explanation is plausible in many causal-comparative situations.
  • One way to control for a potential common cause is to equate groups on that variable.
  • To investigate or control for alternative hypotheses, the researcher must be aware of them and must present evidence that they are not in fact the true explanation for the behavioral differences being investigated.

Types of Causal-Comparative Research Designs

There are two types of causal-comparative research designs :

Retrospective causal-comparative research

Retrospective causal-comparative research requires that a researcher begin investigating a particular question when the effects have already occurred; the researcher attempts to determine whether one variable may have influenced another variable.

Prospective causal-comparative research

Prospective causal-comparative research occurs when a researcher initiates a study beginning with the causes and sets out to investigate the effects of a condition. By far, retrospective causal-comparative research designs are much more common than prospective causal-comparative designs.

Basic approach of causal- comparative research

The researcher observes that two groups differ on some variable (e.g., teaching style) and then attempts to find the reason for (or the results of) this difference.

1- Causal-comparative studies attempt to identify cause-effect relationships.

2- Causal-comparative studies typically involve two (or more) groups and one independent variable.

3- Causal-comparative studies involve comparison.

4- The basic causal-comparative approach involves starting with an effect and seeking possible causes (retrospective).

5- Retrospective causal-comparative studies are far more common in educational research.

Steps for conducting a Causal-comparative research

STEP ONE- Select a topic

To determine the problem, it is necessary for the researcher to focus on the problem that he or she needs to study. Researchers not only need to find a problem; they also need to determine, analyze, and define the problem with which they will be dealing.

Topic studies with Causal-comparative designs typically catch a researcher’s attention based on experiences or situations that have occurred in the real world.

The first step in formulating a problem in causal-comparative research is usually to identify and define the particular phenomena of interest, and then to consider possible causes for, or consequences of, these phenomena.

There are no limits to the kinds of instruments that can be used in a causal-comparative study.

The basic causal-comparative design involves selecting two groups that differ on a particular variable of interest and then comparing them on another variable or variables.

STEP TWO -Review of literature

Before trying to predict causal relationships, the researcher needs to study all the related or similar literature and relevant studies, which may help in further analysis, prediction, and conclusions about the causal relationship between the variables under study.

Reviewing published literature on a specific topic of interest is especially important when conducting causal-comparative research, as such a review can assist a researcher in determining which extraneous variables may exist in the situations that they are considering studying.

STEP THREE- Develop a Research hypothesis

The third step of the research is to propose the possible solutions or alternatives that might have led to the effect. Researchers need to list the assumptions which will be the basis of the hypothesis and procedure of the research. Hypotheses developed for causal-comparative research identify the independent and dependent variables; a causal-comparative hypothesis should describe the expected impact of the independent variable on the dependent variable.

STEP FOUR-Select participants

The important thing in selecting a sample for a causal-comparative study is to define carefully the characteristic to be studied and then to select groups that differ in this characteristic.

In causal-comparative research, participants are already organized in groups. The researcher selects two groups of participants, sometimes called the experimental and control groups but more accurately referred to as comparison groups, because one group does not possess a characteristic or experience possessed by the second group, or the two groups differ in the amount of the characteristic that they share. The independent variable differentiating the groups must be clearly and operationally defined, since each group represents a different population.

STEP FIVE- Select instruments to measure variables and collect data

As with all types of quantitative research, causal-comparative research requires that the researcher select instruments that are reliable and allow valid conclusions to be drawn. The researcher also needs to select the scale or construct the instrument for collecting the required information. After a reliable and valid instrument has been selected, data for the study can be collected.

Causal Comparative: Data Collection

■ You select two groups that differ on the (exogenous) variable of interest.

■ Next, compare the two groups by looking at an endogenous variable that you think might be influenced by the exogenous variable.

■ Define clearly and operationally the exogenous variable.

■ Be sure the groups are similar on all other important variables.

Causal Comparative: Equating groups

■ Use subject matching

■ Use change scores; i.e., each subject as own control

■ Compare homogeneous groups

■ Use analysis of covariance
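The subject-matching technique in the list above can be sketched in a few lines of code. This is a minimal illustration on invented data (the ages, IDs, and one-year tolerance are assumptions, not from the source): each subject is paired with the closest available counterpart on the extraneous variable, and unmatched subjects are dropped from the analysis.

```python
# Hypothetical comparison groups, each subject with a score on the
# extraneous variable to be controlled (here, age).
group_a = [{"id": 1, "age": 10}, {"id": 2, "age": 12}, {"id": 3, "age": 15}]
group_b = [{"id": 4, "age": 11}, {"id": 5, "age": 14}, {"id": 6, "age": 15},
           {"id": 7, "age": 20}]

def match_subjects(a, b, var, tolerance=1):
    """Pair each subject in a with the subject in b whose value on `var`
    is closest, keeping only pairs within `tolerance`; unmatched
    subjects are excluded from the matched sample."""
    pairs, remaining = [], list(b)
    for subj in a:
        best = min(remaining, key=lambda s: abs(s[var] - subj[var]),
                   default=None)
        if best is not None and abs(best[var] - subj[var]) <= tolerance:
            pairs.append((subj["id"], best["id"]))
            remaining.remove(best)
    return pairs

print(match_subjects(group_a, group_b, "age"))
```

The cost of matching, visible even in this toy example, is sample loss: subjects with no close counterpart in the other group are discarded.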

STEP SIX- Analyze and interpret results

Finally, the researcher needs to analyse, evaluate and interpret the information collected. It is on the basis of this step that the researcher selects the most plausible of the possible causes that might have led the effect to occur.

Typically, in causal-comparative studies, data are reported as a mean or frequency for each group. Inferential statistics are then used to determine whether the means for the groups differ significantly from each other. Since causal-comparative research cannot definitively determine that one variable has caused something to occur, researchers should instead report the findings of causal-comparative studies as a possible effect or possible cause of an event or occurrence.
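That reporting pattern can be sketched as code: compute each group's mean, then a pooled-variance t statistic for the difference. The achievement scores below are invented for illustration; in practice a library routine such as scipy.stats.ttest_ind would do this in one call.

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical achievement scores for two pre-existing groups.
preschool    = [78, 85, 90, 72, 88, 81]
no_preschool = [70, 75, 82, 68, 74, 79]

def pooled_t(x, y):
    """Independent-samples t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / nx + 1 / ny))

print(f"means: {mean(preschool):.1f} vs {mean(no_preschool):.1f}")
print(f"t = {pooled_t(preschool, no_preschool):.2f}")
```

The statistic is then compared against the critical t value for the chosen significance level; a significant difference is reported only as a possible effect, per the caution above.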

Similarly, Jacobs et al. (1992: 81) also proposed that the following steps are involved in conducting an ex-post facto-research:

First Step: The first step should be to state the problem.

Second Step: Following this is the determination of the group to be investigated. Two groups of the population that differ with regard to the variable, should be selected in a proportional manner for the test sample.

Third step: The next step refers to the process of collection of data. Techniques like questionnaires, interviews, literature search etc. are used to collect the relevant information.

Fourth Step: The last step is the interpretation of the findings and results. Based on the conclusions, the hypothesis is either accepted or rejected. It must be remembered that even though ex-post facto research is a valid method for collecting information about an event that has already occurred, this type of research has shortcomings, and only partial control is possible.

Validity of the research

The researcher needs to validate the significance of their research. They need to be cautious regarding the extent to which their findings would be valid and significant and helpful in interpreting and drawing inferences from the obtained results.

Threats to Internal Validity in Causal-Comparative Research

Two weaknesses in causal-comparative research are lack of randomization and inability to manipulate an independent variable.

A major threat to the internal validity of a causal-comparative study is the possibility of a subject selection bias. The chief procedures that a researcher can use to reduce this threat include matching subjects on a related variable or creating homogeneous subgroups, and the technique of statistical matching.

Other threats to internal validity in causal-comparative studies include location, instrumentation, and loss of subjects. In addition, type 3 studies are subject to implementation, history, maturation, attitude of subjects, regression, and testing threats.

In short, the threats to internal validity in causal-comparative research can be summarised as:

  • Subject characteristics: the possibility exists that the groups are not equivalent on one or more important variables
  • One way to control for an extraneous variable is to match subjects from the comparison groups on that variable
  • Creating or finding homogeneous subgroups is another way to control for an extraneous variable
  • A third way to control for an extraneous variable is to use the technique of statistical matching
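Statistical matching is essentially covariance adjustment: each group's outcome mean is corrected as if both groups had scored equally on the extraneous variable. Below is a minimal sketch on invented data, using the ANCOVA adjusted-means formula with a pooled within-group slope; the group values are assumptions for illustration only.

```python
from statistics import mean

def adjusted_means(groups):
    """groups: list of (x_values, y_values) per comparison group, where
    x is the extraneous variable and y the outcome. Returns each group's
    outcome mean adjusted to the grand mean of x."""
    grand_x = mean(x for xs, _ in groups for x in xs)
    # pooled within-group regression slope of y on x
    num = sum(sum((x - mean(xs)) * (y - mean(ys)) for x, y in zip(xs, ys))
              for xs, ys in groups)
    den = sum(sum((x - mean(xs)) ** 2 for x in xs) for xs, _ in groups)
    b = num / den
    return [mean(ys) - b * (mean(xs) - grand_x) for xs, ys in groups]

g1 = ([1, 2, 3], [4, 5, 6])    # lower on the extraneous variable x
g2 = ([4, 5, 6], [8, 9, 10])   # higher on both x and the outcome y
print(adjusted_means([g1, g2]))
```

In this toy case the raw gap between group means is 4, but after adjusting for x it shrinks to 1, illustrating how much of an apparent group difference an extraneous variable can account for.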

Other Threats

  • Data collector bias
  • Instrument decay
  • Instrumentation
  • Loss of subjects
  • Pre-test/treatment interaction effect

Evaluating Threats to Internal Validity in Causal-Comparative Studies

Involves three sets of steps as shown below:

– Step 1 : What specific factors are known to affect the variable on which groups are being compared, or may logically be expected to affect this variable?

– Step 2 : What is the likelihood of the comparison groups differing on each of these factors?

– Step 3 : Evaluate the threats on the basis of how likely they are to have an effect and plan to control for them.

Data Analysis

1- In a causal-comparative study, the first step is to construct frequency polygons.

2- Means and standard deviations are usually calculated if the variables involved are quantitative.

3- The most commonly used inference test is a t test for differences between means.

Analysis of data also involves a variety of descriptive and inferential statistics.

The most commonly used descriptive statistics are:

The mean, which indicates the average performance of a group on some measure of a variable.

The standard deviation, which indicates how spread out a set of scores is around the mean, that is, whether the scores are relatively homogeneous or heterogeneous around the mean.

The most commonly used inferential statistics are:

The t test, used to determine whether the means of two groups are statistically different from one another.

Analysis of variance, used to determine whether there is a significant difference among the means of three or more groups.

Chi square, used to compare group frequencies, that is, to see whether an event occurs more frequently in one group than another.
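The latter two tests can be sketched in plain Python on hypothetical data; in practice scipy.stats.f_oneway and scipy.stats.chi2_contingency do this work, and the resulting statistic is compared against the relevant critical value.

```python
from statistics import mean

def anova_f(*groups):
    """One-way ANOVA F statistic for three or more groups."""
    grand = mean(x for g in groups for x in g)
    k, n = len(groups), sum(len(g) for g in groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def chi_square(observed):
    """Chi-square statistic for a 2x2 frequency table [[a, b], [c, d]]."""
    row = [sum(r) for r in observed]
    col = [sum(c) for c in zip(*observed)]
    total = sum(row)
    return sum((observed[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(2) for j in range(2))

print(anova_f([3, 4, 5], [6, 7, 8], [9, 10, 11]))  # three invented groups
print(chi_square([[30, 10], [20, 40]]))            # invented 2x2 frequencies
```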

Limitations of use

1- There must be a pre-existing independent variable, such as years of study, gender or age.

2- There must be an active variable, a variable which the researcher can manipulate, such as the length and number of study sessions.

3- Lack of randomization, manipulation and control makes it difficult to establish cause-effect relationships with any degree of confidence.

Causal Comparative: Conclusions

■ Researchers often infer cause and effect relationships based on such studies.

■ Conditions necessary, but not necessarily sufficient, to infer a causal relationship:

• A statistical relationship exists that is unlikely attributable to chance variation

• You have reason to believe the supposed exogenous variable preceded the endogenous variable.

• You can, with some degree of certainty, rule out other possible explanations.

Comparison of Causal-comparative method and Experimental method

Neither method provides researchers with true experimental data.

  • Causal-comparative studies help to identify variables worthy of experimental investigation.
  • Causal-comparative and experimental research both attempt to establish cause-effect relationships, and both involve comparisons.
  • Ethical considerations often prevent manipulation of a variable that could be manipulated but should not be: if the nature of the independent variable is such that it may cause physical or mental harm to participants, the ethics of research dictate that it should not be manipulated.
  • In experimental research the independent variable is manipulated by the researcher, whereas in causal-comparative research the groups are already formed and already differ on the independent variable.
  • Experimental studies are costly in more ways than one and should only be conducted when there is good reason to believe the effort will be fruitful.
  • In an experimental study the researcher selects a random sample and then randomly divides the sample into two or more groups; groups are assigned to the treatments and the study is carried out.
  • Independent variables in causal-comparative research cannot be manipulated, should not be manipulated, or simply are not manipulated but could be.
  • Individuals are not randomly assigned to treatment groups because they were already in groups before the research began.
  • It is not possible to manipulate organismic variables such as age or gender.
  • For example, students with high anxiety could be compared to students with low anxiety on attention span, or the difference in achievement between first graders who attended preschool and first graders who did not could be examined.

Despite many key advantages, causal-comparative research does have some serious limitations that should be kept in mind:

Since both the independent and dependent variables would have already occurred, it would not be possible to determine which came first. It is also possible that some third variable, such as parental attitude, might be the main influence on both self-concept and achievement.

  • Causal-comparative studies do permit investigation of variables that cannot or should not be investigated experimentally, facilitate decision making, provide guidance for experimental studies, and are less costly on all dimensions.
  • Caution must be applied in interpreting results.
  • Caution must be exercised in attributing cause-effect relationships based on causal-comparative research.
  • In causal-comparative research the researcher cannot assign participants to treatment groups because they are already in those groups.
  • Only in experimental research does the researcher randomly assign participants to treatment groups.
  • Only in experimental research is the degree of control sufficient to establish cause-effect relationships.
  • Since the independent variable has already occurred, the same kinds of controls cannot be exercised as in an experimental study.
  • The alleged cause of an observed effect may in fact be the effect itself, or there may be a third variable.
  • Such a conclusion would not be warranted because it is not possible to establish whether self-concept precedes achievement or vice versa.

Difference and Similarities in between Causal and Correlational Research

Causal-comparative research involves comparing (thus the “comparative” aspect) two groups in order to explain existing differences between them on some variable or variables of interest. Correlational research, on the other hand, does not look at differences between groups. Rather, it looks for relationships within a single group. This is a big difference: in correlational research one is only entitled to conclude that a relationship of some sort exists, not that variable A caused some variation in variable B. In sum, causal-comparative research does allow one to make reasonable inferences about causation; correlational research does not.

Although some consider causal and correlational research as similar in nature, there exists a clear difference between these two types of research. Causal research is aimed at identifying the causal relationships among variables. Correlational research, on the other hand, is aimed at identifying whether an association exists or not.

Causal-comparative and correlational designs are similar in the following ways:

  • Neither is experimental
  • Neither involves manipulation of a treatment variable
  • Relationships are studied in both
  • Correlational:  focus on magnitude and direction of relationship
  • Causal-Comparative:  focus on difference between two groups
  • The basic similarity between causal-comparative and correlational studies is that both seek to explore relationships among variables.
  • When relationships are identified through causal-comparative research (or in correlational research), they often are studied at a later time by means of experimental research.
  • Both lack manipulation
  • Both require caution in interpreting results
  • Causation is difficult to infer
  • Both can support subsequent experimental research

The key difference between causal and correlational research is that while causal research can predict causality, correlational research cannot. Through this article, let us examine the differences between causal and correlational research further.

Difference in meaning

Correlational research attempts to identify associations among variables. The key difference between correlational research and causal research is that correlational research cannot predict causality, although it can identify associations. Another difference between the two research methods is that in correlational research, the researcher does not attempt to manipulate the variables; he or she merely observes.

  • In terms of objective: Causal research aims at identifying causality among variables. This highlights that it allows the researcher to find the cause of a certain variable.
  • In terms of prediction: In causal research, the researcher usually measures the impact each variable has before predicting the causality. It is very important to pay attention to the variables because, in most cases, lack of control over the variables can lead to false predictions. This is why most researchers manipulate the research environment. In the social sciences especially, it is very difficult to conduct causal research because the environment can contain many variables that influence the causality and can go unnoticed. Now let us move on to correlational research.
  • In terms of definitions: Causal research aims at identifying causality among variables. Correlational research attempts to identify associations among variables.
  • In terms of nature: In causal research, the researcher identifies cause and effect. In correlational research, the researcher identifies an association.
  • In terms of manipulation: In causal research, the researcher manipulates the environment. In correlational research, the researcher does not manipulate the environment.
  • In terms of causality: Causal research can identify causality. Correlational research cannot identify causality among variables.
  • In terms of subjects: In correlational research, subjects are not assigned to groups; usually there is only one group of subjects, and subjects are randomly selected for participation. In causal research, subjects are not randomly assigned to control and experimental groups because it is logistically impossible; there are control and experimental groups in this type of design, just no random assignment. If possible, subjects should be randomly selected for participation.
  • In terms of variables: An important difference between causal-comparative and correlational research is that causal-comparative studies involve two or more groups and one independent variable, while correlational studies involve two or more variables and one group. In correlational research, two variables (X and Y) are measured and the strength and direction of the relationship is determined. In causal research, subjects are in pre-formed groups but, unlike correlational and differential research, an independent variable is manipulated and the groups are measured and compared on a dependent variable.
  • In terms of statistics: Correlational research uses the Pearson product-moment correlation (Pearson’s r). Causal research uses the chi-square test, the t test, and ANOVA.
  • In terms of conclusions: In correlational research, variable X co-varies with variable Y (i.e., there is a relationship between the two variables); cause and effect cannot be proven. In causal research, while we may be able to draw some causal conclusions, we cannot do so with as much confidence as if we had used a true experimental design.
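To make the statistical contrast concrete, here is Pearson's r computed by hand on invented data (in practice scipy.stats.pearsonr is the usual call). Values near +1 or -1 indicate a strong relationship within the single group, but say nothing about which variable caused which.

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired variables."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

# Hypothetical single-group data: two variables measured on the same subjects.
hours_studied = [1, 2, 3, 4, 5]
exam_score    = [52, 60, 63, 71, 74]
print(pearson_r(hours_studied, exam_score))
```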

Strengths and Limitations of Causal-comparative Research

No research method is perfect in itself. All methods have their strengths as well as weaknesses, and the same applies to ex-post facto research. The strength of ex-post facto research is that it is considered a very relevant method in those behavioural studies where the variables cannot be manipulated or altered.

Causal-Comparative Research has its limitations which should be recognized:

1. The independent variables cannot be manipulated. Subjects cannot be randomly, or otherwise, assigned to treatment groups.

2. Causes are often multiple and complex rather than single and simple.

For these reasons, scientists are reluctant to use the expression cause and effect in studies in which the variables have not been carefully manipulated.

They prefer to observe that when variable A appears, variable B is consistently associated, possibly for reasons not completely understood or explained.

Strengths of Causal-comparative Research

Causal-comparative research is less time consuming as well as economical. It gives the researcher a chance to analyse on the basis of his or her own judgment and then come up with the best possible conclusion. The weaknesses and limitations of ex-post facto research are: as discussed earlier, in causal-comparative research the researcher cannot manipulate the independent variables; the researcher cannot randomly assign the subjects to different groups; and the researcher may not be able to provide a reasonable explanation for the relationship between the independent and dependent variables under study.

While predicting the causal relationships between the variables, the researcher may fall prey to the bias called the post hoc fallacy: the human tendency, when two factors go together, to conclude that one is the cause and the other the effect. Because delinquency and parenthood go together, we may conclude that delinquency is the effect and parenthood is the cause, whereas in reality the peer group to which the child belongs may be the actual reason.

It can therefore be concluded that ex-post facto research holds a very good position in the field of behavioural sciences. It is the only method which is retrospective in nature; that is, with its help one can trace the history in order to analyse the cause/reason/action behind an effect/behaviour/event that has already occurred. Although it is a very significant method, it has certain limitations as well. The researcher cannot manipulate the cause in order to see the alterations in its effect, which again raises a question about the validity of the research findings. Equally, the researcher cannot randomly assign the subjects into groups and has no control over the variables. Yet it is one of the most useful methods, as it has several applications in the field.

Quantitative causal-comparative research: definition, types, methodology, methods, characteristics

On this Page

  • Introduction to Quantitative causal-comparative research
  • Assumptions of quantitative causal-comparative research

Types of quantitative causal-comparative research

Quantitative causal-comparative research methodology.

  • Questions quantitative causal-comparative research Methodology tries to Answer
  • Quantitative Causal-comparative research Methodology-Diagrammatic Approach

Quantitative causal-comparative research methods

  • Characteristics of quantitative causal-comparative research
  • Advantages of quantitative causal-comparative research
  • Disadvantages of quantitative causal-comparative research

How do we define the term quantitative causal-comparative research? Is it the same as quantitative descriptive research or quantitative correlational research? Look at this…

1.1 Definition:

Quantitative causal-comparative research is a non-experimental type of research which involves investigating the cause of differences in behavior between two groups which are similar or the same. The concern of the researcher is to find out why a certain condition to which one group of people has been exposed produces different results when a similar group is subjected to it at another time. For example, the investigator may wish to ask why youth in school who went through corporal punishment in the past were highly disciplined in all they did in society, compared with youth in current times who are subjected to the same corporal punishment in school.

You see, this type of research has an element of comparison, and again there is no manipulation of the variable under study. The researcher is looking at youth in school in both cases, and the punishment is of a corporal nature. Causal-comparative research designs are also referred to as ex-post facto research designs; that is, research whereby the researcher looks at the “after-the-fact” situation or scenario. The reason for this is that the study begins by observing a difference that already exists between groups of otherwise similar people.

In this kind of research, the investigator selects two groups for comparison to establish the cause of the difference already noticed. Now, you should not confuse quantitative causal-comparative research with experimental research, where the cause-effect link is pronounced. This is because in causal-comparative research the identified cause may not be the true cause, and again none of the variables being studied is manipulated. In experimental research, as we shall see, a true cause can be established because the independent variables are manipulated.

1.2 Assumptions of quantitative causal-comparative research

i). Independent variable is always assumed to have already occurred.

ii). The main concern of the researcher is the changes occurring on the dependent variable.

iii). There is a difference between two similar groups.

iv). The groups are similar by all standards.

v). The independent variable is the grouping variable, for it is the one associated with causing the two groups to look different.

vi). Independent variable naturally pre-exists. i.e., the condition had already occurred (the fact).

vii). Independent variable cannot be manipulated.

NB: In our current discussion, we are adding the word “quantitative” so as to simply highlight that the variables are gauged using numerical terms.

There are two main types of quantitative causal-comparative research as explained below;

1.Retrospective Causal Comparative Research

This is research whereby the researcher forms the research problem or research question after the effect caused by the independent variable has occurred. The researcher attempts to find out whether or not a variable influences another variable.

2.Prospective Causal Comparative Research

This is research whereby the researcher proposes the research problem or research question before the effect caused by the independent variable has occurred. The researcher attempts to find out whether or not a variable CAN influence another variable. This is a rare practice, for it is not what is normally expected.

Example of Quantitative Causal-Comparative Research

The following is an illustration of how causal-comparative research works. First, let us remind ourselves what kind of research this is: research in which the researcher endeavors to trace a cause-effect relationship between the independent and dependent variables in a situation where the independent variable (commonly referred to as a self-selecting independent variable) cannot be manipulated because it has already occurred.

So, suppose the researcher wants to investigate the high chances of contracting HIV/AIDS among the youth in a certain city. The researcher will identify two groups:

Group one: youth who have engaged in sex work, and

Group two: those who have not engaged in sex work; maybe they are in school.

The researcher tests the HIV/AIDS status of both groups. If group one, which has engaged in sex work in the city, has more members with the infection than those in school, then it can be concluded that sex work is a possible cause of the high rate of HIV/AIDS infection in that city.
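On hypothetical counts, the comparison just described amounts to a two-proportion z test. The group sizes and infection counts below are invented for illustration; the key point is that the researcher only compares rates in two pre-existing groups rather than assigning anyone to a group.

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions,
    using the pooled proportion for the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    return (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

# Group 1: 18 of 60 test positive; Group 2: 5 of 60 test positive.
z = two_prop_z(18, 60, 5, 60)
print(f"z = {z:.2f}")  # compare against the 1.96 critical value at alpha = .05
```

A z statistic beyond the critical value indicates the difference in rates is unlikely to be chance, but, as the article stresses, it should still be reported only as a possible cause.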

Is the quantitative causal-comparative research method for formulating a research problem the same as the quantitative causal-comparative research method for data analysis? The answer is NO. Look at the definitional differences in our explanation below.

3.1 Definition

Quantitative causal-comparative research methodology is the logical progression, or step-by-step design, of how to solve a research problem through gathering the relevant information. It entails selecting a logical procedure for the topic to be studied: the research problem, how specific objectives will be identified or formulated, the identification of knowledge gaps to be filled, the research hypotheses to construct, the methods used to identify the population and sample size, the nature of the data to be collected and how it will be analyzed, the presentation and interpretation of the data, and the reporting of the research findings.

The aforementioned description of quantitative causal-comparative research methodology is in tandem with Kothari’s (1984) proposal that research methodology is the foundation behind the methods we use in the context of a research study. It provides the logic for why one uses a particular method or technique at a particular stage in the research process and not others, so that the research output is accomplished as expected.

3.2 Questions quantitative causal-comparative research Methodology tries to Answer

The quantitative causal-comparative research question focuses on the influence of the predictor variable on the dependent variable, whereby the predictor variable is not manipulated; in other words, it is invariant. For example, “Does height determine an athlete’s sports performance?”

This is an example of a quantitative causal-comparative research question in which the researcher is primarily concerned with the difference between two groups, seeking to establish a cause-effect association. So, this kind of study tries to answer “what” questions.

The following matrix depicts the link between quantitative causal-comparative research and the type of research methodology adopted, then a clarification of the rational approach linked with this type, and in the last column the research method(s) used in formulating the research problem. Remember, these methods are specifically for quantitative causal-comparative research, which is a subset of quantitative research.

[Table 1.1: research methodology, rational approach and research methods for quantitative causal-comparative research]

3.3 Quantitative Causal-comparative research Methodology-Diagrammatic Approach

The following diagram summarizes the logical roadmap to be followed in quantitative causal-comparative research methodology, where quantitative or numerical methods are used to measure the study variables.

[Diagram: summary roadmap of quantitative causal-comparative research methodology]

3.4 Logical Steps; Basic Research Methodology

The following logical steps describe the quantitative causal-comparative research methodology. Steps one to six represent a logical way of dealing systematically with the subject matter. Remember that in this approach, the researcher is curious only to establish why similar groups differ from one another.

Step 1: Identify the Topic of the study

In this case, which entails a causal-comparative viewpoint, the researcher formulates the research problem with the idea in mind that there are causes of the difference seen in the groups of focus, and therefore approaches the matter in a logical manner, for the aim of the study is to investigate the cause of differences between two or more similar groups. So, the choices of groups made must be plausible.

Once the researcher identifies probable causes (independent variables), they are used to establish the research question, hypothesis or research problem. Therefore, the research question or hypothesis will be framed in a way that portrays the difference being investigated. For example, a research question can be framed as follows:

“How does Covid-19 vaccination among those aged over 65, versus non-vaccination, affect immunity to coronavirus attack in Dubai?”

“Does age have an effect on heart health complications such as breathing problems?”

Step 2: Literature Review

The researcher has to undertake a literature review, which is the process of revisiting past literature related to similar studies addressing the same variables. This approach helps the researcher identify possible causes or consequences of a particular difference. The review aids in the identification of the contextual, methodological and conceptual or theoretical research gaps, which are the impetus for the current quantitative causal-comparative research.

Step 3: Selection of Research Participants

Unlike other non-experimental research, the researcher needs to be extra careful when selecting the groups to participate in this type of research. The investigator needs to take note of the following aspects:

-Need for proper definition of the independent variable, for it is used as the grouping variable.

-Need to select groups with significant differences which are measurable.

-Need to select homogenous groups.

-Need to control extraneous variables that can interfere with the set assumption of quantitative causal-comparative research.

Step 4: Data Collection

The process of data collection at this stage is flexible, for there is no single way that must be strictly followed to collect data. As long as sufficient and relevant data are collected, any instrument can be used to achieve this goal.

Step 5: Data Analysis

Data analysis for quantitative causal-comparative research is based on both descriptive and inferential analysis models. This is then followed by the statistical comparison of two or more groups on some quantitative benchmark.

Step 6: Research Findings and conclusions

In this step, the researcher has performed the various data analyses and the outcome is ready for reporting and drawing conclusions. At this stage, the researcher has established whether the identified independent variables are the causes of the differences between the two groups. It is at this point that he or she concludes whether causal variables are present, although this is not easy to do.

4.1 Definition

Research methods are the procedures applied in all phases of research development. They are the apparatus used to guarantee that the end results of a research task are accomplished. These techniques vary from one stage of the research process to another. They are further classified into two categories, namely:

a) Pre-Data analysis methods

b) Data Analysis related methods

As per Table 1.1 in this article, the quantitative causal-comparative research methods indicated there, namely survey, systematic observation, and secondary research, are used to formulate the research problem and fall under the pre-data-analysis category. However, in this discussion of quantitative causal-comparative research methods, you should appreciate that there is no single prescribed way of collecting data, as long as the data collected are authentic enough for analysis.

  • Explains why apparently similar groups differ.
  • Ex post facto: the cause has already occurred before the researcher arrives.
  • Causal-comparative research describes conditions that already exist.
  • Cause-effect relationships: the research suggests that the observed differences have identifiable causes.
  • Comparison is the bottom line of this research.
  • Characterized by two (or more) groups and one independent variable.
  • Involves comparing two or more groups on a single dependent variable.
  • A reversed study: the researcher observes the effect first, then looks for the cause.
  • The two groups apparently look alike until the analysis indicates otherwise.

1). Cost-effective: since this research only tests the possibility of a cause-effect relationship, it helps the researcher minimize the resources allocated to experimental research.

2). Causal-comparative research helps establish whether a cause-effect relationship is plausible before an experimental study, which may require far more resources, is undertaken.

3). Useful in circumstances where experimental research would force the researcher to break ethical, safety, or legal rules related to human rights.

4). Helps in assessing the impact of changes to existing customs, procedures, etc.

5). No researcher bias, since the variables are not manipulated.

  • Time-consuming: causal-comparative research involves comparing two or more groups over a long period before conclusions can be drawn.
  • Extraneous variables may produce spurious relationships; an apparent cause-effect relationship may not be what it seems.
  • The researcher has no control over the variables. The so-called self-selected independent variable usually occurs before the researcher acts; for example, a respondent may already be a smoker, having already chosen to smoke.
  • Causes and effects may be reversed: the suspected cause-effect relationship may run the opposite way, such that the comparison group shows a larger difference than the group of interest. For example, in a study of smoking and cancer, the non-smokers may show more cancer than the smokers, contrary to the investigators' expectations.
