
Qualitative vs. Quantitative Research | Differences, Examples & Methods

Published on April 12, 2019 by Raimo Streefkerk. Revised on June 22, 2023.

When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.

Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.

Quantitative research is at risk of research biases including information bias, omitted variable bias, sampling bias, and selection bias.

Qualitative research

Qualitative research is expressed in words. It is used to understand concepts, thoughts, or experiences. This type of research enables you to gather in-depth insights into topics that are not well understood.

Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.

Table of contents

  • The differences between quantitative and qualitative research
  • Data collection methods
  • When to use qualitative vs. quantitative research
  • How to analyze qualitative and quantitative data
  • Other interesting articles
  • Frequently asked questions about qualitative and quantitative research

Quantitative and qualitative research use different research methods to collect and analyze data, and they allow you to answer different kinds of research questions.

Qualitative vs. quantitative research

Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).

Many data collection methods can be either qualitative or quantitative. For example, in surveys, observational studies or case studies , your data can be represented as numbers (e.g., using rating scales or counting frequencies) or as words (e.g., with open-ended questions or descriptions of what you observe).

However, some methods are more commonly used in one type or the other.

Quantitative data collection methods

  • Surveys : A list of closed-ended or multiple-choice questions distributed to a sample (online, in person, or over the phone).
  • Experiments : A situation in which variables are controlled and manipulated to establish cause-and-effect relationships.
  • Observations : Observing subjects in a natural environment where variables can’t be controlled.

Qualitative data collection methods

  • Interviews : Asking open-ended questions verbally to respondents.
  • Focus groups : Discussion among a group of people about a topic to gather opinions that can be used for further research.
  • Ethnography : Participating in a community or organization for an extended period of time to closely observe culture and behavior.
  • Literature review : Survey of published works by other authors.

A rule of thumb for deciding whether to use qualitative or quantitative data is:

  • Use quantitative research if you want to confirm or test something (a theory or hypothesis )
  • Use qualitative research if you want to understand something (concepts, thoughts, experiences)

For most research topics you can choose a qualitative, quantitative or mixed methods approach . Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach ; your research question(s) ; whether you’re doing experimental , correlational , or descriptive research ; and practical considerations such as time, money, availability of data, and access to respondents.

Quantitative research approach

You survey 300 students at your university and ask them questions such as: “On a scale from 1 to 5, how satisfied are you with your professors?”

You can perform statistical analysis on the data and draw conclusions such as: “on average students rated their professors 4.4”.

Qualitative research approach

You conduct in-depth interviews with 15 students and ask them open-ended questions such as: “How satisfied are you with your studies?”, “What is the most positive aspect of your study program?” and “What can be done to improve the study program?”

Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.

Mixed methods approach

You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.

It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.

Qualitative or quantitative data by itself can’t prove or demonstrate anything; it has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.

Analyzing quantitative data

Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.

Applications such as Excel, SPSS, or R can be used to calculate things like:

  • Average scores ( means )
  • The number of times a particular answer was given
  • The correlation or causation between two or more variables
  • The reliability and validity of the results

Analyzing qualitative data

Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.

Some common approaches to analyzing qualitative data include:

  • Qualitative content analysis : Tracking the occurrence, position and meaning of words or phrases
  • Thematic analysis : Closely examining the data to identify the main themes and patterns
  • Discourse analysis : Studying how communication works in social contexts
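As a toy illustration of the word-tracking step in qualitative content analysis (the interview excerpts below are invented):

```python
import re
from collections import Counter

# Hypothetical interview excerpts
responses = [
    "The professors are supportive, but the workload is heavy.",
    "Workload is manageable; the professors give useful feedback.",
    "More feedback on assignments would help with the workload.",
]

# Tokenize all responses and count word occurrences
words = [w for text in responses for w in re.findall(r"[a-z]+", text.lower())]
counts = Counter(words)

print(counts.most_common(3))  # e.g., 'workload' recurs across respondents
```

Real content analysis would also track where a word occurs and in what context, not just how often.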

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
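As a toy illustration of steps 3 to 5 above (all codes, keywords, and transcript lines are hypothetical), a coding pass can be sketched as keyword matching plus a tally:

```python
from collections import Counter

# Hypothetical codebook: code -> keywords that signal it
codebook = {
    "support": ["help", "supportive", "mentor"],
    "workload": ["workload", "deadline", "busy"],
}

# Hypothetical transcript lines
transcript = [
    "My mentor was very supportive during the first year.",
    "The workload around deadlines was overwhelming.",
    "I was too busy to attend optional seminars.",
]

theme_counts = Counter()
for line in transcript:
    for code, keywords in codebook.items():
        if any(kw in line.lower() for kw in keywords):
            theme_counts[code] += 1  # count each code at most once per line

print(theme_counts.most_common())  # recurring themes first
```

In practice coding is an interpretive act done by researchers (often with software support), not a mechanical keyword match; this sketch only shows the bookkeeping.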

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article


Streefkerk, R. (2023, June 22). Qualitative vs. Quantitative Research | Differences, Examples & Methods. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/methodology/qualitative-quantitative-research/



Qualitative comparative analysis

Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g. aspects of an intervention and the wider context) to an outcome of interest.

QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome. These are then subject to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence.

The results are typically expressed as statements in ordinary language or as Boolean algebra. For example:

  • A combination of condition A and condition B, or a combination of condition C and condition D, will lead to outcome E.
  • In Boolean notation this is expressed more succinctly as A*B + C*D → E
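As a sketch, the Boolean statement above can be written as a predicate over crisp (0/1) conditions; the cases below are hypothetical:

```python
# A*B + C*D -> E: the outcome E occurs if (A AND B) or (C AND D).
def outcome_e(a, b, c, d):
    return bool((a and b) or (c and d))

cases = [
    {"A": 1, "B": 1, "C": 0, "D": 0},  # first configuration -> E
    {"A": 0, "B": 1, "C": 1, "D": 1},  # second configuration -> E
    {"A": 1, "B": 0, "C": 0, "D": 1},  # neither configuration -> no E
]
for case in cases:
    print(case, "->", outcome_e(case["A"], case["B"], case["C"], case["D"]))
```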

QCA results are able to distinguish various complex forms of causation, including:

  • Configurations of causal conditions, not just single causes. In the example above, there are two different causal configurations, each made up of two conditions.
  • Equifinality, where there is more than one way in which an outcome can happen. In the example above, each configuration represents a different causal pathway.
  • Causal conditions which are necessary, sufficient, both, or neither, plus more complex combinations (known as INUS causes: insufficient but necessary parts of a configuration that is unnecessary but sufficient), which tend to be more common in everyday life. In the example above, no single condition is sufficient or necessary on its own, but each condition is an INUS-type cause.
  • Asymmetric causes, where the causes of failure may not simply be the absence of the causes of success. In the example above, the configuration associated with the absence of E might have been A*B*X + C*D*X → e, where condition X is a sufficient and necessary blocking condition.
  • The relative influence of different individual conditions and causal configurations in the set of cases being examined. In the example above, the first configuration may have been associated with 10 cases where the outcome was E, whereas the second might have been associated with only 5. Configurations can be evaluated in terms of coverage (the percentage of cases they explain) and consistency (the extent to which a configuration is always associated with a given outcome).
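Coverage and consistency can be computed from simple case counts; a sketch with invented numbers:

```python
# Hypothetical counts for one configuration (say A*B)
config_cases = 12               # cases exhibiting the configuration
config_cases_with_outcome = 10  # of those, cases where outcome E occurred
total_outcome_cases = 15        # all cases with outcome E

consistency = config_cases_with_outcome / config_cases       # how reliably A*B -> E
coverage = config_cases_with_outcome / total_outcome_cases   # share of E cases explained

print(f"consistency: {consistency:.2f}, coverage: {coverage:.2f}")
```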

QCA is able to use relatively small and simple data sets. There is no requirement to have enough cases to achieve statistical significance, although ideally there should be enough cases to potentially exhibit all the possible configurations; how many that is depends on the number of conditions. In a 2012 survey of QCA applications, the median number of cases was 22 and the median number of conditions was 6.

For each case, the presence or absence of a condition is recorded as nominal data, i.e., 1 or 0. More sophisticated forms of QCA allow the use of “fuzzy sets”, where a condition may be partly present or partly absent (represented by a value such as 0.8 or 0.2), or where there is more than one kind of presence (represented by values of 0, 1, 2, or more). Data for a QCA analysis are collated in a simple matrix, where rows = cases and columns = conditions, with the rightmost column listing the associated outcome for each case, also described in binary form.
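The matrix layout described above can be sketched as a small table (the condition names, cases, and values below are hypothetical):

```python
import csv
import io

# Rows = cases, columns = binary conditions, last column = outcome.
# Fuzzy-set variants would allow values such as 0.8 instead of 0/1.
raw = """case,A,B,C,D,outcome
country1,1,1,0,0,1
country2,0,0,1,1,1
country3,1,0,0,1,0
"""

rows = list(csv.DictReader(io.StringIO(raw)))
for row in rows:
    conditions = {k: row[k] for k in "ABCD"}
    print(row["case"], conditions, "->", row["outcome"])
```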

QCA is a theory-driven approach, in that the choice of conditions being examined needs to be driven by a prior theory about what matters. The list of conditions may also be revised in the light of the results of the QCA analysis if some configurations are still shown as being associated with a mixture of outcomes. The coding of the presence/absence of a condition also requires an explicit view of that condition and when and where it can be considered present. Dichotomisation of quantitative measures about the incidence of a condition also needs to be carried out with an explicit rationale, and not on an arbitrary basis.

Although QCA was originally developed by Charles Ragin some decades ago it is only in the last decade that its use has become more common amongst evaluators. Articles on its use have appeared in Evaluation and the American Journal of Evaluation.

For a worked example, see Charles Ragin’s What is Qualitative Comparative Analysis (QCA)?, slides 6 to 15, on the bare-bones basics of crisp-set QCA.

[A crude summary of the example is presented here]

In his presentation Ragin provides data on 65 countries and their reactions to austerity measures imposed by the IMF. This has been condensed into a Truth Table (shown below), which lists all possible configurations of the four conditions thought to affect countries’ responses: the presence or absence of severe austerity, prior mobilisation, corrupt government, and rapid price rises. Next to each configuration is the outcome associated with it: the number of countries experiencing mass protest or not. There are 16 configurations in all, one per row. The rightmost column describes the consistency of each configuration: whether all cases with that configuration have one type of outcome, or a mixed outcome (i.e., some protests and some no protests). Notice that there are also some configurations with no known cases.


Ragin’s next step is to improve the consistency of the configurations with mixed outcomes. This is done either by rejecting cases within an inconsistent configuration as outliers (with exceptional circumstances unlikely to be repeated elsewhere) or by introducing an additional condition (column) that distinguishes the configurations which led to protest from those which did not. In this example, a new condition, described as “not having a repressive regime”, removed the inconsistency.

The next step, known as minimisation, reduces the number of configurations needed to explain all the outcomes. Because this is a time-consuming process, it is done by an automated algorithm. The algorithm takes two configurations at a time and examines whether they have the same outcome. If so, and if the configurations differ on only one condition, that condition is deemed not to be an important causal factor and the two configurations are collapsed into one. This process of pairwise comparison continues, including newly collapsed configurations, until no further reductions are possible.
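The pairwise collapsing described above can be sketched as a toy minimisation routine. Configurations are strings of 0/1 values, one character per condition, all assumed to share the same outcome; '-' marks a condition dropped as unimportant. This is a simplification of real QCA software, which also handles consistency, coverage, and logical remainders:

```python
def differs_by_one(c1, c2):
    """Return the position where two configurations differ, if exactly one."""
    diffs = [i for i, (a, b) in enumerate(zip(c1, c2)) if a != b]
    return diffs[0] if len(diffs) == 1 else None

def minimise(configs):
    """Repeatedly collapse pairs differing on one condition until stable."""
    configs = set(configs)
    while True:
        merged, used = set(), set()
        items = sorted(configs)
        for i, c1 in enumerate(items):
            for c2 in items[i + 1:]:
                pos = differs_by_one(c1, c2)
                if pos is not None:
                    merged.add(c1[:pos] + "-" + c1[pos + 1:])  # drop condition
                    used.update({c1, c2})
        if not merged:
            return configs
        configs = (configs - used) | merged

# Three hypothetical configurations that all produced the outcome:
print(minimise({"110", "111", "011"}))  # collapses to {'11-', '-11'}
```

Here "110" and "111" collapse to "11-" (the third condition does not matter), and "111" and "011" collapse to "-11".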

[Jumping a few more specific steps] The final result from the minimisation of the above truth table is this configuration:

SA*(PR + PM*GC*NR)

The expression indicates that IMF protest erupts when severe austerity (SA) is combined with either (1) rapid price increases (PR) or (2) the combination of prior mobilization (PM), government corruption (GC), and non-repressive regime (NR).
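As a sketch, Ragin’s minimised solution can be written directly as a predicate; the condition values in the example calls are hypothetical:

```python
# SA*(PR + PM*GC*NR): protest erupts when severe austerity (SA) is combined
# with either rapid price increases (PR) or the combination of prior
# mobilization (PM), government corruption (GC), and non-repressive regime (NR).
def imf_protest(sa, pr, pm, gc, nr):
    return bool(sa and (pr or (pm and gc and nr)))

print(imf_protest(sa=1, pr=1, pm=0, gc=0, nr=0))  # True: the SA*PR pathway
print(imf_protest(sa=1, pr=0, pm=1, gc=1, nr=1))  # True: the SA*PM*GC*NR pathway
print(imf_protest(sa=0, pr=1, pm=1, gc=1, nr=1))  # False: SA is necessary
```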

This slide show from Charles C Ragin provides a detailed explanation, including examples, of what QCA is.

This book, by Schneider and Wagemann, provides a comprehensive overview of the basic principles of set theory to model causality and applications of Qualitative Comparative Analysis (QCA), the most developed form of set-theoretic method, for research.

This article by Nicolas Legewie provides an introduction to Qualitative Comparative Analysis (QCA), discussing the method’s main principles, advantages, and key concepts.

COMPASSS (Comparative methods for systematic cross-case analysis) is a website designed to develop the use of systematic comparative case analysis as a research strategy by bringing together scholars and practitioners who share its use.

This paper from Patrick A. Mello focuses on reviewing current applications for use in Qualitative Comparative Analysis (QCA) in order to take stock of what is available and highlight best practice in this area.



© 2022 BetterEvaluation. All rights reserved.

Enago Academy

Qualitative Vs. Quantitative Research — A step-wise guide to conduct research


A research study includes the collection and analysis of data. In quantitative research, the data are analyzed numerically and statistically; in qualitative research, the data are non-numerical and are interpreted to understand the meaning of social reality.

What Is Qualitative Research?

Qualitative research observes and describes a phenomenon to gain a deeper understanding of a subject. It is also used to generate hypotheses for further studies. In general, qualitative research is exploratory: it helps researchers understand how individuals perceive the world, drawing on non-numerical data such as video, photographs, or audio recordings. Qualitative data are collected from sources such as diary accounts or interviews and analyzed using approaches such as grounded theory or thematic analysis.

When to Use Qualitative Research?

Qualitative research is used when the outcome of the research study is to disseminate knowledge and understand concepts, thoughts, and experiences. This type of research focuses on creating ideas and formulating theories or hypotheses .

Benefits of Qualitative Research

  • Unlike quantitative research, which relies on numerical data, qualitative research relies on data collected from interviews, observations, and written texts.
  • It is often used in fields such as sociology and anthropology, where the goal is to understand complex social phenomena.
  • Qualitative research is considered to be more flexible and adaptive, as it is used to study a wide range of social aspects.
  • Additionally, qualitative research often leads to deeper insights into the research study. This helps researchers and scholars in designing their research methods .

Qualitative Research Example

In research, to understand the culture of a pharma company, one could take an ethnographic approach. Working within the company, one could gather data from:

  • Field notes recording observations of, and reflections on, the company’s culture
  • Open-ended surveys emailed to employees across all the company’s departments to find out variations in culture across teams and departments
  • Interview sessions with employees to gather information about their experiences and perspectives

What Is Quantitative Research?

Quantitative research is used to test hypotheses and measure relationships between variables. It follows a process of objectively collecting data and analyzing them numerically with respect to the variables of interest. This type of research aims to test causal relationships between variables and provide generalizable results, which determine whether the theory proposed for the research study should be accepted or rejected.

When to Use Quantitative Research?

Quantitative research is used when a research study needs to confirm or test a theory or a hypothesis. When a research study is focused on measuring and quantifying data, using a quantitative approach is appropriate. It is often used in fields such as economics, marketing, or biology, where researchers are interested in studying trends and relationships between variables .

Benefits of Quantitative Research

  • Quantitative data are interpreted with statistical analysis. Statistical methods are based on the principles of mathematics and provide a fast, focused, and scientific approach.
  • Quantitative research makes it possible to replicate a test and its results. This makes the data more reliable and less open to argument.
  • After the quantitative data are collected, the expected results determine which statistical tests are applicable, and the results provide a quantifiable conclusion for the research hypothesis.
  • Research with complex statistical analysis is considered valuable and impressive, and quantitative research is associated with technical advancements such as computer modeling and data-driven decisions.

Quantitative Research Example

An organization wishes to conduct a customer satisfaction (CSAT) survey by using a survey template. From the survey, the organization can acquire quantitative data and metrics on the brand or the organization based on the customer’s experience. Various parameters such as product quality, pricing, and customer experience could be used to generate data in the form of numbers that are then statistically analyzed.


Data Collection Methods

1. Qualitative Data Collection Methods

Qualitative data is collected from interview sessions, discussions with focus groups, case studies, and ethnography (scientific description of people and cultures with their customs and habits). The collection methods involve understanding and interpreting social interactions.

Qualitative data also include respondents’ opinions and feelings, often gathered face-to-face in focus groups. Respondents are asked open-ended questions related to the research topic, either individually or through discussion among a group of people, to collect opinions for further research.

2. Quantitative Data Collection Methods

Quantitative research data is acquired from surveys, experiments, observations, probability sampling, questionnaires, and content review. Surveys usually contain a list of questions with multiple-choice responses relevant to the research topic under study. With the availability of online survey tools, researchers can conduct a web-based survey for quantitative research.

Quantitative data is also assimilated from research experiments. While conducting experiments, researchers focus on exploring one or more independent variables and studying their effect on one or more dependent variables.

A Step-wise Guide to Conduct Qualitative and Quantitative Research

  • Understand the difference between the types of research: qualitative, quantitative, or mixed methods.
  • Develop a research question or hypothesis. This will help determine which type of research to choose.
  • Choose a method for data collection. The data collection process shapes which type of research is feasible.
  • Analyze and interpret the collected data, and report the results.
  • If the observed results differ from the expected results, check the study design for bias, or consider combining qualitative and quantitative methods.

Qualitative Vs. Quantitative Research – A Comparison

With an awareness of qualitative vs. quantitative research and the different data collection methods, researchers can use one or both types of research approach depending on the kind of results they need. Moreover, to implement unbiased research and acquire meaningful insights from the research study, it is advisable to consider both qualitative and quantitative research methods.

This article has compared qualitative and quantitative research. If you have any queries about qualitative vs. quantitative research, do comment below or email us.



Transl Behav Med. 2014 Jun; 4(2).

Using qualitative comparative analysis to understand and quantify translation and implementation

Heather Kane

RTI International, 3040 Cornwallis Road, Research Triangle Park, P.O. Box 12194, Durham, NC 27709 USA

Megan A Lewis

Pamela A Williams, Leila C Kahwati

Understanding the factors that facilitate implementation of behavioral medicine programs into practice can advance translational science. Often, translation or implementation studies use case study methods with small sample sizes. Methodological approaches that systematize findings from these types of studies are needed to improve rigor and advance the field. Qualitative comparative analysis (QCA) is a method and analytical approach that can advance implementation science. QCA offers an approach for rigorously conducting translational and implementation research limited by a small number of cases. We describe the methodological and analytic approach for using QCA and provide examples of its use in the health and health services literature. QCA brings together qualitative or quantitative data derived from cases to identify necessary and sufficient conditions for an outcome. QCA offers advantages for researchers interested in analyzing complex programs and for practitioners interested in developing programs that achieve successful health outcomes.

INTRODUCTION

In this paper, we describe the methodological features and advantages of using qualitative comparative analysis (QCA). QCA is sometimes called a “mixed method.” It refers to both a specific research approach and an analytic technique that is distinct from and offers several advantages over traditional qualitative and quantitative methods [ 1 – 4 ]. It can be used to (1) analyze small to medium numbers of cases (e.g., 10 to 50) when traditional statistical methods are not possible, (2) examine complex combinations of explanatory factors associated with translation or implementation “success,” and (3) combine qualitative and quantitative data using a unified and systematic analytic approach.

This method may be especially pertinent for behavioral medicine given the growing interest in implementation science [ 5 ]. Translating behavioral medicine research and interventions into useful practice and policy requires an understanding of the implementation context. Understanding the context under which interventions work and how different ways of implementing an intervention lead to successful outcomes are required for “T3” (i.e., dissemination and implementation of evidence-based interventions) and “T4” translations (i.e., policy development to encourage evidence-based intervention use among various stakeholders) [ 6 , 7 ].

Case studies are a common way to assess different program implementation approaches and to examine complex systems (e.g., health care delivery systems, interventions in community settings) [ 8 ]. However, multiple case studies often have small, naturally limited samples or populations; small samples and populations lack adequate power to support conventional, statistical analyses. Case studies also may use mixed-method approaches, but typically when researchers collect quantitative and qualitative data in tandem, they rarely integrate both types of data systematically in the analysis. QCA offers solutions for the challenges posed by case studies and provides a useful analytic tool for translating research into policy recommendations. Using QCA methods could aid behavioral medicine researchers who seek to translate research from randomized controlled trials into practice settings to understand implementation. In this paper, we describe the conceptual basis of QCA, its application in the health and health services literature, and its features and limitations.

CONCEPTUAL BASIS OF QCA

QCA has its foundations in historical, comparative social science. Researchers in this field developed QCA because probabilistic methods failed to capture the complexity of social phenomena and required large sample sizes [ 1 ]. Recently, this method has made inroads into health research and evaluation [ 9 – 13 ] because of several useful features as follows: (1) it models equifinality , which is the ability to identify more than one causal pathway to an outcome (or absence of the outcome); (2) it identifies conjunctural causation , which means that single conditions may not display their effects on their own, but only in conjunction with other conditions; and (3) it implies asymmetrical relationships between causal conditions and outcomes, which means that causal pathways for achieving the outcome differ from causal pathways for failing to achieve the outcome.

QCA is a case-oriented approach that examines relationships between conditions (similar to explanatory variables in regression models) and an outcome using set theory, a branch of mathematics and symbolic logic that deals with the nature and relations of sets. A set-theoretic approach to modeling causality differs from probabilistic methods, which examine the independent, additive influence of variables on an outcome. Regression models, based on underlying assumptions about the sampling and distribution of the data, ask “what factor, holding all other factors constant at each factor’s average, will increase (or decrease) the likelihood of an outcome .” QCA, an approach based on the examination of set, subset, and superset relationships, asks “ what conditions —alone or in combination with other conditions—are necessary or sufficient to produce an outcome .” For additional QCA definitions, see Ragin [ 4 ].

Necessary conditions are those that exhibit a superset relationship with the outcome set and are conditions or combinations of conditions that must be present for an outcome to occur. In assessing necessity, a researcher “identifies conditions shared by cases with the same outcome” [ 4 ] (p. 20). Figure  1 shows a hypothetical example. In this figure, condition X is a necessary condition for an effective intervention because all cases with condition X are also members of the set of cases with the outcome present; however, condition X is not sufficient for an effective intervention because it is possible to be a member of the set of cases with condition X, but not be a member of the outcome set [ 14 ].

Fig. 1 Necessary and sufficient conditions and set-theoretic relationships

Sufficient conditions exhibit subset relationships with an outcome set and demonstrate that “the cause in question produces the outcome in question” [ 3 ] (p. 92). Figure  1 shows the multiple and different combinations of conditions that produce the hypothetical outcome, “effective intervention,” (1) by having condition A present, (2) by having condition D present, or (3) by having the combination of conditions B and C present. None of these conditions is necessary and any one of these conditions or combinations of conditions is sufficient for the outcome of an effective intervention.
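These set relations can be checked mechanically. Below is a minimal sketch in Python (with hypothetical case labels, not data from any cited study): for crisp sets, necessity corresponds to the condition set being a superset of the outcome set, and sufficiency to it being a subset.

```python
# Hypothetical crisp-set data: each set holds the labels of cases in
# which the condition (or the outcome) is present.
condition_x = {"case1", "case2", "case3", "case4"}
outcome = {"case1", "case2", "case3"}

# Necessity: every case with the outcome also exhibits condition X,
# i.e., condition X is a superset of the outcome set.
is_necessary = condition_x >= outcome

# Sufficiency: every case with condition X also exhibits the outcome,
# i.e., condition X is a subset of the outcome set.
is_sufficient = condition_x <= outcome

print(is_necessary)   # True: condition X is necessary for the outcome
print(is_sufficient)  # False: case4 has condition X but not the outcome
```

This mirrors the situation in Fig. 1: X is necessary but not sufficient, because one case sits inside the condition set but outside the outcome set.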

QCA AS AN APPROACH AND AS AN ANALYTIC TECHNIQUE

The term “QCA” is sometimes used to refer to the comparative research approach but also refers to the “analytic moment” during which Boolean algebra and set-theoretic logic are applied to truth tables constructed from data derived from the included cases. Figure 2 characterizes this distinction. Although the figure depicts the steps as sequential, like many research endeavors, they are somewhat iterative, with respecification and reanalysis occurring along the way to final findings. We describe each of the essential steps of QCA as an approach and analytic technique and provide examples of how it has been used in health-related research.

Fig. 2 QCA as an approach and as an analytic technique

Operationalizing the research question

Like other types of studies, the first step involves identifying the research question(s) and developing a conceptual model. This step guides the study as a whole and informs case, condition (cf. variable), and outcome selection. As mentioned above, QCA frames research questions differently than traditional quantitative or qualitative methods do. Research questions appropriate for a QCA approach seek to identify the conditions that are necessary and sufficient to achieve the outcome. Thus, formulating a QCA research question emphasizes what program components or features—individually or in combination—need to be in place for a program or intervention to have a chance at being effective (i.e., necessary conditions) and what program components or features—individually or in combination—would produce the outcome (i.e., sufficient conditions). For example, a set-theoretic hypothesis would be as follows: If a program is supported by strong organizational capacity and a comprehensive planning process, then the program will be successful. A hypothesis better addressed by probabilistic methods would be as follows: Organizational capacity, holding all other factors constant, increases the likelihood that a program will be successful.

For example, Longest and Thoits [ 15 ] drew on an extant stress process model to assess whether the pathways leading to psychological distress differed for women and men. Using QCA was appropriate for their study because the stress process model “suggests that particular patterns of predictors experienced in tandem may have unique relationships with health outcomes” (p. 4, italics added). They theorized that predictors would exhibit effects in combination because some aspects of the stress process model would buffer the risk of distress (e.g., social support) while others simultaneously would increase the risk (e.g., negative life events).

Identify cases

The number of cases in a QCA analysis may be determined by the population (e.g., 10 intervention sites, 30 grantees). When particular cases can be chosen from a larger population, Berg-Schlosser and De Meur [ 16 ] offer strategies and best practices for choosing cases. Unless the number of cases is fixed by an existing population (e.g., 30 programs or grantees), case selection is driven by the outcome of interest and existing theory, unlike variable-oriented research [ 3 , 4 ], in which sample sizes are driven by statistical power considerations and depend on variation in the dependent variable. For causal inference, both cases that exhibit the outcome and cases that do not should be included [ 16 ]. If a researcher is interested in developing typologies or concept formation, he or she may wish to examine similar cases that differ on the outcome or to explore cases that exhibit the same outcome [ 14 , 16 ].

For example, Kahwati et al. [ 9 ] examined the structure, policies, and processes that might lead to an effective clinical weight management program in a large national integrated health care system, as measured by mean weight loss among patients treated at the facility. To examine pathways that lead to both better and poorer facility-level weight loss, 11 facilities from among those with the largest weight loss outcomes and 11 facilities from among those with the smallest were included. By choosing cases based on specific outcomes, Kahwati et al. could identify multiple patterns of success (or failure) that explain the outcome rather than the variability associated with the outcome.

Identify conditions and outcome sets

As in other research methods, selecting conditions depends on the research question, the conceptual model, and the number of cases. Conditions (or “sets” or “condition sets”) refer to the explanatory factors in a model; they are similar to variables. Because QCA research questions assess necessary and sufficient conditions, a researcher should consider which conditions in the conceptual model would theoretically produce the outcome individually or in combination. This helps to focus the analysis and limit the number of conditions. Ideally, for a case study design with a small (e.g., 10–15) or intermediate (e.g., 16–100) number of cases, one should aim for fewer than five conditions, because in QCA a researcher assesses all possible configurations of conditions. Each added condition increases the number of possible combinations exponentially (i.e., 2^k, where k is the number of conditions). For three conditions, eight possible combinations of the selected conditions exist: A, B, and C all present; B and C present with A absent; C present with A and B absent; and so forth. Having too many conditions will likely mean that no cases fall into a particular configuration, so that configuration cannot be assessed with empirical examples. When one or more configurations are not represented by the cases, this is known as limited diversity, and QCA experts suggest multiple strategies for managing such situations [ 4 , 14 ].
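The growth in configurations is easy to make concrete. A short illustrative sketch (with hypothetical conditions A, B, and C):

```python
from itertools import product

# The number of logically possible configurations doubles with each
# added condition: 2**k for k conditions.
for k in (3, 4, 5):
    print(k, "conditions ->", 2 ** k, "configurations")

# For three crisp-set conditions (A, B, C), enumerate all eight
# configurations; 1 = condition present, 0 = condition absent.
configurations = list(product((1, 0), repeat=3))
print(len(configurations))  # 8
print(configurations[0])    # (1, 1, 1): A, B, and C all present
print(configurations[-1])   # (0, 0, 0): all three conditions absent
```

With only a handful of cases, most of these configurations will have no empirical representatives, which is exactly the limited-diversity problem described above.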

For example, Ford et al. [ 10 ] studied health departments’ implementation of core public health functions and organizational factors (e.g., resource availability, adaptability) and how those conditions lead to superior or inferior population health changes. They operationalized three core public health functions (i.e., assessment of environmental and population public health needs, capacity for policy development, and authority over assurance of healthcare operations) and operationalized the outcome using composite measures of varied health indicators compiled in a UnitedHealth Group report. In this examination of 41 state health departments, the authors found that all three core public health functions were necessary for population health improvement. The absence of any of the core public health functions was sufficient for poorer population health outcomes; thus, only the health departments with the ability to perform all three core functions had improved outcomes. Additionally, these three core functions in combination with either resource availability or adaptability were sufficient combinations (i.e., causal pathways) for improved population health outcomes.

Calibrate condition and outcome sets

Calibration refers to “adjusting (measures) so that they match or conform to dependably known standards” and is a common way of standardizing data in the physical sciences [ 4 ] (p. 72). Calibration requires the researcher to make sense of variation in the data and apply expert knowledge about what aspects of the variation are meaningful. Because calibration depends on defining conditions based on those “dependably known standards,” QCA relies on expert substantive knowledge, theory, or criteria external to the data themselves [ 14 ]. This may require researchers to collaborate closely with program implementers.

In QCA, one can use “crisp” set or “fuzzy” set calibration. Crisp sets, which are similar to dichotomous categorical variables in regression, establish decision rules defining a case as fully in the set (i.e., condition) or fully out of the set; fuzzy sets establish degrees of membership in a set. Fuzzy sets “differentiate between different levels of belonging anchored by two extreme membership scores at 1 and 0” [ 14 ] (p. 28). They can be continuous (0, 0.1, 0.2, ...) or have qualitatively defined anchor points (e.g., 0 is fully out of the set; 0.33 is more out than in the set; 0.66 is more in than out of the set; 1 is fully in the set). A researcher selects fuzzy sets and the corresponding resolution (i.e., continuous, four cutoff points, six cutoff points) based on theory and meaningful differences between cases and must be able to provide a verbal description for each cutoff point [ 14 ]. If, for example, a researcher cannot distinguish between 0.7 and 0.8 membership in a set, then a continuous scoring of cases would not be useful; rather, a four-point cutoff scheme may better characterize the data. Although crisp and fuzzy sets are the most commonly used, multivalue forms of QCA are emerging, as are variants that incorporate elements of time [ 14 , 17 , 18 ].

Fuzzy sets have the advantage of maintaining more detail for data with continuous values. However, this strength also makes interpretation more difficult. When an observation is coded with fuzzy sets, a particular observation has some degree of membership in the set “condition A” and in the set “condition NOT A.” Thus, when conducting analyses to identify sufficient conditions, a researcher must make a judgment call about what membership benchmark constitutes the threshold for recommending policy or programmatic action.

In creating decision rules for calibration, a researcher can use a variety of techniques to identify cutoff points or anchors. For qualitative conditions, a researcher can define decision rules by drawing from the literature and knowledge of the intervention context. For conditions with numeric values, a researcher can also employ statistical approaches. Ideally, when using statistical approaches, a researcher should establish thresholds using substantive knowledge about set membership (thus, translating variation into meaningful categories). Although measures of central tendency (e.g., cases with a value above the median are considered fully in the set) can be used to set cutoff points, some experts consider the sole use of this method to be flawed because case classification is determined by a case’s relative value in regard to other cases as opposed to its absolute value in reference to an external referent [ 14 ].

For example, in their study of the National Cancer Institute’s Community Clinical Oncology Program (NCI CCOP), Weiner et al. [ 19 ] had numeric data on their five study measures. They transformed the measures by using their knowledge of the CCOP and by asking NCI officials to identify three values: full membership in a set, the point of maximum ambiguity, and nonmembership in the set. For their outcome set, high accrual in clinical trials, they established an accrual of 100 patients enrolled as fully in the set of high accrual, 70 as the point of maximum ambiguity (neither in nor out of the set), and 50 and below as fully out of the set because “CCOPs must maintain a minimum of 50 patients to maintain CCOP funding” (p. 288). By using QCA and operationalizing condition sets in this way, they were able to ask what condition sets produce high accrual, not what factors predict more accrual. The advantage is that this approach and analytic technique identified sets of factors linked with a very specific outcome of interest.
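The transformation described above follows the general logic of the direct method of calibration, in which three substantive anchors are mapped onto log-odds of set membership. The sketch below is an illustrative implementation under that assumption, using the accrual anchors from the example (50 fully out, 70 the point of maximum ambiguity, 100 fully in); the function name and the linear log-odds interpolation are ours, not taken from the cited study.

```python
import math

def calibrate(value, full_out, crossover, full_in):
    """Fuzzy-set membership via a direct (log-odds) calibration:
    the crossover anchor maps to log-odds 0, full membership to +3,
    and full non-membership to -3, interpolating linearly in between."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_in - crossover)
    else:
        log_odds = -3.0 * (crossover - value) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Anchors from the accrual example: 50 fully out, 70 crossover, 100 fully in.
print(round(calibrate(100, 50, 70, 100), 2))  # 0.95: fully in the set
print(round(calibrate(70, 50, 70, 100), 2))   # 0.5: maximum ambiguity
print(round(calibrate(50, 50, 70, 100), 2))   # 0.05: fully out of the set
```

A CCOP enrolling 85 patients would receive a membership score above 0.5 (more in than out of the high-accrual set), which matches the verbal description the anchors encode.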

Obtain primary or secondary data

Data sources vary based on the study, the availability of the data, and the feasibility of data collection. Data can be qualitative or quantitative, a feature useful for mixed-methods studies; systematically integrating these different types of data is a major strength of this approach. Qualitative data include program documents and descriptions, key informant interviews, and archival data (e.g., records, policies); quantitative data include surveys, surveillance or registry data, and electronic health records.

For instance, Schensul et al. [ 20 ] relied on in-depth interviews for their analysis; Chuang et al. [ 21 ] and Longest and Thoits [ 15 ] drew on survey data for theirs. Kahwati et al. [ 9 ] used a mixed-method approach combining data from key informant interviews, program documents, and electronic health records. Any type of data can be used to inform the calibration of conditions.

Assign set membership scores

Assigning set membership scores involves applying the decision rules established during the calibration phase. To accomplish this, the research team uses the extracted data for each case, applies the decision rule for each condition, and discusses discrepancies in the data sources. In their study of factors that influence health care policy development in Florida, Harkreader and Imershein [ 22 ] coded contextual factors that supported state involvement in the health care market. Drawing on a review of archival data and using crisp set coding, they assigned a value of 1 for the presence of a contextual factor (e.g., presence of federal financial incentives promoting policy, unified health care provider policy position in opposition to state policy, state agency supporting policy position) and 0 for the absence of a contextual factor.

Construct truth table

After completing the coding, researchers create a “truth table” for analysis. A truth table lists all of the possible configurations of conditions, the number of cases that fall into that configuration, and the “consistency” of the cases. Consistency quantifies the extent to which cases that share similar conditions exhibit the same outcome; in crisp sets, the consistency value is the proportion of cases that exhibit the outcome. Fuzzy sets require a different calculation to establish consistency and are described at length in other sources [ 1 – 4 , 14 ]. Table  1 displays a hypothetical truth table for three conditions using crisp sets.

Table 1 Sample of a hypothetical truth table for crisp sets (1 = fully in the set; 0 = fully out of the set)
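A truth table of this kind can be constructed programmatically. A minimal sketch with hypothetical crisp-set data (the cases and codings are invented for illustration, not taken from any cited study):

```python
from collections import defaultdict

# Hypothetical cases coded 1/0 on three conditions (A, B, C) and the outcome.
cases = [
    {"A": 1, "B": 1, "C": 0, "outcome": 1},
    {"A": 1, "B": 1, "C": 0, "outcome": 1},
    {"A": 0, "B": 1, "C": 1, "outcome": 1},
    {"A": 0, "B": 1, "C": 1, "outcome": 0},  # contradicts the case above
    {"A": 0, "B": 0, "C": 0, "outcome": 0},
]

# Group cases by configuration; for crisp sets, consistency is the
# proportion of cases in a configuration that exhibit the outcome.
rows = defaultdict(list)
for case in cases:
    rows[(case["A"], case["B"], case["C"])].append(case["outcome"])

for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(config, "n =", len(outcomes), "consistency =", consistency)
```

Here only three of the eight possible configurations have empirical cases; the remaining five rows are logical remainders, and the row with consistency 0.5 illustrates the contradictory-outcome situation discussed in the analysis section.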

QCA AS AN ANALYTIC TECHNIQUE

The research steps to this point constitute QCA as an approach to understanding social and health phenomena. Analysis of the truth table is the sine qua non of QCA as an analytic technique. In this section, we provide an overview of the analysis process; analytic techniques and emerging forms of analysis are described in multiple texts [ 3 , 4 , 14 , 17 ]. The use of computer software to conduct truth table analysis is recommended, and several options are available, including Stata, fsQCA, Tosmana, and R.

A truth table analysis involves, first, assessing which (if any) conditions are individually necessary or sufficient for achieving the outcome and, second, examining whether any configurations of conditions are necessary or sufficient. Where cases from the same configuration exhibit contradictory outcomes (i.e., one case from a configuration has the outcome; another does not), the researcher should consider whether the model is properly specified and whether conditions are calibrated accurately. This stage of the analysis may therefore reveal the need to review how conditions are defined and whether they should be recalibrated. As with other qualitative and quantitative research approaches, analysis is iterative.

Additionally, the researcher examines the truth table to assess whether all logically possible configurations have empiric cases. As described above, when configurations lack cases, the problem of limited diversity occurs. Configurations without representative cases are known as logical remainders, and the researcher must consider how to deal with them. The treatment of logical remainders depends on the theory guiding the research and the research priorities. How a researcher manages the logical remainders has implications for the final solution, but none of the solutions based on the truth table will contradict the empirical evidence [ 14 ]. To generate the most conservative solution term, a researcher makes no assumptions about truth table rows with no cases (or very few cases in larger-N studies) and excludes them from the logical minimization process. Alternatively, a researcher can include rows with no cases in the analysis, which generates a solution that is a superset of the conservative solution. Choosing inclusion criteria for logical remainders also depends on theory and on what is empirically possible. For example, in studying governments, it would be unlikely to have a case that is a democracy (“condition A”) but has a dictator (“condition B”). In that circumstance, the researcher may choose to exclude that theoretically implausible row from the logical minimization process.

Third, once all the solutions have been identified, the researcher mathematically reduces them [ 1 , 14 ]. For example, if two solution configurations are identical except that condition A is absent in one and present in the other, A can be dropped from both. Finally, the researcher computes two parameters of fit: coverage and consistency. Coverage gauges the empirical relevance of a solution and quantifies the variation in causal pathways to an outcome [ 14 ]. The higher the coverage of a causal pathway, the more common the solution and the more of the outcome it accounts for. However, maximum coverage may be less critical in implementation research, because understanding all of the pathways to success may be as helpful as understanding the most common pathway. Consistency assesses whether the causal pathway produces the outcome regularly (“the degree to which the empirical data are in line with a postulated subset relation,” p. 324 [ 14 ]); a high consistency value (e.g., 1.00 or 100 %) indicates that all cases in a causal pathway produced the outcome. A low consistency value suggests that a particular pathway did not produce the outcome on a regular basis and thus, for translational purposes, should not be recommended for policy or practice changes. A causal pathway with high consistency and coverage values provides a result useful for guidance; a pathway with high consistency but lower coverage also has value, showing a causal pathway that successfully produced the outcome, but did so less frequently.
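For crisp sets, the two parameters of fit reduce to simple proportions over case sets, and the minimization step can be checked exhaustively. A sketch with hypothetical case labels (illustrative only):

```python
# Hypothetical crisp-set example: cases covered by one causal pathway
# versus cases exhibiting the outcome.
pathway = {"c1", "c2", "c3", "c4"}
outcome = {"c1", "c2", "c3", "c5", "c6"}

overlap = pathway & outcome
consistency = len(overlap) / len(pathway)  # share of pathway cases with the outcome
coverage = len(overlap) / len(outcome)     # share of outcome cases the pathway covers
print("consistency =", consistency)  # 0.75
print("coverage =", coverage)        # 0.6

# Minimization: two solution terms identical except that one contains A
# and the other its negation can be merged, dropping A entirely:
# (A AND B) OR (NOT A AND B) reduces to B for every truth assignment.
def unreduced(a, b):
    return (a and b) or ((not a) and b)

assert all(unreduced(a, b) == b for a in (True, False) for b in (True, False))
```

In this toy example the pathway is fairly consistent (three of its four cases produce the outcome) but covers only three of the five outcome cases, so other pathways must account for the rest.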

For example, Kahwati et al. [ 9 ] examined their truth table and analyzed the data for single conditions and combinations of conditions that were necessary for higher or lower facility-level patient weight loss outcomes. The truth table analysis revealed two necessary conditions and four sufficient combinations of conditions. Because of significant challenges with logical remainders, they used a bottom-up approach to assess whether combinations of conditions yielded the outcome. This entailed pairing conditions to ensure parsimony and maximize coverage. With a smaller number of conditions, a researcher could hypothetically find that more cases share similar characteristics and could assess whether those cases exhibit the same outcome of interest.

At the completion of the truth table analysis, Kahwati et al. [ 9 ] used the qualitative data from site interviews to provide rich examples illustrating the QCA solutions that were identified, which explained what the solutions meant in clinical practice for weight management. For example, having an involved champion (usually a physician), in combination with low facility accountability, was sufficient for program success (i.e., better facility-level weight loss outcomes). In reviewing the qualitative data, Kahwati et al. [ 9 ] discovered that involved champions integrate program activities into their clinical routines and discuss issues as they arise with other program staff. Because involved champions and other program staff communicated informally on a regular basis, formal accountability structures were less of a priority.

ADVANTAGES AND LIMITATIONS OF QCA

Because translational (and other health-related) researchers may be interested in which intervention features—alone or in combination—achieve distinct outcomes (e.g., achievement of program outcomes, reduction in health disparities), QCA is well suited for translational research. To assess combinations of variables in regression, a researcher relies on interaction effects, which, although useful, become difficult to interpret when three, four, or more variables are combined. Furthermore, in regression and other variable-oriented approaches, independent variables are held constant at the average across the study population to isolate the independent effect of that variable, but this masks how factors may interact with each other in ways that impact the ultimate outcomes. In translational research, context matters and QCA treats each case holistically, allowing each case to keep its own values for each condition.

Multiple case studies or studies with the organization as the unit of analysis often involve a small or intermediate number of cases. This hinders the use of standard statistical analyses; researchers are less likely to find statistical significance with small sample sizes. However, QCA draws on analyses of set relations to support small-N studies and to identify the conditions or combinations of conditions that are necessary or sufficient for an outcome of interest and may yield results when probabilistic methods cannot.

Finally, QCA is based on an asymmetric concept of causation , which means that the absence of a sufficient condition associated with an outcome does not necessarily describe the causal pathway to the absence of the outcome [ 14 ]. These characteristics can be helpful for translational researchers who are trying to study or implement complex interventions, where more than one way to implement a program might be effective and where studying both effective and ineffective implementation practices can yield useful information.

QCA has several limitations that researchers should consider before choosing it as a potential methodological approach. With small- and intermediate-N studies, QCA must be theory-driven and circumscribed by priority questions. That is, a researcher ideally should not use a “kitchen sink” approach to test every conceivable condition or combination of conditions because the number of combinations increases exponentially with the addition of another condition. With a small number of cases and too many conditions, the sample would not have enough cases to provide examples of all the possible configurations of conditions (i.e., limited diversity), or the analysis would be constrained to describing the characteristics of the cases, which would have less value than determining whether some conditions or some combination of conditions led to actual program success. However, if the number of conditions cannot be reduced, alternate QCA techniques, such as a bottom-up approach to QCA or two-step QCA, can be used [ 14 ].

Another limitation is that the programs or clinical interventions involved in a cross-site analysis may have unique features that make them seem incomparable. Cases must share some degree of comparability to use QCA [ 16 ]. Researchers can manage this challenge by taking a broader view of the program(s) and comparing them on broader characteristics or concepts, such as high/low organizational capacity, established partnerships, and program planning, if these would provide meaningful conclusions. Taking this approach will require careful definition of each of these concepts within the context of a particular initiative. Definitions may also need to be revised as the data are gathered and calibration begins.

Finally, as mentioned above, crisp set calibration dichotomizes conditions of interest; this form of calibration means that in some cases, finer-grained differences and precision in a condition may be lost [ 3 ]. Nevertheless, crisp set calibration provides more easily interpretable and actionable results and is appropriate if researchers are primarily interested in the presence or absence of a particular program feature or organizational characteristic to understand translation or implementation.

QCA offers an additional methodological approach for researchers to conduct rigorous comparative analyses while drawing on the rich, detailed data collected as part of a case study. However, as Rihoux and Ragin [ 17 ] note, QCA is not a miracle method, nor a panacea for all studies that use case study methods, and it may not always be the most suitable approach for certain types of translational and implementation research. We have outlined the multiple steps needed to conduct a comprehensive QCA. QCA's capacity to examine causal complexity and equifinality could be helpful to behavioral medicine researchers who seek to translate evidence-based interventions into real-world settings. In reality, multiple program models can lead to success, and this method accommodates a more complex and varied understanding of these patterns and factors.

Implications

Practice : Identifying multiple successful intervention models (equifinality) can aid in selecting a practice model relevant to a context, and can facilitate implementation.

Policy : QCA can be used to develop actionable policy information for decision makers that accommodates contextual factors.

Research : Researchers can use QCA to understand causal complexity in translational or implementation research and to assess the relationships between policies, interventions, or procedures and successful outcomes.


Qualitative Comparative Analysis in Mixed Methods Research and Evaluation


  • Leila C. Kahwati - RTI International
  • Heather L. Kane - RTI International
  • Description

Qualitative Comparative Analysis in Mixed Methods Research and Evaluation provides a user-friendly introduction for using Qualitative Comparative Analysis (QCA) as part of a mixed methods approach to research and evaluation. Offering practical, in-depth, and applied guidance for this unique analytic technique that is not provided in any current mixed methods textbook, the chapters of this guide skillfully build upon one another to walk researchers through the steps of QCA in logical order. To enhance and further reinforce learning, authors Leila C. Kahwati and Heather L. Kane provide supportive learning objectives, summaries, and exercises, as well as author-created datasets for use in R via the companion site.   Qualitative Comparative Analysis in Mixed Methods Research and Evaluation is Volume 6 in SAGE’s Mixed Methods Research Series. To learn more about each text in the series, please visit sagepub.com/mmrs .



“This book is written in a way that is easy to follow and should expand the range of fields in which QCA is used. Also, there are quite a few principles and practice tips articulated, especially in later chapters, which are applicable more broadly across social sciences and evaluation work. Novice researchers will find those suggestions especially helpful, even if QCA does not become a major tool in their practice.”

“The practical, how-to, nature of the text is very appealing to me as an instructor. I like the examples and appreciate the numerous figures used to illustrate processes and arguments for visual learners.”

“The text introduces an important, specific approach to research.”

“I think the key strengths of this text are its organization and breadth. From an organization perspective, the wealth of resources and focus is essential for guiding the reader/learner toward practical keywords, i.e. language, and skills necessary to implement.”

“This is a very good resource for students and teaching.”

  • Use of a concrete example that is woven across multiple chapters provides a thread of continuity that allows readers to follow the step-by-step process for understanding the method. 
  • A guiding heuristic helps orient the reader at the beginning of each chapter to understand where they are in the process of conducting an analysis.
  • Analytic checklists easily summarize the analytic process described in the chapter and serve as a reference.
  • Practice exercises provide essential practice and reinforce key concepts.
  • Helpful summaries and key points succinctly summarize main points of each chapter.

Sample Materials & Chapters

Chapter 1: Qualitative Comparative Analysis as Part of a Mixed Methods Approach

Chapter 5: Analyzing the Data -- Initial Analyses


Quantitative Approaches to Comparative Analyses: Data Properties and Their Implications for Theory, Measurement and Modelling

  • Introduction
  • Open access
  • Published: 06 November 2015
  • Volume 14 , pages 385–393, ( 2015 )


  • Robert Neumann 1 &
  • Peter Graeff 2


While there is an abundant use of macro data in the social sciences, little attention is given to the sources or the construction of these data. Owing to the restricted amount of indices or items, researchers most often apply the ‘available data at hand’. Since the opportunities to analyse data are constantly increasing and the availability of macro indicators is improving as well, one may be enticed to incorporate even qualitatively inferior indicators for the sake of statistically significant results. The pitfalls of applying biased indicators or using instruments with unknown methodological characteristics are biased estimates, false statistical inferences and, as one potential consequence, the derivation of misleading policy recommendations. This Special Issue assembles contributions that attempt to stimulate the missing debate about the criteria of assessing aggregate data and their measurement properties for comparative analyses.


INTRODUCTION

The social sciences are witnessing an ever-increasing supply of data at the aggregate level on several key dimensions of societal progress and politico-institutional conditions. Next to standardised sources for comparing countries worldwide ( Solt, 2014 ), a wealth of indicators has been introduced over the past three decades to allow for comparative analyses of issues such as levels of perceived corruption, quality of governance, environmental sustainability, political rights and democratic freedom. And while these macro data are used abundantly, less attention has been given to their sources or construction. Despite the spike in data availability, information on countries or regions often remains restricted to a handful of indicators compiled by organisations that have the resources and know-how to offer worldwide coverage of countries. Due to this restricted set of indices or items, researchers for the most part apply the ‘available data at hand’ with little consideration of their measurement properties.

There have already been attempts to address questions of data quality within the community of comparative political science. Herrera and Kapur (2007) try to foster the debate about the quality of comparative data sets by highlighting the three components of validity, coverage and accuracy. Mudde and Schedler (2010) discuss the challenges of data choice, distinguishing between procedural and outcome-oriented criteria for assessing data quality. They relate the procedural criterion to aspects of transparency, reliability and replicability of data; the outcome-oriented criterion is connected to validity, accuracy and precision ( Mudde and Schedler, 2010 : 411). Both groups of authors agree that research on data properties usually offers few scientific rewards, but that the debate about the measures is crucial and requires constant stimulation.

A few landmark books and articles have laid out fundamental guidelines and approaches concerning case selection, operationalisation and the implications for comparative model testing at the macro level (see for instance King et al, 1994 ; Adcock and Collier, 2001 ; Gerring, 2001 ). Yet the discussion within comparative research about the measurement properties of different indicators appears to lag behind the ongoing application of numerous indices in all sorts of comparative empirical research. That is, theoretical and empirical work with new and improved measurements has so far passed up the opportunity to foster an exchange about the conceptual framework for comparative multivariate modelling. Furthermore, it often remains difficult to grasp the core intentions of different streams of knowledge production, especially when new cross-country indices were computed in response to prior criticism of existing measures.

DATA PROPERTIES AND THEIR TRADE-OFF

Judging data properties from a qualitative and quantitative perspective, King et al (1994 : 63, 97) propose the criteria of unbiasedness, efficiency and consistency. In particular, they concentrate on the inferential performance of measures. Here, bias refers to a measure's tendency to introduce systematic variance into the measurement, which in turn leads to non-random variation between different or repeated applications of the measure in inferential tasks. For example, Hawken and Munck (2011 : 4) report that ratings of perceived corruption made by commercial risk assessment agencies systematically rate economies as more corrupt than surveys of business executives do, representing a bias ‘which does not seem consistent with random measurement error’. Efficiency relates to the variance of a measure when taken as an estimator: an increase in sample size will likely reduce the variance of a measure and thus capture a phenomenon more efficiently. But even King et al (1994 : 66) emphasise that these two properties involve a trade-off that is not always easily reconciled in pursuit of consistency: researchers may accept more bias in a measure if doing so yields larger improvements in efficiency. They do not elaborate further on consistency, although they relate it to reliability, which points towards the traditional criteria of measurement theory.
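The bias/efficiency trade-off described above can be made concrete with a small simulation. The sketch below is purely illustrative (the numbers and the "shrunken" estimator are invented, not taken from King et al): it shows that an estimator with a known bias can still achieve a lower mean squared error (MSE = bias² + variance) than an unbiased but noisier one.

```python
import random

# Hypothetical simulation of the bias/efficiency trade-off: a slightly
# biased estimator can beat an unbiased one on mean squared error when
# the bias it introduces is small relative to the variance it removes.

random.seed(42)
TRUE_VALUE = 2.0
NOISE_SD = 5.0

def evaluate(estimator, n_samples=10, n_reps=5000):
    """Return (bias, variance, mse) of `estimator` over repeated samples."""
    estimates = []
    for _ in range(n_reps):
        sample = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_samples)]
        estimates.append(estimator(sample))
    mean_est = sum(estimates) / len(estimates)
    bias = mean_est - TRUE_VALUE
    variance = sum((e - mean_est) ** 2 for e in estimates) / len(estimates)
    return bias, variance, bias ** 2 + variance

def sample_mean(s):        # unbiased, but noisy in small samples
    return sum(s) / len(s)

def shrunken_mean(s):      # biased towards zero, but lower variance
    return 0.8 * sum(s) / len(s)

for name, est in [("unbiased mean", sample_mean), ("shrunken mean", shrunken_mean)]:
    bias, var, mse = evaluate(est)
    print(f"{name}: bias={bias:+.2f} variance={var:.2f} mse={mse:.2f}")
```

With these parameters the shrunken estimator's squared bias is outweighed by its variance reduction, so its MSE comes out lower than that of the unbiased sample mean.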

‘… the criteria of validity and reliability remain the cornerstones of any discussions about measurement properties’.

This traditional approach of (psychometric) test or measurement theory usually provides social scientists with a framework for thinking about the properties of measures or data. That is, the criteria of validity and reliability remain the cornerstones of any discussion of measurement properties. Footnote 1 One can define reliability as the ‘agreement between two efforts to measure the same trait through maximally similar methods’ ( Campbell and Fiske, 1959 : 83). Usually, this translates into a test of the internal consistency of an indicator, or test-retest approaches that check whether the systematic variation of an observed phenomenon can be captured by an empirical measure at several points in time or across different (sub-)samples ( Nunnally and Bernstein, 1978 : 191). Validity represents a more demanding measurement criterion. A few authors have put forward conceptual approaches to the problems of constructing indices from the perspective of measurement validity (e.g., Bollen, 1989 ; Adcock and Collier, 2001 ). While measurement validity may be broadly defined as the achievement that ‘… scores (including the results of qualitative classification) meaningfully capture the ideas contained in the corresponding concept’ ( Adcock and Collier, 2001 : 530), it consists of various subcategories such as content, construct, internal/external and convergent/discriminant validity, and even touches upon more ambitious concepts such as ecological validity. These various dimensions also reflect a variety of sources of measurement error, whether stemming from the process of data collection (randomisation versus case selection), survey mode and origin of data, data operationalisation or the aggregation of different data sources.
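As a concrete illustration of the internal-consistency notion of reliability mentioned above, the following sketch computes Cronbach's alpha for a small set of invented item scores (the data and the scenario are hypothetical, not from any study cited here). Values near 1 indicate that the items consistently capture a common underlying trait.

```python
# Hypothetical sketch: internal consistency (Cronbach's alpha) for k items
# each scored by n respondents. The scores below are invented for illustration.

def cronbach_alpha(items):
    """items: list of k lists, each holding n respondent scores for one item."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent sums
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Three items that move together across five respondents -> high alpha.
items = [
    [2, 4, 4, 5, 3],
    [3, 4, 5, 5, 2],
    [2, 5, 4, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # → 0.886
```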

Three aspects require us to think harder about the feasibility of these classical concepts of measurement theory. First, the increasing availability of data for the computation or aggregation of macro indicators should improve the reliability of measurements. In fact, it seems that econometricians have completely abandoned the idea of measurement validity and instead focus on statistical techniques for aggregating data. For instance, a recent debate has yielded the impression that reliability remains the main goal to be established, while the concept of validity is not treated as equally important (see the discussion between Kaufmann et al (2010) and Thomas (2010) ). The problem with increasing the reliability of measures arises when validity is sacrificed to ‘methodological contamination’ ( Sullivan and Feldman, 1979 : 19), especially with regard to the notion that reliability ‘represents a necessary but not sufficient condition for validity’ ( Nunnally and Bernstein, 1978 : 192, italics in the original). Hence, aggregated or broadly defined measures that fail to discriminate between theoretically distinct concepts – concepts the initial approaches were never meant to measure – do not necessarily threaten reliability, but rather validity. This is especially the case in empirical tests of theoretical predictions regarding the determinants or consequences of certain politico-institutional conditions, where invalid measures are likely to generate biased coefficients due to measurement error among independent or even dependent variables ( Herrera and Kapur, 2007 ). To this end, results will subsequently lack generalisability.
For example, combining several reliable measures of the same phenomenon to increase the reliability of the aggregate measure can only claim to be unbiased if all underlying measures capture the same portion of systematic variation in the phenomenon and exclude random measurement error equally well. Testing theories with aggregate measures always comes with the caveat of introducing random measurement error into a measure that is supposed to represent only systematic variation in a phenomenon (see for instance Bollen, 2009 for a discussion), despite the measure being highly reliable.
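The biased-coefficient consequence of measurement error can be shown in a few lines. The simulation below is a generic, hypothetical illustration of classical errors-in-variables (not a reanalysis of any data discussed here): adding purely random error to an independent variable attenuates the estimated regression slope toward zero.

```python
import random

# Hypothetical simulation of attenuation bias: random measurement error in
# a regressor shrinks the OLS slope toward zero, even though the error is
# unsystematic. With var(x) = var(error) = 1, reliability is 0.5, so the
# estimated slope is roughly halved.

random.seed(1)
n = 20000
TRUE_SLOPE = 2.0

x = [random.gauss(0, 1) for _ in range(n)]
y = [TRUE_SLOPE * xi + random.gauss(0, 1) for xi in x]
x_noisy = [xi + random.gauss(0, 1) for xi in x]  # unreliable measure of x

def ols_slope(xs, ys):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(ols_slope(x, y))        # close to the true slope of 2.0
print(ols_slope(x_noisy, y))  # attenuated to roughly 2.0 * 0.5 = 1.0
```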

The potential trade-off between reliability and components of validity leads to the second aspect to keep in mind when thinking about measurement properties: a lack of validity may only bother researchers who follow a theory-driven approach to quantitative analysis. The shift towards a data-driven approach puts less emphasis on the underlying theory from which one derives hypotheses to be tested. Hypothesis testing may even be the least important aspect of statistical modelling ( Varian, 2014 : 5). Instead, the goals of data analysis are prediction and the forecasting of specific behaviours, events or outcomes based on large sets of data, prior knowledge or prior evidence. Given the large amounts of data available and the increasing computing capacity that has enabled the widespread use of Bayesian approaches and machine learning techniques in the social sciences (see Gelman et al, 2014 ; Jackman, 2009 ), one can argue that measurement properties rooted in a theory-driven perspective may lose their relevance. This shift implies a growing importance for concepts such as reliability or predictive validity, which sit closer to the data-driven approach. Footnote 2

The third challenge confronts comparative scholars working with individual-level data. Here, the extension and longevity of survey programmes such as the World Values Survey or the International Social Survey Programme (ISSP) have made the application of multilevel models for comparative cross-sectional and longitudinal analyses feasible ( Beck, 2007 ; Fairbrother, 2014 ). Given these opportunities, one core assumption is that measurement invariance holds across countries; that is, questionnaire items capture the same underlying concept across different contexts of data collection in a similar way. On the other hand, the theoretical emphasis on the contextuality of social phenomena creates a desire to reflect the idiosyncratic characteristics of a society within the subsequent measurement approaches.

This creates another trade-off for scholars within the respective research communities. As in the case of reliability and validity, contextually reliable measures can come with a lack of measurement invariance. Given that measurement invariance is tested via its discrepancy to some theoretical model, the shift to data-driven approaches may affect the importance of this particular measurement property in a similar fashion as illustrated for the relationship between reliability and validity.

We perceive this development as neither definitive nor one-dimensional. Measurement theory and concepts like validity remain crucial for evaluating and applying the right instruments and for knowing where to look when research questions are to be answered. That is, how to think about and assess the properties of data becomes one crucial aspect of any empirical endeavour. But these criteria seldom represent the only ones for assessing the characteristics of data. Our own work has concentrated on comparing different indices by their measurement properties ( Neumann and Graeff, 2010 , 2013 ). One conclusion from this work is that researchers face incentives that require decisions about how to cope with the aforementioned trade-offs when measures from comparative data are applied.

THE EDITED SPECIAL ISSUE

Despite the known problems with comparative data, only a few of these questions have been answered, and the constant stream of new indicators raises new challenges for current comparative research. Some key problems can be summarised as follows: How can the contextuality of measuring country characteristics be accounted for while maintaining comparability? What are the consequences when prior knowledge and existing empirical findings are incorporated into the derivation of existing and new indicators? How can the accuracy of an index be assessed, and how is accuracy even to be defined or measured in a measurement sense?

This edited issue comprises papers in which the properties of the applied aggregate data and their underlying sources are explicitly reflected upon. As the authors bring different methodological backgrounds, the papers apply a variety of contemporary approaches to reliability and validity, which do not always coincide with a psychometric notion of constructs or measurement criteria. The authors do not, however, fall prey to typical publication strategies such as reporting only significant and/or theoretically congruent results instead of null results ( Gelman and Loken, 2014 ). All papers share the ambition to accurately reflect the underlying theoretical meaning of the constructs of interest, and each addresses the key questions above in its own way.

Susanne Pickel et al (2015 ) present a new framework for comparative social scientists that tackles one of the most prominent topics in political research: the quality of democracy. In particular, the authors propose a framework for assessing the measurement properties of three prominent indices of the quality of democracy. This evaluative process requires integrating theoretical considerations about the definitional clarity and validity of the underlying concepts with empirical concerns about the choice of data sources and the procedures of operationalisation and aggregation. Their contribution picks up several important points for the measurement of macro phenomena. First, although definitions of a concept, and hence concept validity, may vary between researchers or research schools, an assessment of measurement properties remains tied to rather objective criteria like reliability, transparency, parsimony or replicability. Second, the assessment of a concept and its measurement characteristics ultimately faces the challenge of capturing the contextual characteristics of a political system as closely as possible while adhering to more general measurement principles. The latter represents a task for researchers who want to investigate the comparability of indices. Pickel et al apply a framework of twenty criteria to three indices of quality of democracy. The authors state that a theory-based conceptualisation is the necessary condition for facing the (potential) trade-off between the adequacy of a measure and its comparability with other measures in a meaningful way.

Mark David Nieman and Jonathan Ring (2015 ) pick up another big topic of political research: human rights. Their starting point is that all researchers dealing with country data on human rights have to rely on a restricted number of data sources. The Cingranelli-Richards (CIRI) index and the Political Terror Scale (PTS) are two widely used indices that are both constructed from the same country reports on human rights violations issued by the United States State Department and Amnesty International. Their main concern is that if data sources share systematic measurement error, for instance due to politico-ideological or geopolitical bias in the country reports, these properties will likely be reflected in the indices constructed from them. After clarifying why the reports of the US State Department possess such undesirable measurement properties, they propose specific remedies, discussing possible solutions such as data truncation as well as strategies for correcting systematic bias using an instrumental variable approach. Their replication analysis reveals that applying the corrected version indeed changes results from prior analyses. Their work highlights the importance of decisions made during indicator choice and subsequent analysis, where some of these choices and their inferential consequences pose conflicting incentives for researchers, given the publication bias favouring statistically significant findings ( Brodeur et al, 2012 ).

Joakim Kreutz (2015) also scrutinises the methodological foundations of the PTS and CIRI. By referring to both indices, he tries to clarify the connection between human rights and the level of state repression in eighteen West African countries. But instead of focusing on repression levels, Kreutz focuses on changes in repression. By highlighting the importance of repression dynamics, he extends prior evidence on the connection of state repression and politico-institutional factors. From a measurement perspective, disaggregating levels of repression by the direction of change (increase/decrease) and by the nature of repressive actions (indiscriminate, selective targeting) may improve our understanding of the contextual features of repression dynamics. His study provides several implications for current research efforts that try to disentangle the relationship between levels of democracy and state repression.

Alexander Schmotz identifies a gap in the political science literature concerning the measurement of co-optation, the process by which non-members are absorbed by a ruling elite. Concepts of co-optation are particularly important for explaining the persistence of autocratic regimes. As such, issues of co-optation are at the heart of political science research but are only seldom operationalised, especially across time. Schmotz develops an index capable of measuring several threats to autocratic regimes from social pressure groups; co-optation is a way of dealing with these threats. This topic illustrates a general problem in social science research, namely that theoretical ideas, their predictions about causes and effects, and their testing in empirical research are often intertwined. In such a situation, measurement quality (e.g., content validity) is also related to the performance of the index, in particular if the concept of co-optation refers to a ‘seemingly unrelated set of indicators’ ( Schmotz, 2015 ). Counterintuitive findings are then of particular importance, as in Schmotz's study: he concludes that the concept of co-optation might not be as important as the relevant literature suggests. Such a finding – based on a new index with the potential for testing and improving its measurement features – will stimulate discussion in this field and will most likely lead to refinements of theoretical ideas and their operationalisations.

Barbara Bechter and Bernd Brandl (2015 ) start from the observation that comparative research is mainly based on aggregates at the national level. This ‘methodological nationalism’ comes to a dead end if the variance between countries on the variable of interest vanishes (as typically occurs for political regime indicators among western countries, such as the Polity index). They provide an excellent example of accounting for the contextuality of comparative measures: in the field of industrial relations, they find that relevant variables vary more across industrial sectors than across countries. This does not render cross-country comparisons meaningless. Rather, it opens the perspective to alternative levels of analysis, not only in the field of industrial relations.

William Pollock, Jason Barabas, Jennifer Jerit, Martijn Schoonvelde, Susan Banducci and Daniel Stevens ( 2015 ) introduce their study of media effects with the statement that results from analyses of the effect of media exposure on attitudes or public opinion are affected by ‘data issues related to the number of observations, the timing of the inquiry, and (most importantly) the design choices that lead to alternative counterfactuals’ ( Pollock et al, 2015 ). In an attempt to provide a comprehensive overview, two identification strategies for causal claims from cross-country or single-country survey data (a difference-in-differences estimator versus a within-survey/within-subject design) are compared with a traditional approach of statistical inference from regression analyses. Using the European Social Survey and information about media-related events during the data collection process allows them to investigate media effects of political or economic events across countries, across types and numbers of events, and across time. With a focus on the external validity of such (quasi-)experimental uses of survey data, they generate partly counterintuitive results regarding the impact of sample size and design effects. Their study emphasises that the process of data collection and design choices have an important impact on subsequent data analyses.
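The difference-in-differences logic behind one of these identification strategies can be sketched with hypothetical numbers (these figures are invented for illustration, not taken from the Pollock et al study): the before/after change among respondents exposed to a media event is compared with the change among the unexposed, netting out any time trend shared by both groups.

```python
# Hypothetical difference-in-differences sketch. Mean outcome (say, an
# attitude scale from 0 to 10) by group and period; the control group's
# change serves as the counterfactual trend for the exposed group.

exposed_before, exposed_after = 5.1, 4.2
control_before, control_after = 5.0, 4.8

did = (exposed_after - exposed_before) - (control_after - control_before)
print(round(did, 2))  # → -0.7: the event's effect net of the shared trend
```

The naive before/after comparison for the exposed group alone (-0.9) would overstate the effect, because part of that drop (-0.2) also occurred among the unexposed.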

By referring to psychometric techniques, Jan Cieciuch et al (2015 ) raise the question of how to test measurement invariance reliably. As a precondition for comparing data, measurement invariance can be established at the level of theoretical constructs (or latent variables), at the level of relations between the theoretical constructs and their indicators, or at the level of the indicators themselves. Standard methods for pinpointing measurement invariance based on factor-analytic techniques are prone to produce false inferences due to model misspecification. Cieciuch and his colleagues pick up the discussion in the literature about model misspecification and show how one can assess whether a certain level of measurement invariance is obtained. Since misspecification must be considered a matter of degree, their study stimulates discussion of how much misspecification is acceptable.

Footnote 1: King et al (1994: 25) note early on that achieving reliability and validity represents a key goal in any social inquiry, whether qualitative or quantitative in nature.

Footnote 2: This change does not imply a shift from deductive to inductive reasoning from data to theories, because researchers remain bound to deriving their results from a theoretical framework. The nomological core of the data-driven approach stems from the distributive characteristics of different probability distributions. See Gelman and Shalizi (2014) for more details on this line of reasoning.

Adcock, R. and Collier, D. (2001) ‘Measurement validity: A shared standard for qualitative and quantitative research’, American Political Science Review 95 (3): 529–546.


Beck, N. (2007) ‘From statistical nuisances to serious modeling: Changing how we think about the analysis of time-series–cross-section data’, Political Analysis 15 (2): 97–100. doi:10.1093/pan/mpm001.

Bechter, B. and Brandl, B. (2015) ‘Measurement and analysis of industrial relations aggregates: What is the relevant unit of analysis in comparative research?’ European Political Science 14(4): 422–438.

Bollen, K.A. (1989) Structural Equations with Latent Variables, New York, NY: Wiley.


Bollen, K.A. (2009) ‘Liberal democracy series I, 1972–1988: Definition, measurement, and trajectories’, Electoral Studies 28 (3): 368–374.

Brodeur, A., Lé, M., Sangnier, M. and Zylberberg, Y. (2012) ‘Star wars: The empirics strike back’, Paris School of Economics Working Paper 2012–29.

Campbell, D.T. and Fiske, D.W. (1959) ‘Convergent and discriminant validation by the multitrait-multimethod matrix’, Psychological Bulletin 56 (2): 81–105.

Cieciuch, J., Davidov, E., Oberski, D.L. and Algersheimer, R. (2015) ‘Testing for measurement invariance by detecting local misspecification and an illustration across online and paper-and-pencil samples’, European Political Science 14(4): 521–538.

Fairbrother, M. (2014) ‘Two multilevel modeling techniques for analyzing comparative longitudinal survey datasets’, Political Science Research and Methods 2 (1): 119–140.

Gelman, A., Carlin, J., Stern, H., Dunson, D.B., Vehtari, A. and Rubin, D. (2014) Bayesian Data Analysis, 3rd edn. London: CRC Press.


Gelman, A. and Shalizi, C. (2014) ‘Philosophy and the practice of Bayesian statistics’, British Journal of Mathematical and Statistical Psychology 66 (1): 8–38.

Gelman, A. and Loken, E. (2014) ‘The statistical crisis in science data-dependent analysis – a ‘garden of forking paths’ – explains why many statistically significant comparisons don't hold up’, American Scientist 102 (6): 460. doi:10.1511/2014.111.460.

Gerring, J. (2001) Social Science Methodology: A Criterial Framework, Cambridge: Cambridge University Press.

Hawken, A. and Munck, G.L. (2011) ‘Does the evaluator make a difference? Measurement validity in corruption research’, working paper.

Herrera, Y.M. and Kapur, D. (2007) ‘Improving data quality: Actors, incentives, and capabilities’, Political Analysis 15 (4): 365–386.

Jackman, S. (2009) Bayesian Analysis for the Social Sciences, New York: John Wiley & Sons.

Kaufmann, D., Kraay, A. and Mastruzzi, M. (2010) ‘Response to ‘what do the worldwide governance indicators measure?’’, European Journal of Development Research 22 (1): 55–58.

King, G., Keohane, R.O. and Verba, S. (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research, Princeton, NJ: Princeton University Press.

Kreutz, J. (2015) ‘Separating dirty war from dirty peace: Revisiting the conceptualization of state repression in quantitative data’, European Political Science 14(4): 458–472.

Mudde, C. and Schedler, A. (2010) ‘Introduction: Rational data choice’, Political Research Quarterly 63 (2): 410–416.

Neumann, R. and Graeff, P. (2010) ‘A multitrait-multimethod approach to pinpoint the validity of aggregated governance indicators’, Quality & Quantity 44 (5): 849–864.

Neumann, R. and Graeff, P. (2013) ‘Method bias in comparative research: Problems of construct validity as exemplified by the measurement of ethnic diversity’, Journal of Mathematical Sociology 37 (2): 85–112.

Nieman, M.D. and Ring, J.J. (2015) ‘The construction of human rights: Accounting for systematic bias in common human rights measures’, European Political Science 14(4): 473–495.

Nunnally, J.C. and Bernstein, I.H. (1978) Psychometric Theory, New York: McGraw-Hill.

Pickel, S., Stark, T. and Breustedt, W. (2015) ‘Assessing the quality of quality measures of democracy: a theoretical framework and its empirical application’, European Political Science 14(4): 496–520.

Pollock, W., Barabas, J., Jerit, J., Schoonvelde, M., Banducci, S. and Stevens, D. (2015) ‘Studying media events in the European social surveys across research designs, countries, time, issues, and outcomes’, European Political Science 14(4): 394–421.

Schmotz, A. (2015) ‘Vulnerability and compensation – Constructing an index of co-optation in autocratic regimes’, European Political Science 14(4): 439–457.

Solt, F. (2014) ‘The Standardized World Income Inequality Database‘, Working paper. SWIID Version 5.0, October 2014. http://myweb.uiowa.edu/fsolt/index.html .

Sullivan, J.L. and Feldman, S. (1979) ‘Multiple indicators – An introduction‘ Sage University Paper series in Quantitative Applications in the Social Sciences No. 07–15, Beverly Hills and London: Sage.

Thomas, M. (2010) ‘What do the worldwide governance indicators measure?’ European Journal of Development Research 22 (1): 31–54.

Varian, H.R. (2014) ‘Big data: New tricks for econometrics’, The Journal of Economic Perspectives 28 (2): 3–28.


Acknowledgements

Parts of this Special Issue follow upon the symposium ‘The Quality of Measurement – Validity, Reliability and its Ramifications for Multivariate Modelling in Social Sciences’ held at Technische Universität Dresden from 21 to 22 September 2012. Videos of the presentations from the Symposium can be accessed through the website of the symposium at http://tinyurl.com/vwmeasurement . This symposium was financed by the Volkswagen Foundation, which supported the publication of this special issue as well. We thank all participants of the symposium for their remarks and contributions. Foremost, we thank the Volkswagen Foundation for their financial support.

Author information

Authors and Affiliations

Technische Universität Dresden, Dresden, 01069, Germany

Robert Neumann

University of Kiel, Christian-Albrechts-Platz 4, Kiel, 24118, Germany

Peter Graeff


Corresponding author

Correspondence to Robert Neumann.

Additional information

The online version of this article is available Open Access

Rights and permissions

This work is licensed under a Creative Commons Attribution 3.0 Unported License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/


About this article

Neumann, R., Graeff, P. Quantitative approaches to comparative analyses: data properties and their implications for theory, measurement and modelling. Eur Polit Sci 14, 385–393 (2015). https://doi.org/10.1057/eps.2015.59


Published : 06 November 2015

Issue Date : 01 December 2015

DOI : https://doi.org/10.1057/eps.2015.59


  • reliability
  • measurement
  • quantitative analysis
  • comparative politics
  • comparative sociology


BRADFORD SCHOLARS


A Comparative Analysis of Qualitative and Quantitative Research Methods and a Justification for Adopting Mixed Methods in Social Research.



Comparative Analysis of Qualitative and Quantitative Research

  • Master of Library and Information Science

Description

There is no hard and fast rule for qualitative versus quantitative research, and the distinction is often taken for granted. It is argued here that the divide between qualitative and quantitative research is ambiguous, incoherent, and hence of little value, and that its widespread use could have negative consequences. This conclusion is supported by several arguments. Qualitative researchers, for example, hold differing views on fundamental issues (such as the use of quantification and causal analysis), which makes the distinction itself shaky. In addition, many elements of qualitative and quantitative research overlap significantly, making it difficult to draw a clear line between the two. In practice, as the study points out, qualitative and quantitative approaches cannot be clearly distinguished in field research. The distinction may also limit innovation in the development of new research methodologies, and cause confusion and wasted effort. As a general rule, it may be preferable not to conceptualise research approaches at levels as abstract as "qualitative" and "quantitative" methodologies. Discussing the benefits and drawbacks of specific research methods, rather than these general categories, is recommended.




  • Open access
  • Published: 23 September 2023

Educational interventions targeting pregnant women to optimise the use of caesarean section: What are the essential elements? A qualitative comparative analysis

  • Rana Islamiah Zahroh   ORCID: orcid.org/0000-0001-7831-2336 1 ,
  • Katy Sutcliffe   ORCID: orcid.org/0000-0002-5469-8649 2 ,
  • Dylan Kneale   ORCID: orcid.org/0000-0002-7016-978X 2 ,
  • Martha Vazquez Corona   ORCID: orcid.org/0000-0003-2061-9540 1 ,
  • Ana Pilar Betrán   ORCID: orcid.org/0000-0002-5631-5883 3 ,
  • Newton Opiyo   ORCID: orcid.org/0000-0003-2709-3609 3 ,
  • Caroline S. E. Homer   ORCID: orcid.org/0000-0002-7454-3011 4 &
  • Meghan A. Bohren   ORCID: orcid.org/0000-0002-4179-4682 1  

BMC Public Health volume  23 , Article number:  1851 ( 2023 ) Cite this article


Caesarean section (CS) rates are increasing globally, posing risks to women and babies. To reduce CS, educational interventions targeting pregnant women have been implemented globally; however, their effectiveness varies. To optimise the benefits of these interventions, it is important to understand which intervention components influence success. In this study, we aimed to identify essential intervention components that lead to successful implementation of interventions focusing on pregnant women to optimise CS use.

We re-analysed existing systematic reviews that were used to develop and update WHO guidelines on non-clinical interventions to optimise CS. To identify whether certain combinations of intervention components (e.g. how the intervention was delivered, and contextual characteristics) are associated with successful implementation, we conducted a Qualitative Comparative Analysis (QCA). We defined successful interventions as those that reduced CS rates. We included 36 papers, comprising 17 CS intervention studies and an additional 19 sibling studies (e.g. secondary analyses, process evaluations) reporting on these interventions, to identify intervention components. We conducted QCA in six stages: 1) identifying conditions and calibrating the data; 2) constructing truth tables; 3) checking the quality of truth tables; 4) identifying parsimonious configurations through Boolean minimisation; 5) checking the quality of the solution; and 6) interpreting the solutions. We used existing published qualitative evidence syntheses to develop potential theories driving intervention success.

We found successful interventions were those that leveraged social or peer support through group-based intervention delivery, provided communication materials to women, encouraged emotional support by partner or family participation, and gave women opportunities to interact with health providers. Unsuccessful interventions were characterised by the absence of at least two of these components.

We identified four key essential intervention components which can lead to successful interventions targeting women to reduce CS: 1) group-based delivery, 2) provision of information, education and communication (IEC) materials, 3) partner or family member involvement, and 4) opportunities for women to interact with health providers. Maternal health services and hospitals aiming to better prepare women for vaginal birth and reduce CS can consider including the identified components to optimise health and well-being benefits for the woman and baby.


Introduction

In recent years, caesarean section (CS) rates have increased globally [ 1 , 2 , 3 , 4 ]. CS can be a life-saving procedure when vaginal birth is not possible; however, it comes with higher risks both in the short- and long-term for women and babies [ 1 , 5 ]. Women with CS have increased risks of surgical complications, complications in future pregnancies, subfertility, bowel obstruction, and chronic pain [ 5 , 6 , 7 , 8 ]. Similarly, babies born through CS have increased risks of hypoglycaemia, respiratory problems, allergies and altered immunity [ 9 , 10 , 11 ]. At a population level, CS rates exceeding 15% are unlikely to reduce mortality rates [ 1 , 12 ]. Despite these risks, an analysis across 154 countries reported a global average CS rate of 21.1% in 2018, projected to increase to 28.5% by 2030 [ 3 ].

There are many reasons for the increasing CS rates, and these vary between and within countries. Increasingly, non-clinical factors across different societal dimensions and stakeholders (e.g. women and communities, health providers, and health systems) are contributing to this increase [ 13 , 14 , 15 , 16 , 17 ]. Women may prefer CS over vaginal birth due to fear of labour or vaginal birth, previous negative experience of childbirth, perceived increased risks of vaginal birth, beliefs about an auspicious or convenient day of birth, or beliefs that caesarean section is safer, quick, and painless compared to vaginal birth [ 13 , 14 , 15 ].

Interventions targeting pregnant women to reduce CS have been implemented globally. A Cochrane intervention review synthesized evidence from non-clinical interventions targeting pregnant women and family, providers, and health systems to reduce unnecessary CS, and identified 15 interventions targeting women [ 18 ]. Interventions targeting women primarily focused on improving women’s knowledge around birth, improving women’s ability to cope during labour, and decreasing women’s stress related to labour through childbirth education, and decision aids for women with previous CS [ 18 ]. These types of interventions aim to reduce the concerns of pregnant women and their partners around childbirth, and prepare them for vaginal birth.

The effectiveness of interventions targeting women in reducing CS is mixed [ 18 , 19 ]. Plausible explanations for this limited success include the multifactorial nature of the drivers of increasing CS, as well as the contextual characteristics of the interventions, including the study environment, participant characteristics, intensity of exposure to the intervention, and method of implementation. Understanding which intervention components are essential to intervention success helps to optimise benefits. This study used a Qualitative Comparative Analysis (QCA) approach to re-analyse evidence from existing systematic reviews to identify essential intervention components that lead to the successful implementation of non-clinical interventions focusing on pregnant women to optimise the use of CS. Updating and re-analysing existing systematic reviews using new analytical frameworks may help to explore heterogeneity in effects and ascertain why some studies appear to be effective while others are not.

Data sources, case selection, and defining outcomes

Developing a logic model

We developed a logic model to guide our understanding of different pathways and intervention components potentially leading to successful implementation (Additional file 1 ). The logic model was developed based on published qualitative evidence syntheses and systematic reviews [ 18 , 20 , 21 , 22 , 23 , 24 ]. The logic model depicts the desired outcome of reduced CS rates in low-risk women (at the time of admission for birth, these women are typically represented by Robson groups 1–4 [ 25 ] and are women with term, cephalic, singleton pregnancies without a previous CS) and works backwards to understand what inputs and processes are needed to achieve the desired outcome. Our logic model shows multiple pathways to success and highlights the interactions between different levels of factors (women, providers, societal, health system) (Additional file 1 ). Based on the logic model, we have separated our QCA into two clusters of interventions: 1) interventions targeting women, and 2) interventions targeting health providers. The results of analysis on interventions targeting health providers have been published elsewhere [ 26 ]. The logic model was also used to inform the potential important components that influence success.

Identifying data sources and selecting cases

We re-analysed the systematic reviews which were used to inform the development and update of World Health Organization (WHO) guidelines. In 2018, WHO issued global guidance on non-clinical interventions to reduce unnecessary CS, with interventions designed to target three different levels or stakeholders: women, health providers, and health systems [ 27 ]. As part of the guideline recommendations, a series of systematic reviews about CS interventions were conducted: 1) a Cochrane intervention review of effectiveness by Chen et al. (2018) [ 18 ] and 2) three qualitative evidence syntheses exploring key stakeholder perspectives and experiences of interventions focusing on women and communities, health professionals, and health organisations, facilities and systems by Kingdon et al. (2018) [ 20 , 21 , 22 ]. Later on, Opiyo and colleagues (2020) published a scoping review of financial and regulatory interventions to optimise the use of CS [ 23 ].

Therefore, the primary data sources of this QCA are the intervention studies included in Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ]. We used these two systematic reviews not only because they are comprehensive, but also because they were used to inform the development of the WHO guidelines. A single intervention study is referred to as a "case". Eligible cases were intervention studies that focused on pregnant women and aimed to reduce or optimise the use of CS. No restrictions on study design were imposed in the QCA. Therefore, we also assessed the eligibility of intervention studies excluded from Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ] due to ineligible study designs (such as cohort studies, uncontrolled before-and-after studies, and interrupted time series with fewer than three data points), as these studies could potentially reveal other pathways to successful implementation. We complemented these intervention studies with additional intervention studies published since the last review updates in 2018 and 2020, to include studies likely to meet the review inclusion criteria for future updates. No further search was conducted, as QCA is suited to a medium number of cases (approximately 10–50), and including more studies could threaten the rigour of the analysis [ 28 ].

Once eligible studies were selected, we searched for their 'sibling studies'. Sibling studies are studies linked to the included intervention studies, such as formative research or process evaluations, which may have been published separately. They can provide valuable additional information about study context, intervention components, and implementation outcomes (e.g. acceptability, fidelity, adherence, dosage) that may not be well described in a single article about intervention effectiveness. We searched for sibling studies in three steps: 1) a reference list search of the intervention studies included in Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ]; 2) a reference list search of the qualitative studies included in the Kingdon et al. (2018) reviews [ 20 , 21 , 22 ]; and 3) a forward reference search of the intervention studies (through the "Cited by" function) in Scopus and Web of Science. Sibling studies were included if they contained any information on intervention components or implementation outcomes, regardless of the methodology used. One author (RIZ) screened the studies independently, and a second author (MAB) double-checked 10% of the screening. Disagreements were discussed until consensus was reached, involving the rest of the author team where needed.

Defining outcomes

We assessed all outcomes related to the mode of birth in the studies included in the Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ] reviews. Based on the consistency of outcome reporting, we selected "overall CS rate" as the primary outcome of interest. We had planned to rank the rate ratios across studies to select the 10 most successful and the 10 least successful intervention studies. However, due to heterogeneity in how CS outcomes were reported (e.g. odds ratios, rate ratios, percentages across different intervention stages), the final categorisation is based on whether the CS rate decreased (successful, coded as 1) or increased or did not change (unsuccessful, coded as 0), judged using the precision of the confidence interval or the p-value.

Assessing risk of bias in intervention studies

All intervention studies eligible for inclusion were assessed for risk of bias. All studies included in Chen et al. (2018) and Opiyo et al. (2020) already had risk of bias assessed and reported [ 18 , 23 ], and we used these assessments. Additional intervention studies beyond those included in these reviews were assessed using the same tools, depending on the type of evidence (two randomised controlled trials and one uncontrolled before-and-after study); details of the risk of bias assessment results can be found in Additional file 2. We excluded studies with a high risk of bias, both to ensure that the analysis was based on high-quality studies and to limit the overall number of studies so that researchers could develop deep case knowledge.

Qualitative comparative analysis (QCA)

QCA was first developed and used in political sciences and has since been extended to systematic reviews of complex health interventions [ 24 , 29 , 30 , 31 ]. Despite the term “qualitative”, QCA is not a typical qualitative analysis, and is often conceptualised as a methodology that bridges qualitative and quantitative methodologies based on its process, data used and theoretical standpoint [ 24 ]. Here, QCA is used to identify if certain configurations or combinations of intervention components (e.g. participants, types of interventions, contextual characteristics, and intervention delivery) are associated with the desired outcome [ 31 ]. These intervention components are referred to as “conditions” in the QCA methodology. Whilst statistical synthesis methods may be used to examine intervention heterogeneity in systematic reviews, such as meta-regression, QCA is a particularly suitable method to understand complex interventions like those aiming to optimise CS, as it allows for multiple overlapping pathways to causality [ 31 ]. Moreover, QCA allows the exploration of different combinations of conditions, rather than relying on a single condition leading to intervention effectiveness [ 31 ]. Although meta-regression allows for the assessment of multiple conditions, a sufficient number of studies may not be available to conduct the analysis. In complex interventions, such as interventions aiming to optimise the use of CS, single condition or standard meta-analysis may be less likely to yield usable and nuanced information about what intervention components are more or less likely to yield success [ 31 ].

QCA uses 'set theory' to systematically compare characteristics of the cases (e.g. interventions, in the case of systematic reviews) in relation to the outcomes [ 31 , 32 ]. This means QCA compares the characteristics of the successful 'cases' (e.g. interventions that are effective) to those of the unsuccessful 'cases' (e.g. interventions that are not effective). The comparison is conducted using a scoring system based on 'set membership' [ 31 , 32 ]. In this scoring, conditions and outcomes are coded based on the extent to which a certain feature is present or absent, to form set membership scores [ 31 , 32 ]. There are two scoring systems in QCA: 1) crisp set QCA (csQCA) and 2) fuzzy set QCA (fsQCA). csQCA assigns binary scores of 0 ("fully out" of set membership for cases with certain conditions) and 1 ("fully in"), while fsQCA assigns ordinal scores to conditions and outcomes, permitting partial membership scores between 0 and 1 [ 31 , 32 ]. For example, using fsQCA we may assign a five-level scoring system (0, 0.33, 0.5, 0.67, 1), where 0.33 indicates "more out" than "in", 0.67 indicates "more in" than "out", and 0.5 indicates ambiguity (i.e. a lack of information about whether a case was "in" or "out") [ 31 , 32 ]. In our analysis, we used a combination of csQCA and fsQCA to calibrate the data. This approach was necessary because some conditions were better suited to binary scoring using csQCA, while others were more complex, depending on the distribution of cases, and required fsQCA to capture the necessary information. In the final analysis, however, all conditions used the csQCA scoring system.
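To make the two scoring systems concrete, here is a minimal Python sketch of calibration. The study itself worked in R, and the helper names and the nearest-level rounding rule below are illustrative assumptions, not the authors' code:

```python
# Illustrative calibration helpers (hypothetical names, not the study's code).

def calibrate_crisp(present: bool) -> float:
    """csQCA: binary membership -- 1 = 'fully in', 0 = 'fully out'."""
    return 1.0 if present else 0.0

def calibrate_fuzzy(raw: float, levels=(0.0, 0.33, 0.5, 0.67, 1.0)) -> float:
    """fsQCA: snap a raw 0-1 score to the nearest of five membership levels."""
    return min(levels, key=lambda level: abs(level - raw))

print(calibrate_crisp(True))   # fully in the set
print(calibrate_fuzzy(0.4))    # "more out than in"
```

Snapping to the nearest level is only one simple rule; as described above, the actual calibration criteria were defined explicitly, emerging from the literature and the cases themselves.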

Two relationships can be investigated using QCA [ 24 , 31 ]. First, if all instances of successful interventions share the same condition(s), this suggests these conditions are 'necessary' to trigger successful outcomes [ 24 , 31 ]. Second, if all instances of a particular condition are associated with successful interventions, this suggests the condition is 'sufficient' to trigger successful outcomes [ 24 , 31 ]. In this QCA, we were interested in exploring the relationship of sufficiency: that is, assessing the various combinations of intervention components that can trigger successful outcomes. We focused on sufficiency because our logic model (explained further below) highlighted multiple pathways that can lead to a CS, and different interventions that may optimise the use of CS along those pathways, suggesting it would be unlikely for all successful interventions to share the same conditions. We calculated the degree of sufficiency using consistency measures, which evaluate how frequently conditions are present when the desired outcome is achieved [ 31 , 32 ]. Conditions with a consistency score of at least 0.8 were considered sufficient to trigger successful interventions [ 31 , 32 ]. At present, no reporting guideline exists for the re-analysis of systematic reviews using QCA; however, CARU-QCA is currently being developed for this purpose [ 33 ]. QCA was conducted using R with a package developed by Thiem & Duşa (2013) and the QCA with R guidebook [ 32 ], in six stages based on Thomas et al. (2014) [ 31 ], as explained below.
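The consistency measure for sufficiency can be written out directly. A minimal sketch, assuming the standard set-theoretic formula sum(min(X_i, Y_i)) / sum(X_i) over condition (X) and outcome (Y) membership scores; the study's own computation was done in R:

```python
# Hedged sketch of the sufficiency consistency measure (not the study's code).
def consistency(condition, outcome):
    """sum(min(X_i, Y_i)) / sum(X_i) over membership scores.
    With crisp 0/1 scores this reduces to the share of cases holding
    the condition that also achieve the outcome."""
    numerator = sum(min(x, y) for x, y in zip(condition, outcome))
    denominator = sum(condition)
    return numerator / denominator if denominator else 0.0

# Five cases hold the condition; four of them were successful.
print(consistency([1, 1, 1, 1, 1], [1, 1, 1, 1, 0]))  # 0.8, the threshold used
```

Because the formula takes the minimum of the two memberships, it works unchanged for fuzzy scores, penalising cases whose condition membership exceeds their outcome membership.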

QCA stage 1: Identifying conditions, building data tables and calibration

We used a deductive and inductive process to determine the potential conditions (intervention components) that may trigger successful implementation. Conditions were first derived deductively from the logic model (Additional file 1 ). We then added conditions inductively, using Intervention Component Analysis of the intervention studies [ 34 ] and qualitative evidence ("views") synthesis [ 22 ] following Melendez-Torres's (2018) approach [ 35 ]. Intervention Component Analysis is a methodological approach that examines factors affecting implementation through the trialists' reflections, typically presented in the discussion section of a published trial [ 34 ]. Examples of conditions identified through Intervention Component Analysis include use of an individualised approach, interaction with health providers, policies that encourage CS, and acknowledgement of women's previous birth experiences. After consolidating or merging similar conditions, a total of 52 conditions were selected, extracted from each included intervention, and analysed in this QCA (details of the conditions and the definitions generated for this study can be found in Additional files 3 and 4 ). We adapted the coding framework from Harris et al. (2019) [ 24 ], using its coding rules and six domains to organise the 52 conditions and make sense of the data. These six domains are broadly classified as: 1) context and participants, 2) intervention design, 3) program content, 4) method of engagement, 5) health system factors, and 6) process outcomes.

One author (RIZ) extracted data relevant to the conditions for each included study into a data table, which was then double-reviewed by two other authors (MVC, MAB). The data table is a matrix in which each case is represented in a row and each condition in a column. Following data extraction, calibration rules using either csQCA or fsQCA (e.g. group-based intervention delivery condition: yes = 1 (present), no = 0 (absent)) were developed in consultation with all authors. We developed a table listing the conditions and the rules for coding them, by either direct or transformational assignment of quantitative and qualitative data [ 24 , 32 ] (Additional file 3 depicts the calibration rules). The data tables were then calibrated by applying scores, to explore the extent to which interventions have 'set membership' with the outcome or conditions of interest. During this iterative process, the calibration criteria were explicitly defined, emerging from the literature and the cases themselves. It is important to note that maximum ambiguity is typically scored as 0.5 in QCA; however, we decided it was more appropriate to assume that if a condition was not reported, it was unlikely to be a feature of the intervention, so we treated "not reported" as absence and coded it 0.

QCA stage 2: Constructing truth tables

Truth tables are an analytical tool used in QCA to analyse associations between configurations of conditions and outcomes. Whereas the data table represents individual cases (rows) and individual conditions (columns), the truth table synthesises these data to examine configurations, with each row representing a different configuration of conditions. The columns indicate a) which conditions feature in the configuration in that row, b) how many cases are represented by that configuration, and c) their association with the outcome.
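Under crisp calibration, a truth table of this shape can be built mechanically from the data table. A minimal Python sketch with invented two-condition configurations (the study used the R QCA package, not this code):

```python
# Hedged sketch of truth-table construction (not the study's R workflow).
from collections import defaultdict

def truth_table(cases):
    """cases: list of (configuration, outcome) pairs with crisp 0/1 scores.
    Returns {configuration: (number of cases, share with outcome = 1)}."""
    rows = defaultdict(lambda: [0, 0])
    for configuration, outcome in cases:
        rows[configuration][0] += 1          # case count for this row
        rows[configuration][1] += outcome    # successful cases in this row
    return {cfg: (n, successes / n) for cfg, (n, successes) in rows.items()}

# Two illustrative conditions, e.g. (group-based delivery, partner involvement)
cases = [((1, 0), 1), ((1, 0), 1), ((0, 1), 0)]
tt = truth_table(cases)
print(tt)
```

Configurations with no observed cases do not appear as rows; these are the "logical remainders" discussed in stages 2 and 4.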

We first constructed truth tables based on context and participants, intervention design, program content, and method of engagement; however, no configurations triggering successful interventions were observed. Instead, we observed limited diversity, meaning there were many configurations unsupported by cases, likely due to the presence of too many conditions in the truth tables. We used the learning from these truth tables to return to the literature and explore potential explanatory theories about which conditions participants and trialists consider important in triggering successful interventions (adhering to the 'utilisation of views' perspective [ 35 ]). Through this process, we found that women and communities liked learning new information about childbirth, and desired emotional support from partners and health providers while learning [ 22 ]. They also appreciated educational interventions that provided opportunities for discussion and dialogue with health providers and that aligned with current clinical practice and advice from health providers [ 22 ]. Therefore, three models of truth tables were iteratively constructed based on three hypothesised theories about how the interventions should be delivered: 1) how birth information was provided to women; 2) how emotional support was provided to women (including interactions between women and providers); and 3) a consolidated model examining the interactions of important conditions identified from models 1 and 2. We also conducted a sub-analysis of interventions targeting both women and health providers or systems ('multi-target interventions'), to explore whether conditions similar to those in the women-only components also triggered success in multi-target interventions. Table 1 presents the list of truth tables that were iteratively constructed and refined.

QCA stage 3: Checking quality of truth tables

We iteratively developed and improved the quality of the truth tables by checking the configurations of successful and unsuccessful interventions, as recommended by Thomas et al. (2014) [ 31 ]. This included assessing the number of studies clustering to each configuration and exploring the presence of contradictory results between successful and unsuccessful interventions. We found contradictory configurations across the five truth tables, which were resolved by considering the theoretical perspectives and iteratively refining the truth tables.

QCA stage 4: Identifying parsimonious configurations through Boolean minimization

Once we determined that the truth tables were suitable for further analysis, we used Boolean minimisation to explore pathways resulting in successful intervention through the configurations of different conditions [ 31 ]. We simplified the “complex solution” of the pathways to a “parsimonious solution” and an “intermediate solution” by incorporating logical remainders (configurations where no cases were observed) [ 36 ].
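Boolean minimisation of this kind is the Quine-McCluskey procedure, which SymPy implements via SOPform. A hedged sketch with invented condition names and truth-table rows (not the study's actual data or solution); supplying logical remainders as "don't-cares" is what moves the result from the complex toward the parsimonious solution:

```python
# Hedged sketch: Quine-McCluskey minimisation via SymPy's SOPform.
# Condition names and truth-table rows are invented for illustration.
from sympy import symbols
from sympy.logic import SOPform

# g = group-based delivery, m = IEC materials, p = partner involvement
g, m, p = symbols('g m p')

# Truth-table rows observed with a successful outcome (minterms), plus a
# logical remainder (a configuration with no observed cases) supplied as
# a don't-care so the minimiser may use it.
successful = [[1, 1, 1], [1, 1, 0]]
remainders = [[1, 0, 1]]

solution = SOPform([g, m, p], successful, remainders)
print(solution)  # g & m
```

Here the two successful rows differ only in p, so minimisation drops p and leaves the simpler expression g AND m as the pathway to success.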

QCA stage 5: Checking the quality of the solution

We presented the intermediate solution as the final solution instead of the most parsimonious solution, as it is most closely aligned with the underlying theory. We checked consistency and coverage scores to assess whether the identified pathways were sufficient to trigger success. We also checked the intermediate solution by negating the outcome, to see whether it predicted the observed solutions.
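For crisp sets, the consistency and coverage checks in this stage reduce to simple ratios over the cases a pathway covers. A sketch with invented case data and condition names:

```python
def consistency(cases, pathway, outcome="success"):
    """Inclusion score: of the cases covered by the pathway, the share
    showing the outcome (crisp-set sufficiency check)."""
    covered = [c for c in cases if all(c[k] == v for k, v in pathway.items())]
    if not covered:
        return None
    return sum(c[outcome] for c in covered) / len(covered)

def coverage(cases, pathway, outcome="success"):
    """Coverage score: of all cases with the outcome, the share the
    pathway covers."""
    with_outcome = [c for c in cases if c[outcome]]
    covered = [c for c in with_outcome
               if all(c[k] == v for k, v in pathway.items())]
    return len(covered) / len(with_outcome) if with_outcome else None

# Hypothetical crisp-set data (condition names are illustrative)
cases = [
    {"iec": 1, "group": 1, "success": 1},
    {"iec": 1, "group": 1, "success": 1},
    {"iec": 1, "group": 0, "success": 0},
    {"iec": 0, "group": 1, "success": 0},
]
pathway = {"iec": 1, "group": 1}
print(consistency(cases, pathway))  # 1.0: perfectly consistent
print(coverage(cases, pathway))     # 1.0: covers all successful cases
```

A consistency of 1 corresponds to the "perfect consistency (inclusion = 1)" rows reported in the tables below; checking the negated outcome amounts to re-running `consistency` with the outcome flipped.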

QCA stage 6: Interpretation of solutions

We iteratively interpreted the findings through discussions among the QCA team. This reflexive approach ensured that the analysis considered perspectives from the literature and our methodological approach, and that the results were coherent with current understanding of the phenomenon.

Overview of included studies

Out of 79 intervention studies assessed by Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ], 17 intervention studies targeted women and are included, comprising 11 interventions targeting only women [ 37 , 38 , 39 , 40 , 41 , 42 , 43 ] and six interventions targeting both women and health providers or systems [ 44 , 45 , 46 , 47 , 48 , 49 ]. From the 17 included studies, 19 sibling studies were identified [ 43 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 ]. Thus, a total of 36 papers from 17 intervention studies are included in this QCA (see Fig.  1 : PRISMA flowchart).

figure 1

PRISMA flowchart. *Sibling studies: studies conducted in the same settings, with the same participants, and within the same timeframe; **Intervention components: information on intervention input, activities, and outputs, including intervention context and other characteristics

The 11 interventions targeting women comprised five successful interventions [ 37 , 68 , 69 , 70 , 71 ] and six unsuccessful interventions [ 37 , 38 , 39 , 40 , 41 , 42 , 43 ] in reducing CS. Sixteen sibling studies were identified from five of the 11 included interventions [ 37 , 41 , 43 , 70 , 71 ]. Included studies were conducted in six countries across North America (2 from Canada [ 38 ] and 1 from the United States of America [ 71 ]), Asia–Pacific (1 from Australia [ 41 ] and 5 from Iran [ 39 , 40 , 68 , 69 , 70 ]), and Europe (2 from Finland [ 37 , 42 ] and 1 from the United Kingdom [ 43 ]). Six studies were conducted in high-income countries, while five studies were conducted in upper-middle-income countries (all from Iran). All 11 studies targeted women, with three studies also explicitly targeting women’s partners [ 68 , 69 , 71 ]. One study delivering psychoeducation allowed women to bring any family members to accompany them during the intervention but did not specifically target partners [ 37 ]. All 11 studies delivered childbirth education, with four delivering general antenatal education [ 38 , 40 , 68 , 69 ], six delivering psychoeducation [ 37 , 39 , 41 , 42 , 70 , 71 ], and one implementing decision aids [ 43 ]. All studies were included in Chen et al. (2018), and some risks of bias were identified [ 18 ] (Additional file 2).

The multi-target interventions consisted of five successful interventions [ 44 , 45 , 46 , 47 , 48 ] and one unsuccessful intervention [ 49 ]. Sibling studies were identified from only one study [ 48 ]. The interventions were delivered in five countries across South America (1 from Brazil [ 46 ]), Asia–Pacific (4 from China [ 44 , 45 , 47 , 49 ]), and Europe (1 from Italy [ 48 ], 1 from Ireland [ 48 ], and 1 from Germany [ 48 ]). Three studies were conducted in high-income countries and five studies in upper-middle-income countries. The multi-target interventions targeted women, health providers, and health organisations; for this analysis, however, we consider only the components of the intervention that targeted women, which was typically childbirth education. One study came from Chen et al. (2018) [ 18 ] and was graded as having some concerns [ 47 ], two studies from Opiyo et al. (2020) [ 23 ] were graded as having no serious concerns [ 45 , 46 ], and three studies are newly published studies assessed as having low [ 44 ] and some concerns about risk of bias [ 48 , 49 ]. Tables 2 and 3 show the characteristics of included studies.

The childbirth education interventions included information about mode of birth, the birth process, mental health and coping strategies, pain relief methods, and partners’ roles in birth. Most interventions were delivered in group settings; in only three studies were they delivered on a one-to-one basis [ 38 , 41 , 42 ]. Only one study explicitly stated that the intervention was individualised to a woman’s unique needs and experiences [ 38 ].

Overall, limited theory was used to design the interventions in the included studies: fewer than half of the interventions (7/17) explicitly used theory in their design. Among the seven interventions that used theory in intervention development, the theories included the health promotion-disease prevention framework [ 38 ], midwifery counselling framework [ 41 ], cognitive behavioural therapy [ 42 ], Ost’s applied relaxation [ 70 ], a conceptual model of parenting [ 71 ], attachment and social cognitive theories [ 37 ], and a healthcare improvement scale-up framework [ 46 ]. The remaining 10 studies relied only on previously published studies to design their interventions. We identified very limited process evaluation or implementation outcome evidence related to the included interventions, which is a limitation of the field of CS and clinical interventions more broadly.

Qualitative comparative analysis

Model 1 – How birth information was provided to women

Model 1 is constructed based on the finding from Kingdon et al. (2018) [ 22 ] that women and communities enjoy learning new birth information, as it opens up new ways of thinking about vaginal birth and CS. Learning new information allows them to better understand the benefits and risks of CS and vaginal birth, as well as to increase their knowledge about CS [ 22 ].

We used four conditions in constructing the model 1 truth table: 1) the provision of information, education, and communication (IEC) materials on what to expect during labour and birth, 2) delivery of antenatal education, 3) delivery of psychoeducation, and 4) group-based intervention delivery. We also explored this model with other conditions, such as the type of information provided (e.g. information about mode of birth, including the birth process, mental health and coping strategies, and pain relief), delivery technique (e.g. didactic, practical), and frequency and duration of intervention delivery; however, these additional conditions did not result in informative configurations.

Of 16 possible configurations, we identified seven (Table 4 ). The first two rows show perfect consistency (inclusion = 1) across five studies [ 37 , 68 , 69 , 70 , 71 ], in which all conditions are present except either antenatal education or psychoeducation. The remaining configurations are unsuccessful interventions. Interestingly, when either IEC materials or group-based intervention delivery is present (but not both), interventions are likely to be unsuccessful (rows 3–7).

Boolean minimisation identified two intermediate pathways to successful interventions (Fig.  2 ). The two pathways are similar, except for one condition: the type of education. Antenatal education and psychoeducation differ in content tailored to the women they target. From the two pathways, we can see that the distribution of IEC materials on birth information, combined with group-based delivery of either antenatal education to the general population of women (i.e. not groups of women with specific risks or conditions) or psychoeducation to women with fear of birth, triggers successful interventions. The successful interventions are thus consistently characterised by the presence of both IEC materials and group-based intervention delivery.

figure 2

Intermediate pathways from model 1 that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an ‘AND’ relationship; the Inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficiency relation between the configuration and the outcome; the Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient for the successful outcome and not simultaneously for the negation of the outcome; the Coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Model 2 – Emotional support was provided to women

Model 2 was constructed based on the theory that women desire emotional support alongside the communication of information about childbirth [ 22 ]. This includes emotional support from husbands or partners, health professionals, or doulas [ 22 ]. Furthermore, Kingdon et al. (2018) describe the importance of two-way conversation and dialogue between women and providers during pregnancy care, particularly to ensure the opportunity for discussion [ 22 ]. Interventions may generate more questions than they answer, creating a need and desire among women for more dialogue with health professionals [ 22 ]. Women considered intervention content most useful when it complements clinical care, is consistent with advice from health professionals, and provides a basis for more informed, meaningful dialogue between women and care providers [ 22 ].

Based on this underlying theory, we constructed the model 2 truth table by considering three conditions representative of providing emotional support to women: partner or family member involvement, group-based intervention delivery (which provides social or peer support to women), and opportunity for women to interact with health providers. Of the 8 possible configurations, we identified six (Table 5 ). The first three rows represent successful interventions with perfect consistency (inclusion = 1). The first row shows successful interventions with all conditions present. The second and third rows show successful interventions with all conditions present except, respectively, partner or family member involvement or interaction with health providers. The remaining rows represent unsuccessful interventions, in which at least two conditions are absent.

Boolean minimisation identified two intermediate pathways to successful interventions (Fig.  3 ). In the first pathway, the involvement of partners or family members together with group-based intervention delivery enables successful interventions. In the second pathway, when partners or family members are not involved, successful interventions can occur only when interaction with health providers is included alongside group-based delivery. From these two pathways, we can see that group-based intervention delivery, involvement of partners and family members, and the opportunity for women to interact with providers appear important in driving intervention success.

figure 3

Intermediate pathways from model 2 that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an ‘AND’ relationship; the Inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficiency relation between the configuration and the outcome; the Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient for the successful outcome and not simultaneously for the negation of the outcome; the Coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Consolidated model – Essential conditions to prompt successful interventions focusing on women

Using the identified important conditions observed in models 1 and 2, we constructed a consolidated model to examine the final essential conditions which could prompt successful educational interventions targeting women. We merged and tested four conditions: the provision of IEC materials on what to expect during labour and birth, group-based intervention delivery, partner or family member involvement, and opportunity for interaction between women and health providers.

Of the 16 possible configurations, we identified six configurations (Table 6 ). The first three rows show configurations resulting in successful interventions with perfect consistency (inclusion = 1). The first row shows successful interventions with all conditions present; the second and third rows show successful interventions with all conditions present except interaction with health providers or partner or family member involvement. The remaining three rows are configurations of unsuccessful interventions, missing at least two conditions, including the consistent absence of partner or family member involvement.

Boolean minimisation identified two intermediate pathways to successful interventions (Fig.  4 ). The first pathway shows that the opportunity for women to interact with health providers, provision of IEC materials, and group-based intervention delivery together prompt successful interventions. The second pathway shows that when there is no opportunity for women to interact with health providers, partner or family member involvement is important alongside group-based delivery and provision of IEC materials. These two pathways suggest that delivering educational interventions accompanied by IEC materials and by emotional support for women is important to trigger success, and they emphasise that this emotional support can come from a partner, a family member, or a health provider. For the consolidated model, we did not simplify the solution further, as the intermediate solution is more theoretically sound than the most parsimonious solution.

figure 4

Intermediate pathways from the consolidated model that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an ‘AND’ relationship; the Inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficiency relation between the configuration and the outcome; the Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient for the successful outcome and not simultaneously for the negation of the outcome; the Coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Sub-analysis – Interventions targeting both women and health providers or systems

In this sub-analysis, we took the important conditions identified in the consolidated model, added a condition indicating multi-target intervention, and applied the model to 17 interventions: 11 targeting women only and six targeting both women and health providers or systems (multi-target interventions).

Of 32 possible configurations, we identified eight (Table 7 ). The first four rows show configurations of successful interventions with perfect consistency (inclusion = 1). The first row, in which all conditions are present, is where all the multi-target interventions cluster, except the unsuccessful intervention by Zhang (2020) [ 49 ]. In the second to fourth rows, all conditions are present except multi-target intervention (all three rows), interaction with health providers (third row), and partner and family member involvement (fourth row). The remaining rows are configurations of unsuccessful interventions, each missing at least three conditions, except row 8, a single-case row: this case is the only unsuccessful multi-target intervention, and the only one in which partners or family members were not involved.

The Boolean minimisation identified two intermediate pathways (Fig.  5 ). The first pathway shows that partner or family involvement, provision of IEC materials, and group-based intervention delivery prompt successful interventions; it comprises all five successful multi-target interventions [ 44 , 45 , 46 , 47 , 48 ] and four of the 11 interventions targeting only women [ 37 , 68 , 69 , 71 ]. The second pathway shows that for interventions that are not multi-target, interaction with health providers alongside provision of IEC materials and group-based delivery prompts successful interventions (3/11 interventions targeting women only [ 37 , 69 , 70 ]). The first pathway shows that successful configurations occur both with and without multi-target interventions. Therefore, as with the interventions targeting women, the components of multi-target interventions that target women are more likely to succeed when partners or family members are involved, the intervention is delivered in groups, IEC materials are provided, and women have the opportunity to interact with health providers.

figure 5

Intermediate pathways from the multi-target interventions sub-analysis that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an ‘AND’ relationship; the Inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficiency relation between the configuration and the outcome; the Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient for the successful outcome and not simultaneously for the negation of the outcome; the Coverage score (CovS) refers to the percentage of cases in which the configuration is valid

To summarise, four essential intervention components trigger successful educational interventions focusing on pregnant women to reduce CS: 1) group-based intervention delivery, 2) provision of IEC materials on what to expect during labour and birth, 3) partner or family member involvement in the intervention, and 4) opportunity for women to interact with health providers. These conditions do not work in silos or independently; instead, they work jointly as parts of configurations to enable successful interventions.

Our extensive QCA identified configurations of essential intervention components which are sufficient to trigger successful interventions to optimise CS use. Educational interventions focusing on women succeeded by: 1) leveraging social or peer support through group-based intervention delivery, 2) improving women’s knowledge and awareness of what to expect during labour and birth, 3) ensuring women have emotional support through partner or family participation in the intervention, and 4) providing opportunities for women to interact with health providers. We found that the absence of two or more of these characteristics results in unsuccessful interventions. Unlike our logic model, which predicted that engagement strategies (i.e. intensity, frequency, technique, recruitment, incentives) would be essential to intervention success, we found that “support” seems to be central to maximising the benefits of interventions targeting women.

Group-based intervention delivery is present across all four truth tables and all eight pathways leading to successful intervention implementation, suggesting that it is an essential component of interventions targeting women. Despite this, we cannot conclude that group-based intervention delivery is a necessary condition, as there may be other pathways not captured in this QCA. Its importance may stem from the group setting providing women with a sense of confidence through peer support and engagement: women may feel more confident when learning with others, and peer support may motivate them. Furthermore, all group-based interventions in our included studies were conducted at health facilities, which may give women more confidence that the information is aligned with clinical recommendations. Evidence on the benefits of group-based interventions for pregnant women has been reported previously [ 72 , 73 ]. Women reported that group-based interventions reduce their feelings of isolation, provide access to group support, and allow opportunities to share their experiences [ 72 , 74 , 75 , 76 ]. This is aligned with social support theory, in which support through a group or social environment may provide women with reassurance and compassion, reduce feelings of uncertainty, increase their sense of control, offer access to new contacts to solve problems, and provide instrumental support, eventually influencing positive health behaviours [ 72 , 77 ]. Women may resolve their uncertainties around mode of birth by sharing their concerns with others while learning how others cope. These findings are consistent with the benefits associated with group-based antenatal care, which is recommended by WHO [ 78 , 79 ].

Kingdon et al. (2018) reported that women and communities liked learning new birth information, as it opens new ways of thinking about vaginal birth and CS and educates them about the benefits of different modes of birth, including the risks of CS [ 22 ]. Our QCA aligns with this finding: the provision of birth information through education leads to successful interventions, but with caveats. That is, the provision of birth information should be accompanied by IEC materials and delivered through group-based formats. There is not enough information to distinguish which types of IEC materials lead to successful interventions; however, the format of the materials (such as paper-based or mobile application) may affect success. More work is needed to understand how women and families respond to the format of IEC materials; for example, will paper-based IEC materials be superseded by digital applications as a means of reaching women with information? The QUALI-DEC (Quality decision-making (QUALI-DEC) by women and healthcare providers for appropriate use of caesarean section) study, which is currently implementing a decision-analysis tool in both paper-based and mobile-application formats to help women make an informed decision on their preferred mode of birth, may shed some light on this [ 80 ].

Previous research has shown that women who participated in interventions aiming to reduce CS desired emotional support (from partners, doulas, or health providers) alongside communication about childbirth [ 22 ]. Our QCA aligns with this finding: emotional support from partners or family members is highly influential in producing successful interventions. Partner involvement in maternity care has been extensively studied and has been shown to improve maternal health care utilisation and outcomes [ 81 ]. Both women and their partners perceived partner involvement as crucial, as it enables men to learn directly from providers, thus promoting shared decision-making between women and partners and enabling partners to reinforce adherence to beneficial suggestions [ 82 , 83 , 84 , 85 , 86 ]. Partners provide psychosocial support to women, for example by being present during pregnancy and childbirth, as well as instrumental support, including supporting women financially [ 82 , 83 , 84 ]. Despite these benefits, partner participation in maternity care remains low [ 82 ], as reflected in this study, where only four of the 11 included interventions involved partners or family members. Reasons for this low participation, which include unequal gender norms and limited health system capability [ 82 , 84 , 85 , 86 ], should be explored and addressed to ensure the benefits of the interventions.

Furthermore, our QCA demonstrates the importance of interaction with health providers in triggering successful interventions. The interaction of women with providers in CS decision-making, however, sits at a “nexus of power, trust, and risk”, where it may be beneficial but can also reinforce the structural oppression of women [ 13 ]. A recent study on patient-provider interaction in CS decision-making concluded that interaction between risk-averse providers and women who are cautious about their pregnancies results, within the health system, in the discouragement of vaginal birth [ 87 ]. However, this can be averted by meaningful communication between women and providers in which CS risks and benefits are discussed in an environment where vaginal birth is encouraged [ 87 ]. Furthermore, women’s reasons for desiring interaction with providers can come from opposite directions. Some women see providers as the most trusted and knowledgeable source, whose judgement they can rely on to ensure the information learned is reliable and evidence-based [ 22 ]. On the other hand, some women may be sceptical of providers, understanding that providers’ preferences may negatively influence their preferred mode of birth [ 22 ]. Therefore, adequate two-way interaction is important for women to build good rapport with providers.

It is also important to note that we have limited evidence (3/17 intervention studies) involving women with previous CS. Vaginal birth after previous CS (VBAC) can be a safe and positive experience for some women, but there are also potential risks depending on their obstetric history [ 88 , 89 , 90 ]. Davis (2020) found that women were motivated to pursue VBAC by negative experiences of CS, such as difficult recovery, and that health providers played pivotal roles in motivating women towards VBAC [ 91 ]. In addition, VBAC requires giving birth in a suitably staffed and equipped maternity unit, with staff trained in VBAC, equipment for labour monitoring, and resources for emergency CS if needed [ 89 , 90 ]. Comparatively little research has been conducted on VBAC and trial of labour after CS [ 88 ]. Therefore, more work is needed to explore whether different pathways lead to successful intervention implementation for women with previous CS. Interventions targeting various stakeholders may be particularly crucial in this group; for example, both education for women and partners or families and training to upskill health providers might be needed to support VBAC.

Strengths and limitations

We found that many included studies reported the interventions poorly, including general intervention components (e.g. the presence of policies that may support interventions) and process evaluation components, reflecting the historical approach to reporting trial data. This poor reporting means we could not engage further with the interventions and may have missed important conditions that were not reported. However, we attempted to compensate for limited process evaluation components by identifying all relevant sibling studies that could contribute to a better understanding of context. Furthermore, no studies were conducted in low-income countries, despite rapidly increasing CS rates in these settings. Lastly, we were not able to conduct more nuanced analyses of CS, such as exploring how interventions affected emergency versus elective CS, VBAC, or instrumental birth, due to an insufficient number of studies and heterogeneity in outcome measurements. It is therefore important to note that we are not necessarily measuring the optimal outcome of interest: reducing unnecessary CS. However, it is unlikely that these non-clinical interventions will interfere with a decision for CS based on clinical indications.

Despite these limitations, this is the first study aiming to understand how interventions targeting women can succeed in optimising CS use. We used the QCA approach and new analytical frameworks to re-analyse existing systematic review evidence and generate new knowledge. We ensured robustness through the use of a logic model and by working backwards to understand which aspects of the interventions differed across outcomes. The use of QCA and qualitative evidence synthesis ensured that the results are theory-driven and incorporate participants’ perspectives, and that configurations were explored iteratively, reducing the risk of data fishing. Lastly, this QCA extends the effectiveness review by Chen et al. (2018) [ 18 ] by explaining potential intervention components that may underlie the observed heterogeneity.

Implications for practice and research

To aid researchers and health providers in reducing CS in their contexts and in designing educational interventions targeting women during pregnancy, we have developed a checklist of key components or questions to consider when designing interventions, which may help lead to successful implementation:

Is the intervention delivered in a group setting?

Are IEC materials on what to expect during labour and birth disseminated to women?

Are women’s partners or families involved in the intervention?

Do women have opportunities to interact with health providers?

We have used this checklist to explore the extent to which the included interventions in our QCA include these components using a matrix model (Fig.  6 ).

figure 6

Matrix model assessing the extent to which the included intervention studies have essential intervention components identified in the QCA
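The checklist and matrix model above can be operationalised as a simple component count. Following the pattern observed in the QCA, in which interventions missing two or more of the four components were unsuccessful, candidate designs can be flagged accordingly. The intervention names and codings below are hypothetical.

```python
# The four essential components identified in the QCA.
COMPONENTS = ["group_delivery", "iec_materials",
              "partner_involved", "provider_interaction"]

# Hypothetical intervention designs coded against the checklist.
interventions = {
    "Design 1": {"group_delivery": True, "iec_materials": True,
                 "partner_involved": True, "provider_interaction": False},
    "Design 2": {"group_delivery": True, "iec_materials": False,
                 "partner_involved": False, "provider_interaction": False},
}

def assess(design):
    """Count components present and flag designs missing two or more."""
    present = sum(design[c] for c in COMPONENTS)
    flag = "at risk" if len(COMPONENTS) - present >= 2 else "promising"
    return present, flag

for name, design in interventions.items():
    present, flag = assess(design)
    print(f"{name}: {present}/4 components present -> {flag}")
```

This is a screening heuristic derived from the observed configurations, not a guarantee: the QCA pathways show the components working jointly, so a "promising" score does not by itself predict success.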

Additionally, future research on interventions to optimise the use of CS should report the intervention components implemented, including process outcomes such as fidelity and attrition, contextual factors (e.g. policies, details of how the intervention is delivered), and stakeholder factors (e.g. women’s perceptions and satisfaction). These factors are important not only for evaluating whether an intervention is successful, but also for exploring why similar interventions work in one context but not another. There is also a need for more intervention studies implementing VBAC to reduce CS, to understand how involving women with previous CS may result in successful interventions. Furthermore, more studies are needed on the impact of interventions targeting women in LMICs.

This QCA illustrates crucial intervention components and potential pathways that can trigger successful educational interventions to optimise CS use, focusing on pregnant women. The following intervention components were found to be sufficient to trigger successful outcomes: 1) group-based delivery, 2) provision of IEC materials, 3) partner or family member involvement, and 4) opportunity for women to interact with health providers. These components do not work in silos or independently; instead, they work jointly as parts of configurations to enable successful interventions. Researchers, trialists, hospitals, and other institutions and stakeholders planning interventions focusing on pregnant women can consider including these components to ensure benefits. More studies on the impact of interventions targeting women to optimise CS are needed from LMICs. Researchers should clearly describe and report intervention components in trials and consider how process evaluations can help explain why trials succeed or fail. More robust trial reporting and process evaluations can help to better understand mechanisms of action and why interventions may work in one context yet not another.

Availability of data and materials

Additional information files have been provided and more data may be provided upon request to [email protected].

Abbreviations

  • Coverage score
  • CS: Caesarean section
  • csQCA: Crisp set qualitative comparative analysis
  • fsQCA: Fuzzy set qualitative comparative analysis
  • IEC: Information, education, and communication
  • Inclusion score
  • LMICs: Low- and middle-income countries
  • PRI: Proportional reduction in inconsistency
  • QUALI-DEC: Quality decision-making by women and healthcare providers for appropriate use of caesarean section
  • VBAC: Vaginal birth after previous caesarean section
  • WHO: World Health Organization


Acknowledgements

We extend our thanks to Jim Berryman (Brownless Medical Library, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne) for his help in refining the search strategy for sibling studies.

This research was made possible with the support of UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), a co-sponsored programme executed by the World Health Organization (WHO). RIZ is supported by Melbourne Research Scholarship and Human Rights Scholarship from The University of Melbourne. CSEH is supported by a National Health and Medical Research Council (NHMRC) Principal Research Fellowship. MAB’s time is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200100264) and a Dame Kate Campbell Fellowship (University of Melbourne Faculty of Medicine, Dentistry, and Health Sciences). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The contents of this publication are the responsibility of the authors and do not reflect the views of the UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization.

Author information

Authors and affiliations

Gender and Women’s Health Unit, Nossal Institute for Global Health, School of Population and Global Health, University of Melbourne, Melbourne, VIC, Australia

Rana Islamiah Zahroh, Martha Vazquez Corona & Meghan A. Bohren

EPPI Centre, UCL Social Research Institute, University College London, London, UK

Katy Sutcliffe & Dylan Kneale

Department of Sexual and Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland

Ana Pilar Betrán & Newton Opiyo

Maternal, Child, and Adolescent Health Programme, Burnet Institute, Melbourne, VIC, Australia

Caroline S. E. Homer


Contributions

- Conceptualisation and study design: MAB, APB, RIZ

- Funding acquisition: MAB, APB

- Data curation: RIZ, MAB, MVC

- Investigation, methodology and formal analysis: all authors

- Visualisation: RIZ, MAB

- Writing – original draft preparation: RIZ, MAB

- Writing – review and editing: all authors

Corresponding author

Correspondence to Rana Islamiah Zahroh .

Ethics declarations

Ethics approval and consent to participate

This study utilised published and openly available data, and thus ethics approval is not required.

Consent for publication

No direct individual contact was involved in this study; therefore, consent for publication is not needed.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Logic model in optimizing CS use.

Additional file 2.

Risk of bias assessments.

Additional file 3.

Coding framework and calibration rules.

Additional file 4.

Coding framework as applied to each intervention (data table).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Zahroh, R.I., Sutcliffe, K., Kneale, D. et al. Educational interventions targeting pregnant women to optimise the use of caesarean section: What are the essential elements? A qualitative comparative analysis. BMC Public Health 23 , 1851 (2023). https://doi.org/10.1186/s12889-023-16718-0


Received : 07 March 2022

Accepted : 07 September 2023

Published : 23 September 2023

DOI : https://doi.org/10.1186/s12889-023-16718-0


  • Maternal health
  • Complex intervention
  • Intervention implementation

BMC Public Health

ISSN: 1471-2458


From Qualitative to Quantitative | Online Guide to Combining Q&A with Other Research Methods Article


Anh Vu • 09 Apr 2024 • 5 min read

Are you frustrated with the limitations of your research methods? Every method has drawbacks, and relying on one alone can leave you with incomplete insights. An effective way around this is to combine qualitative and quantitative methods with Q&A sessions. This article demonstrates how combining these methods can help you access more data and richer insights.

Table of Contents

  • Understanding qualitative and quantitative research
  • Steps to combine Q&A with qualitative research methods
  • Steps to combine Q&A with quantitative research methods
  • Common challenges when holding Q&A sessions
  • Enriching your research with Q&A

Qualitative vs. quantitative research methods differ in the type of questions they help you answer. Qualitative research, like interviews and observations, offers rich insights into people’s thoughts and behaviors. It’s all about understanding the “why” behind actions. 

Conversely, quantitative research focuses on numbers and measurements, giving us clear statistical trends and patterns to answer questions like “what” or “when.” Surveys and experiments fall into this category.


Each method has limitations that a Q&A session can help address. Because of small sample sizes, results and conclusions from qualitative methods may not generalise beyond the people studied; Q&A can help by gathering opinions from a wider group. Quantitative methods, on the other hand, give you numbers but can miss the details behind them.

With Q&A, you can dig deeper into those details and understand them better. Blending qualitative and quantitative methods with Q&A helps you see the whole picture better, providing unique insights you wouldn’t have otherwise.

Steps to Combine Q&A with Qualitative Research Methods

Picture yourself investigating customer satisfaction in a restaurant for your master's degree. Alongside interviews and observations, you organize a Q&A session. Merging Q&A responses with your qualitative findings can yield detailed insights for informed decision-making, such as optimizing staffing during busy hours. Here's an example of how you do it:

  • Plan your Q&A session: Choose the timing, location, and participants for your session. For instance, consider holding it during quiet times in the restaurant, inviting regular and occasional customers to share feedback. You can also have a virtual session. However, remember that attendees may only be engaged for part of the session, which can impact the quality of their responses.
  • Conduct the Q&A session: Encourage a welcoming atmosphere to boost participation. Start with a warm introduction, express gratitude for attendance, and explain how their input will improve the restaurant experience.
  • Document responses: Take detailed notes during the session to capture critical points and noteworthy quotes. Document customer comments about specific menu items or praise for staff friendliness.
  • Analyze Q&A data: Review your notes and recordings, searching for recurring themes or observations. Compare these insights with your previous research to spot patterns, like common complaints about long wait times during peak hours.
  • Integrate findings: Combine Q&A insights with other research data to gain a better understanding. Identify connections between data sources, such as Q&A feedback confirming survey responses about service speed dissatisfaction.
  • Draw conclusions and make recommendations: Summarize your findings and propose actionable steps. For instance, suggest adjusting staffing levels or implementing a reservation system to address the issues.
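To make the "analyze Q&A data" step concrete, here is a minimal Python sketch that tallies how often each theme appears across coded Q&A notes. The comments and theme labels are invented for illustration; in practice they would come from your own session notes.

```python
from collections import Counter

# Hypothetical Q&A notes, each tagged with the themes a reviewer assigned.
# Both the comments and the theme labels are illustrative, not real data.
qa_notes = [
    {"comment": "Waited 40 minutes on Friday night", "themes": ["wait time"]},
    {"comment": "Server remembered our usual order", "themes": ["staff friendliness"]},
    {"comment": "Line out the door at 7pm", "themes": ["wait time", "peak hours"]},
    {"comment": "New menu item arrived cold", "themes": ["food quality"]},
    {"comment": "Host was very welcoming", "themes": ["staff friendliness"]},
]

# Count how often each theme appears across all responses.
theme_counts = Counter(theme for note in qa_notes for theme in note["themes"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Sorting by frequency surfaces the most common concerns (here, wait times and staff friendliness) so you can compare them against your interview and observation findings.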

Steps to Combine Q&A with Quantitative Research Methods

Now, let’s shift to another scenario. Imagine you’re exploring factors influencing online shopping behavior to refine marketing strategies as part of your online executive MBA requirements. Alongside a questionnaire with effective survey questions, you add Q&A sessions to your method for deeper insights. Here’s how to combine Q&A with quantitative methods:

  • Plan your research design: Determine how Q&A sessions align with your quantitative objectives. Schedule sessions to complement survey data collection, perhaps before or after distributing online surveys.
  • Structure Q&A sessions: Craft questions to gather qualitative insights alongside quantitative data. Use a mix of open-ended questions to explore motivations and closed-ended queries for statistical analysis.
  • Administer surveys: Distribute surveys to a broader audience to collect numerical data. One study on response rates found that online surveys can generate a 44.1% response rate; refining your target population can push this higher. Ensure the survey questions align with your research objectives and relate to the qualitative insights from the Q&A sessions.
  • Analyze combined data: Combine Q&A insights with survey data to see shopping trends. Find connections between qualitative feedback on user preferences and quantitative data on purchasing habits. For example, dark roast coffee lovers from your Q&A session might indicate in their surveys that they buy more coffee bags per month than your medium roast lovers.
  • Interpret and report findings: Present results clearly, highlighting critical insights from qualitative and quantitative perspectives. Use visuals like charts or graphs to show trends effectively.
  • Draw implications and recommendations: Based on the combined qualitative and quantitative analysis, provide practical suggestions that can be implemented. For example, recommend customized marketing strategies that attract your medium roast coffee lovers and drive profit.
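The coffee example above can be sketched in a few lines of Python: group survey respondents by the roast preference they voiced in the Q&A session, then compare average monthly purchases. All names and numbers here are hypothetical.

```python
# Hypothetical merged dataset: each respondent's roast preference from the
# Q&A session paired with their self-reported monthly purchases from the survey.
respondents = [
    {"id": 1, "preference": "dark roast",   "bags_per_month": 4},
    {"id": 2, "preference": "dark roast",   "bags_per_month": 5},
    {"id": 3, "preference": "medium roast", "bags_per_month": 2},
    {"id": 4, "preference": "medium roast", "bags_per_month": 3},
    {"id": 5, "preference": "dark roast",   "bags_per_month": 3},
]

# Group purchase counts by stated preference.
groups = {}
for r in respondents:
    groups.setdefault(r["preference"], []).append(r["bags_per_month"])

# Average monthly purchases per preference group.
for preference, bags in groups.items():
    mean = sum(bags) / len(bags)
    print(f"{preference}: {mean:.1f} bags/month on average")
```

With real data you would likely reach for pandas group-by aggregation and a significance test, but the grouping logic is the same.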

Common Challenges When Holding Q&A Sessions

Hosting Q&A sessions can be tricky, but technology offers solutions to make them smoother. For example, the global presentation software market is expected to grow by 13.5% from 2024 to 2031, emphasizing its growing importance. Here are some common hurdles you might face, along with how technology can help:

  • Limited Participation: Encouraging everyone to join in can take time and effort. Here, virtual Q&A sessions can help, allowing participants to ask questions via their phones and the internet, making involvement easy. You can also offer incentives or rewards, or use an AI presentation maker to create engaging slides.
  • Managing Time Effectively: Balancing time while covering all topics is a challenge. You can address this issue with tools that allow you to approve or deny questions before they appear. You can also set a time limit for discussions.
  • Handling Difficult Questions: Tough questions need careful handling. Allowing anonymity is an effective strategy for this challenge. It helps people feel safer asking difficult questions, promoting honest discussions without fear of judgment.
  • Ensuring Quality Responses: Getting informative responses is vital to a productive Q&A session. Customizing the Q&A slide with bright backgrounds and clear fonts keeps participants engaged and supports effective communication.
  • Navigating Technical Issues: Technical issues can interrupt sessions. Some tools offer features that reduce the risk: letting participants upvote questions, for example, helps you prioritize the important ones if time runs short. You can also prepare backup devices for audio and video recordings so you don’t have to worry about losing your data.

Enriching Your Research with Q&A

Throughout this article, we’ve seen how combining Q&A with other research methods can unlock a wealth of insights that may not be possible through a single method. Whether you’re using Q&A to supplement qualitative research or combining it with quantitative research, the approach can help you gain a more comprehensive understanding of your topic.

Remember to communicate openly, listen attentively, and stay flexible. Following the steps outlined in this article, you can integrate Q&A sessions into your research design and emerge with better, more detailed insights. 



Master of Science in Threat and Response Management

Introduction to Statistics and Research Methods Bootcamp

The Statistics and Research Methods Bootcamp will provide a foundation for the use of statistics in data analysis and offer students an introduction to quantitative and qualitative methods in support of program coursework and the completion of a Capstone project.

Topics covered include: formulating a research question, identifying data sources, engaging in a literature review, acquiring IRB approval, understanding statistics in context, developing research projects, and analyzing and interpreting quantitative and qualitative data.

  • Python for Data Science
  • Statistics for Data Science
  • Decision-Making and Risk Management



Unit of Study STAT4004 Data Analysis (2025)


Unit snapshot

Faculty & college

Faculty of Health

Unit description

Develops students’ data and statistical analysis knowledge and skills for research. Students will learn about the analysis and representation of quantitative research, inferential analyses using standard statistical software, and how research can integrate quantitative and qualitative data using mixed methods approaches.

Unit content

  • Qualitative research – assumptions and analyses
  • Qualitative research – interpreting and presenting findings
  • Inferential statistics – prediction and relationships
  • Inferential statistics – comparing several means
  • Inferential statistics – repeated-measures
  • Integrating quantitative and qualitative data
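As a small, hedged illustration of the "comparing several means" topic listed above, the following Python sketch computes a one-way ANOVA F statistic by hand on made-up data for three groups; standard statistical software would report the same value along with a p-value.

```python
# Illustrative only: a one-way ANOVA F statistic computed by hand
# on made-up measurements from three groups.
groups = {
    "A": [4.0, 5.0, 6.0],
    "B": [7.0, 8.0, 9.0],
    "C": [4.0, 4.0, 5.0],
}

all_values = [v for g in groups.values() for v in g]
grand_mean = sum(all_values) / len(all_values)
k = len(groups)      # number of groups
n = len(all_values)  # total observations

# Between-group sum of squares: how far each group mean sits from the grand mean.
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
)
# Within-group sum of squares: spread of values around their own group mean.
ss_within = sum(
    (v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g
)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.2f}")  # prints "F = 14.71"
```

A large F value relative to the F distribution with (k − 1, n − k) degrees of freedom suggests that at least one group mean differs from the others.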

Availabilities

2025 unit offering information will be available in November 2024

Learning outcomes

Unit Learning Outcomes express learning achievement in terms of what a student should know, understand and be able to do on completion of a unit. These outcomes are aligned with the graduate attributes. The unit learning outcomes and graduate attributes are also the basis of evaluating prior learning.

On completion of this unit, students should be able to:

  • select appropriate data and statistical analyses to answer a research question
  • conduct quantitative, qualitative, or mixed methods data analysis
  • interpret quantitative, qualitative, or mixed methods data analysis

Fee information

Commonwealth Supported courses: For information regarding Student Contribution Amounts, please visit the Student Contribution Amounts page.

Fee paying courses: For postgraduate or undergraduate full-fee paying courses, please check Domestic Postgraduate Fees or Domestic Undergraduate Fees.

International: Please check the international course and fee list to determine the relevant fees.

Courses that offer this unit

  • Bachelor of Health and Human Sciences (Honours) (2025)
  • Bachelor of Health and Human Sciences (Honours) (2024)

