The “qualitative” in qualitative comparative analysis (QCA): research moves, case-intimacy and face-to-face interviews

Open access | Published: 26 March 2022 | Volume 57, pages 489–507 (2023)


Sofia Pagliarin (ORCID: 0000-0003-4846-6072), Salvatore La Mendola & Barbara Vis

Abstract

Qualitative Comparative Analysis (QCA) includes two main components: QCA “as a research approach” and QCA “as a method”. In this study, we focus on the former and, by means of the “interpretive spiral”, critically examine the research process of QCA. We show how QCA as a research approach is composed of (1) an “analytical move”, where cases, conditions and outcome(s) are conceptualised in terms of sets, and (2) a “membership move”, where set membership values are qualitatively assigned by the researcher (i.e. calibration). Moreover, we show that QCA scholars have not sufficiently acknowledged the data generation process as a constituent research phase (or “move”) in the performance of QCA. This is particularly relevant when qualitative data–e.g. interviews, focus groups, documents–are used for subsequent analysis and calibration (i.e. the analytical and membership moves). We call the qualitative data collection process the “relational move” because, to gather data, researchers establish the social relation of the “interview” with the study participants. Using examples from our own research, we show how a dialogical interviewing style can help researchers gain the in-depth knowledge necessary to meaningfully translate qualitative data into set membership values for QCA, hence improving our ability to account for the “qualitative” in QCA.


1 Introduction

Qualitative Comparative Analysis (QCA) is a configurational comparative research approach and method for the social sciences based on set-theory. It was introduced in crisp-set form by Ragin ( 1987 ) and later expanded to fuzzy sets (Ragin 2000 ; 2008a ; Rihoux and Ragin 2009 ; Schneider and Wagemann 2012 ). QCA is a diversity-oriented approach extending “the single-case study to multiple cases with an eye toward configurations of similarities and differences” (Ragin 2000 :22). QCA aims at finding a balance between complexity and generalizability by identifying data patterns that can exhibit or approach set-theoretic connections (Ragin 2014 :88).

When adopting QCA as a research approach, researchers first conceptualise cases as elements belonging, in kind and/or degree, to a selection of conditions and outcome(s) that are conceived as sets. They then assign cases’ set membership values in the conditions and outcome(s) (i.e. calibration). Populations are constructed for outcome-oriented investigations, and causation is conceived to be conjunctural and heterogeneous (Ragin 2000: 39ff). As a method, QCA is the systematic and formalised analysis of the calibrated dataset for cross-case comparison through Boolean algebra operations. Combinations of conditions (i.e. configurations) represent both the characterising features of cases and the multiple paths towards the outcome (Byrne 2005).

Most critiques of QCA focus on the methodological aspects of “QCA as a method” (e.g. Lucas and Szatrowski 2014), although epistemological issues regarding deterministic causality and subjectivity in assigning set membership values are also discussed (e.g. Collier 2014). In response to these critiques, Ragin (2014; see also Ragin 2000, ch. 11) emphasises the “mindset shift” needed to perform QCA: QCA “as a method” makes sense only if researchers accept “QCA as a research approach”, including its qualitative component.

The qualitative character of QCA emerges when recognising the relevance of case-based knowledge or “case intimacy”. The latter is key to performing calibration (see e.g. Ragin 2000:53–61; Byrne 2005; Ragin 2008a; Harvey 2009; Greckhamer et al. 2013; Gerrits and Verweij 2018:36ff): when associating “meanings” with “numbers”, researchers engage in a “dialogue between ideas and evidence” by using set-membership values as “ interpretive tools ” (Ragin 2000:162, original emphasis). The foundations of QCA as a research approach are explicitly rooted in qualitative, case-oriented research approaches in the social sciences, in particular in the understanding of causation as multiple and configurational, in terms of combinations of conditions, and in the conceptualisation of populations as types of cases, to be refined in the course of an investigation (Ragin 2000:30–42).

Arguably, QCA researchers should make ample use of qualitative methods for the social sciences, such as narrative or semi-structured interviews, focus groups, discourse and document analysis, because this will help gain case intimacy and enable the dialogue between theories and data. Furthermore, as many QCA-studies have a small to medium sample size (10–50 cases), qualitative data collection methods appear to be particularly appropriate to reach both goals. However, so far only around 30 published QCA studies use qualitative data (de Block and Vis 2018 ), out of which only a handful employ narrative interviews (see Sect.  2 ).

We argue that this puzzling observation about QCA empirical research is due to two main reasons. First, quantitative data, in particular secondary data available from official databases, are more malleable for calibration. Although QCA researchers should carefully distinguish between measurement and calibration (see e.g. Ragin 2008a, b; Schneider and Wagemann 2012, Sect. 1.2), quantitative data are more convenient for establishing the three main qualitative anchors (i.e. the cross-over point of maximum ambiguity, and the lower and upper thresholds for full exclusion from or full inclusion in the set). Quantitative data thus ease the performance of QCA both as a research approach and as a method. QCA scholars are somewhat aware of this when discussing “the two QCAs” (large-n/quantitative data and small-n/more frequent use of qualitative data; Greckhamer et al. 2013; see also Thomann and Maggetti 2017).
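For readers more familiar with quantitative calibration, a minimal sketch may help make the role of these three anchors concrete. The Python fragment below follows Ragin’s (2008a) “direct method”, in which deviations from the cross-over point are rescaled to log-odds and passed through a logistic function; the anchor values used here are illustrative assumptions, not values from our study.

```python
import math

def direct_calibration(x, full_out, crossover, full_in):
    """Ragin-style direct calibration: rescale deviations from the
    cross-over point so that the full-membership anchor maps to a
    log-odds of +3 and the full-non-membership anchor to -3, then
    convert the log-odds to a degree of membership via the logistic
    function."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = -3.0 * (crossover - x) / (crossover - full_out)
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))

# Hypothetical anchors for a "rich countries" set based on GDP per capita:
# full exclusion at 2500, cross-over at 10000, full inclusion at 20000.
for gdp in (2500, 10000, 20000, 30000):
    print(gdp, round(direct_calibration(gdp, 2500, 10000, 20000), 2))
```

With these anchors, a case at the lower threshold receives a membership of roughly 0.05, the cross-over case 0.5, and the upper threshold 0.95; this is exactly the convenience that readily numerical data offer and that verbal data lack.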

Second, the use of qualitative data for performing QCA requires an additional effort on the part of the researcher, because data collected through, for instance, narrative interviews, focus groups and document analysis come in verbal form. Therefore, QCA researchers using qualitative methods for empirical research have to first collect data and only then move to their analysis and conceptualisation as sets (analytical move) and their calibration into “numbers” (membership move) for their subsequent handling through QCA procedures (QCA as a method).

Because of these two main reasons, we claim that data generation (or data construction) should also be recognised and integrated into the QCA research process. Fully accounting for QCA as a “qualitative” research approach necessarily entails questions about the data generation process, especially when qualitative research methods are used whose data come in verbal, not numerical, form.

This study’s contributions are twofold. First, we present the “interpretative spiral” (see Fig. 1) or “cycle” (Sandelowski et al. 2009) through which data gradually transit through changes of state: from meanings to concepts to numerical values. Limiting our discussion to QCA as a research approach, we identify three main moves composing the interpretative spiral: the (1) relational (data generation through qualitative methods), (2) analytical (set conceptualisation) and (3) membership (calibration) moves. Second, we show how in-depth knowledge for subsequent set conceptualisation and calibration can be generated more effectively if the researcher is open, during data collection, to supporting the interviewee’s narration and to establishing a dialogue–a relation–with him/her (i.e. the relational move). It is the researcher’s openness that can facilitate the development of case intimacy for set conceptualisation and assessment (analytical and membership moves). We hence introduce a “dialogical” interviewing style (La Mendola 2009) to show how this approach can be useful for QCA researchers. Although we mainly discuss narrative interviews, a dialogical interviewing style can also be adapted to face-to-face semi-structured interviews or questionnaires.

Figure 1: The interpretative spiral and the relational, analytical and membership moves

Our main aim is to make QCA researchers more aware of “minding their moves” in the interpretative spiral. Additionally, we show how a “dialogical” interviewing style can facilitate access to the in-depth knowledge of cases useful for calibration. Researchers who use narrative interviews but have not yet performed QCA can gain insight into–and potentially see the advantages of–how qualitative data, in particular narrative interviews, can be employed for the performance of QCA (see Gerrits and Verweij 2018:36ff).

In Sect. 2 we present the interpretative spiral (Fig. 1) and the interconnections between the three moves, and we discuss the limited use of qualitative data in QCA research. In Sect. 3, we examine the use of qualitative data for performing QCA by discussing the relational move and a dialogical interviewing style. In Sect. 4, we examine the analytical and membership moves and discuss how QCA researchers have so far dealt with them when using qualitative data. In Sect. 5, we conclude by putting forward some final remarks.

2 The interpretative spiral and the three moves

Sandelowski et al. (2009) state that the conversion of qualitative data into quantitative data (“quantitizing”) necessarily involves “qualitizing”, because researchers perform a “continuous cycling between assigning numbers to meaning and meaning to numbers” (p. 213). “Data” are recognised as “the product of a move on the part of researchers” (p. 209, emphasis added) because information has to be conceptualised, understood and interpreted to become “data”. In Fig. 1, we tailor this “cycling” to the performance of QCA by means of the interpretative spiral.

Through the interpretative spiral, we show both how knowledge for QCA is transformed into data by means of “moves” and how the gathering of qualitative data constitutes a move of its own. Our choice of the term “move” is grounded in the need to communicate a sense of movement along the “cycling” between meanings and numbers. Furthermore, the term “move” resonates with the communicative steps that interviewer and interviewee engage in during an interview (see Sect. 3 below).

Although we present these moves as separate, they are in reality interfaces, because they are part of the same interpretative spiral. They can be thought of as moves in a dance; the latter emerges because of the succession of moves and steps as a whole, as we show below.

The analytical and membership moves are intertwined–as shown by the central “vortex” of the spiral in Fig. 1–as they are composed of a number of interrelated steps, in particular case selection, theory-led set conceptualisation, and definition of the most appropriate set membership scales and of the cross-over and upper and lower thresholds (e.g. crisp-set, 4- or 6-value fuzzy sets; see Ragin 2000:166–171; Rihoux and Ragin 2009). Calibration is the last move of the dialogue between theory (concepts of the analytical move) and data (cases). In the membership move, fuzzy sets are used as “an interpretative algebra, a language that is half-verbal-conceptual and half-mathematical-analytical” (Ragin 2000:4). Calibration is hence a type of “quantitizing” and “qualitizing” (Sandelowski et al. 2009). In applied QCA, set membership values can be reconceptualised and recalibrated, for instance to solve true logical contradictions in the truth table or when QCA results are interpreted by “going back to cases”, hence overlapping with the practices related to QCA “as a method”.
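As a minimal illustration of such membership scales, the snippet below encodes the conventional four- and six-value fuzzy-set schemes as mappings from verbal labels to membership values; the labels follow Ragin’s standard formulations, while the snippet itself is only our illustrative sketch of how the membership move assigns one value per case.

```python
# Conventional fuzzy-set membership scales (verbal labels follow
# Ragin's standard four- and six-value schemes).
FOUR_VALUE = {
    "fully out": 0.0,
    "more out than in": 0.33,
    "more in than out": 0.67,
    "fully in": 1.0,
}

SIX_VALUE = {
    "fully out": 0.0,
    "mostly out": 0.2,
    "more or less out": 0.4,
    "more or less in": 0.6,
    "mostly in": 0.8,
    "fully in": 1.0,
}

# In the membership move, calibration amounts to one qualitative
# judgement per case and condition:
membership = FOUR_VALUE["more in than out"]  # 0.67
```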

The relational move displayed in Fig. 1 expresses the additional interpretative process that researchers engage in when collecting and analysing qualitative data. De Block and Vis (2018) show that only around 30 published QCA-studies combine qualitative data with QCA, including a range of additional data such as observations, site visits and newspaper articles.

However, a closer look reveals that the majority of the published QCA-studies using qualitative data employ (semi)structured interviews or questionnaires. Footnote 1 For instance, Basurto and Speer ( 2012 ) Footnote 2 proposed a step-wise calibration process based on a frequency-oriented strategy (e.g. number of meetings, amount of available information) to calibrate the information collected through 99 semi-structured interviews. Fischer ( 2015 ) conducted 250 semi-structured interviews by cooperating with four trained researchers using pre-structured questions, where respondents could voluntarily add “qualitative pieces of information” in “an interview protocol” (p. 250). Henik ( 2015 ) structured and carried out 50 interviews on whistle-blowing episodes to ensure subsequent blind coding of a high number of items (almost 1000), arguably making them resemble face-to-face questionnaires.

In turn, only a few QCA-researchers use data from narrative interviews. Footnote 3 For example, Metelits (2009) conducted narrative interviews during ethnographic fieldwork over the course of several years. Verweij and Gerrits (2015) carried out 18 “open” interviews, while Chai and Schoon (2016) conducted “in-depth” interviews. Wang (2016) conducted structured interviews through a questionnaire, following a similar approach to Fischer (2015); however, during the interviews, Wang’s respondents were asked to reflexively justify their chosen questionnaire responses, hence moving the structured interviews closer to narrative ones. Tóth et al. (2017) performed 28 semi-structured interviews with company managers to evaluate the quality and attractiveness of customer-provider relationships for maintaining future business relations. Their empirical strategy was, however, grounded in initial focus groups and other semi-structured interviews, composed of open questions in the first part and a questionnaire in the second part (Tóth et al. 2015).

Although no interview is completely structured or unstructured, it is useful to conceptualise (semi-)structured and less structured (or narrative) interviews as the two ends of a continuum (Brinkmann 2014). Albeit still relatively rare compared to quantitative data, the more frequent integration of (semi-)structured interviews into QCA might be due to the advantages that this type of qualitative data holds for calibration. The “structured” portion of face-to-face semi-structured interviews or questionnaires facilitates the calibration of this type of qualitative data, because quantitative anchor points can be more clearly identified to assign set membership values (see e.g. Basurto and Speer 2012; Fischer 2015; Henik 2015).

Hence, when looking critically at the “qualitative” character of QCA as a research approach, applied research shows that qualitative methods fit uneasily with QCA. This is because data collection has not been recognised as an integral part of the QCA research process. In Sect. 3, we show how qualitative data, and in particular a dialogical interviewing style, can help researchers to develop case intimacy.

3 The relational move

Social data are not self-evident facts; they do not reveal anything in themselves. Rather, researchers must engage in interpretative efforts concerning their meaning (Sandelowski et al. 2009; Silverman 2017). Differently stated, quantitizing and qualitizing characterise both quantitative and qualitative social data, albeit to different degrees (Sandelowski et al. 2009). This is an ontological understanding of reality that is diversely held by post-positivist, critical realist, critical and constructivist approaches (except by positivist scholars; see Guba and Lincoln 2005:193ff). Our position is most akin to critical realism, which, in contrast to post-modernist perspectives (Spencer et al. 2014:85ff), holds that reality exists “out there” and that, epistemologically, our knowledge of it, although imperfect, is possible–for instance through the scientific method (Sayer 1992).

The socially constructed, not self-evident character of social data is manifest in the collection and analysis of qualitative data. Access to the field needs to be earned, as do trust and consent from participants, to gradually build and expand a network of participants. More than “collected”, data are “gathered”, because they imply cooperation with participants. Data from interviews and observations are heterogeneous, and need to be transcribed and analysed by researchers, who also self-reflectively experience the entire process of data collection. QCA researchers using qualitative data necessarily have to go through this additional research process–or move–to gather and generate data, before QCA as a research approach can even start. As QCA researchers using qualitative data need to interact with participants to collect their data, we call this additional research process the “relational move”.

While we limit our discussion to narrative interviews and select a few references from a vast literature, our claim is that it is the ability of the interviewer to give life to the interview as a distinct type of social interaction that is key to the data collection process (Chase 2005; Leech 2002; La Mendola 2009; Brinkmann 2014). The ability of the interviewer to establish a dialogue with the interviewee–also in the case of (semi-)structured interviews–is crucial to gain access to case-based knowledge and thus develop the case intimacy later needed in the analytical and membership moves. The relational move is about the researcher’s ability to handle the intrinsic duality characterising that specific social interaction we define as an interview. Both (or more) partners have to be considered as necessary actors involved in giving shape to the “inter-view” as an ex-change of views.

Qualitative researchers call this ability “rapport” (Leech 2002:665), “contract” or “staging” (Legard et al. 2003:139). In our specific understanding of the relational move through a “dialogical” Footnote 4 interviewing style, during the interview (1) the interviewer and the interviewee become the “listener” and the “narrator” (Chase 2005:660), and (2) a true dialogue between listener and narrator can only take place when they engage in an “I-thou” interaction (Buber 1923/2010), as we will show below when we discuss selected examples from our own research.

As a communicative style, in a dialogical interview not only can the researcher not disappear behind the veil of objectivity (Spencer et al. 2014), but the researcher is also aware of the relational duality–or “dialogueness”–inherent to the “inter-view”. Dialogical face-to-face interviewing can be compared to a choreography (Brinkmann 2014:283; Silverman 2017:153) or a dance (La Mendola 2009, ch. 4 and 5) where one of the partners (the researcher) is the porteur (“supporter”) of the interaction. As in a dancing couple, the listener supports, but does not lead, the narrator in the unfolding of her story. The dialogical approach to interviewing is hence non-directive, but supportive. A key characteristic of dialogical interviews is a particular way of “being in the interview” (see Example 2 below), because it requires the researcher to consider the interviewee as a true narrator (a “thou”). Footnote 5

In a dialogical approach to interviews, questions can be thought of as frames through which the listener invites the narrator to tell a story in her own terms (Chase 2005:662). The narrator becomes the “subject of study” who can be disobedient and capable of raising her own questions (Latour 2000:116; see also Lund 2014). This is also compatible with a critical realist ontology and epistemology, which holds that researchers inevitably draw artificial (but negotiable) boundaries around the object and subject of analysis (Gerrits and Verweij 2013). The case-based, or data-driven (ib.), character of QCA as a research approach hence takes on a new meaning: in a dialogical interviewing style, although the interviewer/listener proposes a focus of analysis and a frame of meaning, the interviewee/narrator is given the freedom to re-negotiate that frame of meaning (La Mendola 2009; see Examples 1 and 2 below).

We argue that this is an appropriate way to obtain case intimacy and in-depth knowledge for subsequent QCA, because it is the narrator who proposes meanings that will then be translated by the researcher, in the following moves, into set membership values.

Particularly key to a dialogical interviewing style is question formulation, where the interviewer privileges “how” questions (Becker 1998). In this way, the interviewer avoids “what” and “why” (evaluative) questions, which ask the interviewee to rationally explain, with hindsight, a process that supposedly developed in a linear way. Typifying questions, through which the interviewer gathers general information, are also avoided (e.g. Can you tell me about the process through which an urban project is typically built? Can you tell me about your typical day as an academic?). Footnote 6 “Dialogical” questions can start with: “I would like to propose that you tell me about…” and are akin to “grand tour questions” (Spradley 1979; Leech 2002) or questions posed “obliquely” (Roulston 2018), because they aim at collecting stories and episodes in a certain situation or context, allowing the interviewee to be relatively free in answering.

An example taken from our own research, a QCA of large-scale urban transformations in Western Europe, illustrates the distinct approach characterising dialogical interviewing. One of our aims was to reconstruct the decision-making process concerning why and how a certain urban transformation took place (Pagliarin et al. 2019). QCA has previously been used to study urban development and spatial policies because it is sensitive to individual cases, while also accounting for cross-case patterns by means of causal complexity (configurations of conditions), equifinality and causal asymmetry (e.g. Byrne 2005; Verweij and Gerrits 2015; Gerrits and Verweij 2018). A conventional way to formulate this question would be: “In your opinion, why did this urban transformation occur at this specific time?” or “Which were the governance actors that decided its implementation?”. Instead, we formulated the question in a narrative and dialogical way:

Example 1 Listener [L]: Can you tell me how the site identification and materialization of Ørestad came about? Narrator [N]: Yes. I mean there’s always a long background for these projects. (…) it’s an urban area built on partly reclaimed land. It was, until the second world war, a seaport and then they reclaimed it during the second world war, a big area. (…) this is the island called Amager. In the western part here, you can see it differs completely from the rest and that’s because they placed a dam all around like this, so it’s below sea level. (…) [L]: When you say “they”, it’s…? [N]: The municipality of Copenhagen. Footnote 7 (…)

In this example, the posed question (“how… [it]… came about?”) is open and oriented toward collecting the narrator’s specific story about “how” the Ørestad project emerged (Becker 1998), starting at the specific time point and from the angle decided by the interviewee. In this example, the interviewee decided to start just after the Second World War (albeit the focus of the research was only from the 1990s onwards) and described the area’s geographical characteristics as a background for the subsequent decision-making processes. It is then up to the researcher to support the narrator in funnelling the narration into the topics and themes of interest for the research. In the above example, the listener asked: “When you say “they”, it’s…?” to signal to the narrator to be more specific about “they”, without, however, presuming to know the answer (“it’s…?”). In this way, the narrator is supported in expanding on the role of Copenhagen municipality without being directly asked for it (which nevertheless always remains a possibility to be seized by the interviewer).

The specific “dialogical” way of the researcher of “being in the interview” is rooted in the epistemological awareness of the discrepancy between the narrator’s representation and the listener’s. During an interview, there are a number of “representation loops”. As discussed for the interpretative spiral (see Sect. 2), the analytical and membership moves are characterised by a number of research steps; similarly, in the relational move the researcher engages in representation loops or interpretative steps when interacting with the interviewee. The researcher holds (a) an analytical representation of her focus of analysis, (b) which will be re-interpreted by the interviewee (Geertz 1973). In a dialogical interviewing style, the researcher also embraces (c) her representation of the (b) interviewee’s interpretation of (a) her theory-led representation of the focus of analysis. Taken together, (a)-(b)-(c) are the structuring steps of a dialogical interview, where the listener’s and narrator’s representations “dance” with one another. In the relational move, the interviewer is aware of the steps from one representation to another.

In the following Example 2, the narrator re-elaborated (interpretative step b) the frame of meaning of the listener (interpretative step a) by emphasising two development stages of a certain project (an airport expansion in Barcelona, Spain), which the researcher had not previously thought of (interpretative step c):

Example 2 [L]: Could you tell me about how the project identification and realisation of the Barcelona airport come about? [N]: Of the Barcelona airport? Well. The Barcelona airport is I think a good thermometer of something deeper, which has been the inclusion of Barcelona and of its economy in the global economy. So, in the last 30 years El Prat airport has lived through like two impulses of development, because it lived, let's say, the necessary adaptation to a specific event, that is the Olympic games. There it lived its first expansion, to what we today call Terminal 2. So, at the end of the '80s and early '90s, El Prat airport experienced its first big jump. (...) Later, in 2009 (...) we did a more important expansion, because we did not expand the original terminal, but we did a new, bigger one, (...) the one we now call Terminal 1. Footnote 8

If the interviewee is considered as a “thou”, and if the researcher is aware of the representation loops (see above), the collected information can also be helpful for constructing the study population in QCA. The population under analysis is oftentimes not given in advance but gradually defined through the process of casing (Ragin 2000). This allows the researcher to be open to constructing the study population “with the help of others”, like “informants, people in the area, the interlocutors” (Lund 2014:227). For instance, in Example 2 above, the selection of which urban transformations will form the dataset can depend on the importance interviewees give to the structuring impact of a certain urban transformation on the overall urban structure of an urban region.

In sum, the data collection process is a move in its own right in the research process for performing QCA. Especially when the collected data are qualitative, the researcher engages in a relation with the interlocutor to gather information. A dialogical approach emphasises that the quality of the gathered data depends on the quality of the dialogue between narrator and listener (La Mendola 2009). When the listener is open to considering the interviewee as a “thou”, and when she is aware of the interpretative steps occurring in the interview, then meaningful case-based knowledge can be accessed.

Case intimacy is best developed when the researcher is open to integrating her focus of analysis with fieldwork information and when s/he invites, as in a dance, the narrator to tell his story. A dialogical interviewing style is not theory-free, but it is “theory-independent”: the dialogical interviewer supports the narration of the interviewee and does not lead the narrator by imposing her own conceptualisations. We argue that such a dialogical I-thou interaction during interviews fosters in-depth knowledge of cases, because the narrator is treated as a subject who can propose his interpretation of the focus of analysis before the researcher frames it within her analytical and membership moves.

However, in practice, there is a tension between the researcher's need to collect data and the “here-and-now interactional event of the interview” (Rapley 2001:310). It is inevitable that the researcher re-elaborates, to a certain degree, her analytical framework during the interviews, because this enables the researcher to get acquainted with the object of analysis and to keep the interview content on target with the research goals (Jopke and Gerrits 2019). But it is this re-interpretation of the interviewee's replies and stories by the listener during the interviews that opens the interviewer’s awareness of the representation loops.

4 The analytical and membership moves

Researchers engage in face-to-face interviews as a strategy for data collection by holding specific analytical frameworks and theories. A researcher seldom begins his or her undertakings, even in the exploratory phase, with a completely open mind (Lund 2014:231). This means that the researcher's representations (a and c, see above) of the narrator's representation(s) (b, see above) are related to the theory-led frames of inquiry that the researcher organises to understand the world. These frames are typically also verbal, as “[t]his framing establishes, and is established through, the language we employ to speak about our concerns” (Lund 2014:226).

In particular for the collection of qualitative data, the analytical move is composed of two main movements: during and after the data collection process. During the data collection process, when adopting a dialogical interviewing style, the researcher should mind keeping the interview theory-independent (see above). First, this means that the interviewee is not asked to rise to the researcher’s analytical level. The use of jargon should be avoided, whether in narrative or semi-structured interviews and questionnaires, because it would confine the narrator's representation(s) (b) within the listener's interpretative frames (a), and hence limit the chance for the researcher to gain in-depth case knowledge (c). Silverman (2017:154) cautions against “flooding” interviewees with “social science categories, assumptions and research agendas”. Footnote 9 In Example 1 above, the use of the words “governance actors” may have misled the narrator–even an expert one–since its meaning might not be clear or might not be the same as the interviewer's.

Second, the researcher should neither sympathise with the interviewee nor judge the narrator’s statements, because this would transform the interview into another type of social interaction, such as a conversation, an interrogation or a confession (La Mendola 2009 ). The analytical move requires that the researcher does not confuse the interview as social interaction with his or her analysis of the data, because this is a specific, separate moment after the interview is concluded. Whatever material or stories a researcher receives during the interviews, it is eventually up to him or her to decide which representation(s) will be told (and how) (Stake 2005 :456). It is the job of the researcher to perform the necessary analytical work on the collected data.

After the fieldwork, the second stage of the analytical move is a change of state of the interviewees' replies and stories so that they can subsequently “feed into” QCA. The researcher begins to qualitatively assess and organise the in-depth knowledge, in the form of replies or stories, received from the interviewees through their narrations. This usually involves the (double-)coding of the qualitative material, manually or through the use of dedicated software. The analysis of the qualitative material organises the in-depth knowledge gained through the relational move and sustains the (re)definition of the outcome and conditions, and their related attributes and sub-dimensions, for performing QCA.
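As a minimal sketch of this change of state (the codes and the data structure are our hypothetical illustration, not a prescribed procedure), coded interview segments can be organised by case and candidate condition so that each cell of the eventual QCA dataset remains traceable to its supporting quotes:

```python
from collections import defaultdict

# Hypothetical (case, code, quote) triples produced by (double-)coding
# interview transcripts, manually or with QDA software.
coded_segments = [
    ("Orestad", "external_event",
     "they reclaimed it during the second world war"),
    ("Part-Dieu", "external_event",
     "military barracks in an area ... to reconvert in a business district"),
    ("Part-Dieu", "national_policy",
     "it is part of a major national policy"),
]

# Organise the evidence by case and candidate condition, so that every
# cell of the eventual QCA dataset stays traceable to supporting quotes.
evidence = defaultdict(lambda: defaultdict(list))
for case, code, quote in coded_segments:
    evidence[case][code].append(quote)

print(evidence["Part-Dieu"]["external_event"])
```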

In recognising the difficulty of integrating qualitative (interview) data into QCA procedures, QCA-researchers have developed templates, tables or tree diagrams to structure the analysed qualitative material into set membership scores (Basurto and Speer 2012; Legewie 2017; Tóth et al. 2017; see also online supplementary material). We call these different templates “ Supports for Membership Representation ” (SMeRs) because they facilitate the passage from conceptualisation (analytical move) to operationalisation into set membership values (membership move). Below, we discuss these templates by placing them along a continuum from “more theory-driven” to “more data-driven” (see Gerrits and Verweij 2018, ch. 1). Although the studies discussed below did not use a dialogical approach to interviews, we also examine the SMeRs in terms of their openness towards the collected material. As explained above, we believe it is this openness–at best “dialogical”–that facilitates the development of case intimacy on the side of the researcher. In line with the steps characterising both moves (see Sect. 2 above), we differentiate below between the analytical and membership moves.

Basurto and Speer (2012) were the first to develop and present a preliminary but modifiable list of theoretical dimensions for conditions and outcome. Their interview guideline is purposely developed to obtain responses that identify anchor points prior to the interviews and to match fuzzy sets. In our perspective, this contravenes the separation between the relational and analytical moves: the researcher deals with interviewees as “objects” whose shared information is fitted to the researcher's analytical framework. In their analytical move, Basurto and Speer define an ideal and a deviant case–both of them non-observable–to locate their cases by comparison and facilitate the assignment of fuzzy-set membership scores (membership move).

Legewie (2017) proposes a “grid” called Anchored Calibration (AC), building on Goertz (2006). In the analytical move, the researcher first structures (sub-)dimensions for each condition and the outcome by means of concept trees. Each concept is then represented by a gradation, which should form a conceptual continuum (e.g. from low to high) and is organised in a tree diagram to include sub-dimensions of the conditions and outcome. In the membership move, anchor points (i.e. 0, 0.25, 0.75, 1) are assigned to each “graded” concept. The researcher then iteratively matches coded evidence from narrative interviews (analytical move) to the identified anchor points for calibration, thus assigning set membership scores (e.g. 0.33 or 0.67; i.e. membership move). As with Basurto and Speer (2012), the analytical framework of the researcher is given priority and tightly structures the collected data. Key to anchored calibration is the conceptual neatness of the SMeR, which is advantageous for the researcher but which, in our perspective, allows only a limited dialogue with the cases and hence limits the development of case intimacy.
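Schematically, the AC logic can be rendered as below; the condition, the gradations and the matching function are our hypothetical simplification of Legewie’s (2017) procedure, shown only to make the anchoring step concrete:

```python
# Hypothetical concept gradations for a condition, anchored at the
# fixed membership values used in Anchored Calibration (0, 0.25, 0.75, 1).
GRADATIONS = [
    (0.00, "no stakeholder involvement"),
    (0.25, "sporadic consultation"),
    (0.75, "regular consultation"),
    (1.00, "co-decision on all major steps"),
]

def match_evidence(coded_level: str) -> float:
    """Match coded interview evidence to its anchored gradation.
    Evidence judged to fall between two gradations can be given an
    intermediate score (e.g. 0.33 or 0.67) by the researcher."""
    for anchor, label in GRADATIONS:
        if label == coded_level:
            return anchor
    raise ValueError(f"no anchored gradation for: {coded_level!r}")

print(match_evidence("regular consultation"))  # 0.75
```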

An alternative route is the one proposed by Tóth et al. (2017). The authors devise the Generic Membership Evaluation Template (GMET) as a “grid” in which qualitative information from the interviews (e.g. quotes) and from the researcher’s interpretative process is included. In the analytical move, their template clearly serves as a “translation support” for representing “meanings” as “numbers”: researchers include information on how they interpreted the evidence (e.g. the positive/negative direction/effect of a certain attribute on membership; i.e. analytical move), as well as an explanation of why specific set membership scores have been assigned to cases (i.e. membership move). Tóth et al.’s (2017) SMeR appears more open to the interviewees’ perspective, as the researchers engaged in a mixed-method research process in which the moment of data collection–the relational move–is elaborated on (Tóth et al. 2015). We find their approach more effective for gaining in-depth knowledge of cases and for supporting the dialogue between theory and data.
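A GMET-like record could minimally hold the evidence, the interpreted direction and the rationale alongside the assigned score; the field names below are our assumption for illustration, not Tóth et al.’s (2017) exact template:

```python
from dataclasses import dataclass

@dataclass
class MembershipEvaluation:
    """One row of a GMET-like template: the assigned score travels
    together with its evidence and the researcher's rationale."""
    case: str
    condition: str
    quote: str       # supporting evidence from the interview
    direction: str   # interpreted effect on membership ("+" or "-")
    score: float     # assigned set membership value
    rationale: str   # why this score was assigned

row = MembershipEvaluation(
    case="Lyon Part-Dieu",
    condition="external_events",
    quote="it is part of a major national policy",
    direction="+",
    score=0.67,
    rationale="Military barracks were dismantled, but the national "
              "territorial strategy would likely have driven the "
              "redevelopment anyway.",
)
print(row.case, row.score)
```

Keeping score, quote and rationale in one record is what makes the “interface” auditable: anyone re-reading the calibrated dataset can trace each number back to the words that motivated it.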

Jopke and Gerrits (2019) discuss routines, concrete procedures and recommendations on how to inductively interpret and code qualitative interview material for subsequent calibration using a grounded-theory approach. In their analytical move, the authors show how conditions can be constructed from the empirical data collected in interviews; they suggest first performing an open coding of the interview material and then continuing with a theoretical coding (or “closed coding”) informed by the categories identified in the previous open coding procedure, before defining set membership scores for cases (i.e. membership move). Similar to Tóth et al. (2017), Jopke and Gerrits’ (2019) SMeR engages with the data collection and the gathered qualitative material by being open to what the “data” have to “tell”, hence implementing a strategy for data analysis that is effective for gaining in-depth knowledge of cases.

Another type of SMeR is the elaboration of summaries of the interview material by unit of analysis (e.g. urban transformations, participation initiatives, interviewees’ individual career paths). Rihoux and Lobe (2009) propose so-called short case descriptions (SCDs). Footnote 10 As a possible step within the interpretative spiral, SCDs are concise summaries that synthesise the most important information, sorted by certain identified dimensions, which will then compose the conditions, and their sub-dimensions, for QCA. As a type of SMeR, these summaries consist of a change of state of the qualitative material, because they provide “intermediate” information at the threshold between the coding of the interview transcripts and the subsequent assignment of membership scores (the membership move, or calibration) for the outcome and each condition. Furthermore, the writing of short summaries appears particularly useful for allowing researchers who have already performed narrative interviews to evaluate whether to carry out QCA as a systematic method for comparative analysis. For instance, similar to what Tóth et al. (2017:200) did to reduce interview bias, in our own research interviewees could cover the development of multiple cases, and the use of short summaries helped us compare information on each case across multiple interviewees and spot possible contradictions.

The overall advantage of SMeRs is that they help researchers gain an overview of the quality and “patchiness” of the available information about the cases per interview (or document). SMeRs can also help spot inconsistencies and contradictions, thus guiding researchers in judging whether their data can provide sufficiently homogeneous information for the conditions and outcome composing their QCA-model. This is particularly relevant in case-based QCA research, where descriptive inferences are drawn from the material collected on the selected cases and depend on the degree of its internal validity (Thomann and Maggetti 2017:361). Additionally, the “quality” and “quantity” of the available qualitative data (de Block and Vis 2018) can be checked ex ante, before embarking on QCA.

For the membership move, the GMET, the AC, grounded theory coding and short summaries support the qualitative assignment of set membership values from empirical interview data. SMeRs typically include an explanation of why a certain set membership score has been assigned to each case record, and diagrammatically arrange information about the interpretation path that researchers have followed to attribute values. They are hence a true “interface” between qualitative empirical data (“words/meaning”) and set membership values (“numbers”). Each dimension included in SMeRs can also be coupled with direct quotes from the interviews (Basurto and Speer 2012; Tóth et al. 2017).

In our own research (Pagliarin et al. 2019), after having coded the interview narratives, we developed concepts and conditions first by comparing the gathered information through short summaries–similar to the short case descriptions (SCDs) of Rihoux and Lobe (2009)–and then by structuring the conditions and indicators in a grid, adapting the template proposed by Tóth et al. (2017). One of the goals of our research was to identify “external factors or events” affecting the formulation and development of large-scale urban transformations. External (national and international) events (e.g. a failed/winning bid for the Olympic Games, the fall of the Iron Curtain/Berlin Wall) do not have an effect per se, but they stimulate actors locally to make a certain decision about project implementation. We were able to gain this knowledge because we adopted a dialogical interviewing style (see Example 3 below). As the narrator is invited to tell us about some of the most relevant projects of urban transformation in Greater Copenhagen in the past 25–30 years, he or she is free to mention the main factors and actors impacting on Ørestad as an urban transformation.

Example 3 [L]: In this interview, I would propose that you tell me about some of the most relevant projects of urban transformation that have been materialized in Greater Copenhagen in the past 25–30 years. I would like you to tell me about their itinerary of development, step by step, and if possible from where the idea of the project emerged. [N]: Okay, I will try to start in the 80’s. In the 80’s, there was a decline in the city of Copenhagen. (…) In the end of the 80’s and the beginning of the 90’s, there was a political trend. They said, “We need to do something about Copenhagen. It is the only big city in Denmark so if we are going to compete with other cities, we have to make something for Copenhagen so it can grow and be one of the cities that can compete with Amsterdam, Hamburg, Stockholm and Berlin”. I think also it was because of the EU and the market so we need to have something that could compete and that was the wall falling in Berlin. (…) The Berlin Wall, yes. So, at that time, there was a commission to sit down with the municipality and the state and they come with a plan or report. They have 20 goals and the 20 goals was to have a bridge to Sweden, expanding of the airport, a metro in Copenhagen, investment in cultural buildings, investment in education. (…) In the next 5 years, from the beginning of the 90’s to the middle of the 90’s, there were all of these projects more or less decided. (…) The state decided to make the airport, to make the bridge to Sweden, to make… the municipality and the city of Copenhagen decides to make Ørestad and the metro together with the state. So, all these projects that were lined up on the report, it was, let’s decide in the next 5 years. [L]: So, there was a report that decided at the end of the 80’s and in the 90’s…? [N]: Yes, ‘89. (…) To make all these projects, yes. (…). [L]: Actually, one of the projects I would like you to tell me about is the Ørestad. [N]: Yes. It is the Ørestad. The Ørestad was a transformation… (…).

The factors mentioned by the interviewee corresponded to the main topics of interest to the researcher. In this example, we can also highlight the presence of a “prompt” (Leech 2002) or “clue” (La Mendola 2009). To keep the narrator in focus, the researcher “brings back” (the original meaning of rapporter ) the interviewee to the main issues of the inter-view by asking “So, there was a report…”.

Following the question formulation shown in Example 3, below we compare the external event(s) impacting the cases of Lyon Part-Dieu in France (Example 4) and Scharnhauserpark in Stuttgart, Germany (Example 5).

Example 4 [N]: So, Part-Dieu is a transformation of the 1970s, to equip [Lyon] with a Central Business District like almost all Western cities, following an encompassing regional plan. This is however not local planning, but it is part of a major national policy. (…) To counterbalance the macrocephaly of Paris, 8 big metropolises were identified to re-balance territorial development at the national level in the face of Paris. (…) including Lyon. (…) The genesis of Part-Dieu is, in my opinion, a real-estate opportunity, and the fact to have military barracks in an area (…) 15 min away from the city centre (…) to reconvert in a business district. Footnote 11
Example 5 [N]: When the American Army left the site in 1992, the city of Ostfildern consisted of five villages. They bought the site and they said, “We plan and build a new centre for our village”, because these are five villages and this is in the very centre. It’s perfectly located, and when they started they had 30,000 inhabitants and now that it’s finished, they have 40,000, so a third of the population were added in the last 20 years by this project. For a small municipality like Ostfildern, it was a tremendous effort and they were pretty good at it. Footnote 12

In the examples above, Lyon Part-Dieu and Scharnhauserpark are unique cases that developed into areas with different functions (a business district and a mixed-use area), but we can identify a similar event: the unforeseen dismantling of military sites. Both events were considered external factors, precisely identifiable in time, that triggered the redevelopment of the areas. Instead, in the following illustration about the “Confluence” urban renewal in Lyon, the identified external event relates to a global trend regarding post-industrial cities and the “patchwork” replacement of functions in urban areas:

Example 6 [N]: The Confluence district (…) the wholesale market dismantles and opens an opportunity at the south of the Presqu'Île, so an area extremely well located, we are in the city centre, with water all around because of the Saône and Rhône rivers, so offering a great potential for a high quality of life. However, I say “potential” because there is also a highway passing at the boundary of the neighbourhood. Footnote 13

Although our theoretical framework identified a set of exogenous factors affecting large-scale urban transformations locally, we used the empirical material from our interviews to conceptualise the closing of military barracks and the dismantling of the wholesale market as two different, but similar, types of external events, and considered them to be part of the same “external events” condition. In set-theoretic terms, this condition is defined as the “set of projects where external (unforeseen) events or general/international trends had a large impact on project implementation”. The broad set conceptualisation of this condition is possibly not optimal, but it reflects the tension in comparative research between capturing cases’ individual histories (case idiosyncrasies) and forming concepts that are abstract “enough” to account for cross-case patterns (see Gerrits and Verweij 2018; Jopke and Gerrits 2019). This is a key challenge of the analytical move.

However, the core of the subsequent membership move is precisely to perform a qualitative assessment that captures these differences by assigning different set-membership values. The case of Lyon Confluence, where the closing of the wholesale market as external event did happen but had only a “general” influence on the area’s redevelopment, was given a set membership value of 0.33 in this condition. In contrast, the case of Lyon Part-Dieu was given a set membership score of 0.67 in the condition “external events” because a French military area was dismantled, but this was combined with a national strategy of the French state to redistribute territorial development across France. According to our analysis of the collected qualitative material, the dismantling of the military area was an advantage, but the redevelopment of Part-Dieu would probably have been affected by the overall national territorial strategy anyway. Footnote 14 Finally, the case of Stuttgart Scharnhauserpark was given full membership (1.00) in the condition, because the US army leaving the area–an indication of a “fully exogenous” event–truly stimulated urban change in Scharnhauserpark. Footnote 15
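Purely as a summary sketch, the resulting column of the calibrated dataset for this condition would look as follows (the values are those discussed above; the dictionary form is our illustration):

```python
# Calibrated membership in the condition "external events" for the
# cases discussed in Examples 4-6 (values from our membership move).
external_events = {
    "Lyon Confluence": 0.33,             # event happened, only a "general" influence
    "Lyon Part-Dieu": 0.67,              # barracks dismantled + national strategy
    "Stuttgart Scharnhauserpark": 1.00,  # fully exogenous: US army left the site
}
```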

Our calibration (membership move) of the three cases illustrated in Examples 4, 5 and 6 shows that set membership values represent a concept, at times a relatively broad one to allow comparison (analytical move), but that they do not replace the specific way (or “meaning”) in which the impact of external factors empirically instantiates in each of the cases discussed in the above examples.

In the interpretative spiral (Fig. 1), there is hence–despite our wishes–no perfect correspondence between meanings and numbers (quantitizing) and numbers and meanings (qualitizing; see Sandelowski et al. 2009). This is a consequence of the constructed nature of social data (see Sect. 2). When using qualitative data, fuzzy sets are “ interpretive tools” to operationalise theoretical concepts (Ragin 2000:162, original emphasis) and hence are approximations to reality. In other words, set membership values are tokens. Here, we agree with Sandelowski et al. (2009), who are critical of “the rhetorical appeal of numbers” (p. 208) and the vagaries of ordinal categories in questionnaires (p. 211ff).

Note that calibration using qualitative data is not blurry or unreliable. On the contrary, its robustness is given by the quality of the dialogue established between researcher and interviewee and by the acknowledgement that the analytical and membership moves are types of representation–the fourth and fifth representation loops. It might hence be that QCA researchers using qualitative data have a different research experience of QCA as a research approach and method than QCA researchers using quantitative data.

5 Conclusion

In this study, we critically observed that, so far, qualitative data have been used in few QCA studies, and that only a handful of these use narrative interviews (de Block and Vis 2018). This situation is puzzling because qualitative research methods can offer an effective route to the in-depth case knowledge, or case intimacy, considered key to performing QCA.

Besides the higher malleability of quantitative data for set conceptualisation and calibration (here called the “analytical” and “membership” moves), we claimed that the limited use of qualitative data in applied QCA research stems from the failure to recognise that the data collection process is a constituent part of QCA “as a research approach”. Qualitative data, such as interviews, focus groups or documents, come in verbal form–hence less “ready” for calibration than quantitative data–and require a research phase of their own for data collection (here called the “relational move”). The relational, analytical and membership moves form an “interpretative spiral” that accounts for the main research phases composing QCA “as a research approach”.

In the relational move, we showed how researchers can gain access to in-depth case-based knowledge, or case intimacy, by adopting a “dialogical” interviewing style (La Mendola 2009). First, researchers should be aware of the discrepancy between the interviewee/narrator’s representation and the interviewer/listener’s. Second, researchers should establish an “I-thou” relationship with the narrator (Buber 1923/2010; La Mendola 2009). As in a dancing couple, the interviewer/listener should accompany, but not lead, the narrator in the unfolding of her story. These are fundamental routes to making the most of QCA’s qualitative potential as a “close dialogue with cases” (Ragin 2014:81).

In the analytical and membership moves, researchers code, structure and interpret their data to assign crisp- and fuzzy-set membership values. We examined the variety of templates–what we call Supports for Membership Representation (SMeRs)–designed by QCA-researchers to facilitate the assignment of “numbers” to “words” (Rihoux and Lobe 2009 ; Basurto and Speer 2012 ; Legewie 2017 ; Tóth et al. 2015 , 2017 ; Jopke and Gerrits 2019 ).

Our study did not offer an overarching examination of the research process involved in QCA, but critically focussed on a specific aspect of QCA as a research approach. We focussed on the “translation” of data collected through qualitative research methods (“words” and “meanings”) into set membership values (“numbers”). Hence, in this study the discussion of QCA as a method has been limited.

We hope our paper is a first contribution to identifying and critically examining the “qualitative” character of QCA as a research approach. Further research could identify other relevant moves in QCA as a research approach, especially when non-numerical data are employed and with regard to internal and external validity. Other moves and steps could also be identified or clearly labelled in QCA as a method, in particular when assessing limited diversity and skewness (e.g. a “data distribution” step) and the management of true logical contradictions (e.g. a “solving contradictions” move). These are all different mo(ve)ments in the full-fledged application of QCA that allow researchers to make sense of their data and to connect “theory” and “evidence”.

Footnotes

Footnote 1: As also noted by de Block and Vis (2018), QCA researchers are not always clear about what exactly they mean by “in-depth” or “open” interviews and how these informed the calibration process (e.g. Verweij and Gerrits 2015), especially when quantitative data and different coders were also used (e.g. Chai and Schoon 2016).

Footnote 2: See online appendix.

Footnote 3: We are aware that other studies combining narrative interviews and QCA have been carried out, but here we limit our discussion to articles already published and known to us at the time of writing.

Footnote 4: Without going into further detail on this occasion, the term “dialogical” explicitly refers to the “dialogical epistemology” discussed by Buber (1923/2010), who distinguishes between an “I-thou” relation and an “I-it” experience. In this perspective, “dialogical” is considered a synonym of “relational” (i.e. an “I-thou” relation).

Footnote 5: See footnote 4.

Footnote 6: The interviewer avoids posing evaluative and typifying questions to the narrator, but naturally works through evaluative and typifying research questions.

Footnote 7: Copenhagen, Interview 5, September 1, 2016.

Footnote 8: Barcelona, Interview 1, June 27, 2016. Translated from the original Spanish.

Footnote 9: We take the risk of quoting Silverman (2017), although in his article he warned against extracting and using quotes to support researchers' arguments.

Footnote 10: Gerrits and Verweij (2018) also emphasise the usefulness of thick case descriptions.

Footnote 11: Lyon, Interview 4, October 13, 2016. Translated from the original French.

Footnote 12: Stuttgart, Interview 1, July 18, 2016.

Footnote 13: Lyon, Interview 1, October 11, 2016. Translated from the original French.

Footnote 14: This consideration also relates to the interdependence, and not necessarily independence, of conditions in QCA, a topic that is beyond the scope of this study (see e.g. Jopke and Gerrits 2019).

Footnote 15: For a discussion regarding the “absence” of possible factors from interviewees' narrations, we refer readers to Sandelowski et al. (2009) and de Block and Vis (2018). In general, data triangulation is a good strategy for dealing with partial and even contradictory information collected from multiple interviewees. For our own data triangulation, we also used an online questionnaire, additional literature and site visits (Pagliarin et al. 2019).

Basurto, X., Speer, J.: Structuring the calibration of qualitative data as sets for qualitative comparative analysis (QCA). Field Methods 24 , 155–174 (2012)

Becker, H.S.: Tricks of the Trade: How to Think About Your Research While You’re Doing It. University Press, Chicago (1998)

Brinkmann, S.: Unstructured and Semistructured Interviewing. In: Leavy, P. (ed.) The Oxford Handbook of Qualitative Research, pp. 277–300. University Press, Oxford (2014)

Buber, M.: I and Thou. Charles Scribner’s Sons, New York (1923/2010)

Byrne, D.: Complexity configurations and cases. Theory Cult. Soc. 22 (5), 95–111 (2005)

Chai, Y., Schoon, M.: Institutions and government efficiency: decentralized irrigation management in China. Int. J. Commons 10 (1), 21–44 (2016)

Chase, S.E.: Narrative inquiry: multiple lenses, approaches, voices. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 631–679. Sage, Thousand Oaks, CA (2005)

Collier, D.: Symposium: the set-theoretic comparative method—critical assessment and the search for alternatives. SSRN Scholarly Paper ID 2463329. Social Science Research Network, Rochester, NY (2014). Available at: https://papers-ssrn-com.eur.idm.oclc.org/abstract=2463329 (Accessed 9 March 2021)

de Block, D., Vis, B.: Addressing the challenges related to transforming qualitative into quantitative data in qualitative comparative analysis. J. Mixed Methods Res. 13 , 503–535 (2018). https://doi.org/10.1177/1558689818770061

Fischer, M.: Institutions and coalitions in policy processes: a cross-sectoral comparison. J. Publ. Policy 35 , 245–268 (2015)

Geertz, C.: The Interpretation of Cultures. Basic Books, New York (1973)

Gerrits, L., Verweij, S.: The Evaluation of Complex Infrastructure Projects. A Guide to Qualitative Comparative Analysis. Edward Elgar, Cheltenham UK (2018)

Goertz, G.: Social Science Concepts. A User’s Guide. University Press, Princeton (2006)

Greckhamer, T., Misangyi, V.F., Fiss, P.C.: The two QCAs: from a small-N to a large-N set theoretic approach. In: Fiss, P.C., Cambré, B., Marx, A. (eds.) Configurational Theory and Methods in Organizational Research (Research in the Sociology of Organizations, Vol. 38), pp. 49–75. Emerald Group Publishing Limited, Bingley (2013). https://doi.org/10.1108/S0733-558X(2013)0000038007

Guba, E.G., Lincoln, Y.S.: Paradigmatic controversies, contradictions and emerging confluences. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 191–215. Sage, Thousand Oaks, CA (2005)

Harvey, D.L.: Complexity and case. In: Byrne, D., Ragin, C.C. (eds.) The SAGE Handbook of Case-Based Methods, pp. 15–38. SAGE Publications Inc, London (2009)

Henik, E.: Understanding whistle-blowing: a set-theoretic approach. J. Bus. Res. 68 , 442–450 (2015)

Jopke, N., Gerrits, L.: Constructing cases and conditions in QCA – lessons from grounded theory. Int. J. Soc. Res. Methodol. 22 (6), 599–610 (2019). https://doi.org/10.1080/13645579.2019.1625236

La Mendola, S.: Centrato e Aperto: dare vita a interviste dialogiche [Centred and Open: Give life to dialogical interviews]. UTET Università, Torino (2009)

Latour, B.: When things strike back: a possible contribution of ‘science studies’ to the social sciences. Br. J. Sociol. 51 , 107–123 (2000)

Leech, B.L.: Asking questions: Techniques for semistructured interviews. Polit. Sci. Polit. 35 , 665–668 (2002)

Legard, R., Keegan, J., Ward, K.: In-depth interviews. In: Richie, J., Lewis, J. (eds.) Qualitative Research Practice, pp. 139–168. Sage, London (2003)

Legewie, N.: Anchored calibration: from qualitative data to fuzzy sets. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 18 (3), Art. 14 (2017). https://doi.org/10.17169/fqs-18.3.2790

Lucas, S.R., Szatrowski, A.: Qualitative comparative analysis in critical perspective. Sociol. Methodol. 44 (1), 1–79 (2014)

Metelits, C.M.: The consequences of rivalry: explaining insurgent violence using fuzzy sets. Polit. Res. q. 62 , 673–684 (2009)

Pagliarin, S., Hersperger, A.M., Rihoux, B.: Implementation pathways of large-scale urban development projects (lsUDPs) in Western Europe: a qualitative comparative analysis (QCA). Eur. Plan. Stud. 28 , 1242–1263 (2019). https://doi.org/10.1080/09654313.2019.1681942

Ragin, C.C.: The Comparative Method. Moving Beyond Qualitative and Quantitative Strategies. University of California Press, Berkeley and Los Angeles (1987)

Ragin, C.C.: Fuzzy-Set Social Science. University Press, Chicago (2000)

Ragin, C.C.: Redesigning Social Inquiry. Fuzzy Sets and Beyond. University Press, Chicago (2008a)

Ragin, C.C.: Fuzzy sets: calibration versus measurement. In: Collier, D., Brady, H., Box-Steffensmeier, J. (eds.) Methodology Volume of Oxford Handbooks of Political Science, pp. 174–198. University Press, Oxford (2008b)

Ragin, C.C.: Comment: Lucas and Szatrowski in Critical Perspective. Sociol. Methodol. 44 (1), 80–94 (2014)

Rapley, T.J.: The art(fulness) of open-ended interviewing: some considerations on analysing interviews. Qual. Res. 1 (3), 303–323 (2001)

Rihoux, B., Ragin, C. (eds.): Configurational Comparative Methods. Qualitative Comparative Analysis (QCA) and related Techniques. Sage, Thousand Oaks, CA (2009)

Rihoux, B., Lobe, B.: The case for qualitative comparative analysis (QCA): adding leverage for thick cross-case comparison. In: Byrne, D., Ragin, C.C. (eds.) The SAGE Handbook of Case-Based Methods, pp. 222–242. SAGE Publications Inc, London (2009)

Roulston, K.: Qualitative interviewing and epistemics. Qual. Res. 18 (3), 322–341 (2018)

Sandelowski, M., Voils, C.I., Knafl, G.: On quantitizing. J. Mixed Methods Res. 3 , 208–222 (2009)

Sayer, A.: Method in Social Science. A Realist Approach. Routledge, London (1992)

Schneider, C.Q., Wagemann, C.: Set-Theoretic Methods for the Social Sciences. A Guide to Qualitative Comparative Analysis. University Press, Cambridge (2012)

Silverman, D.: How was it for you? The Interview Society and the irresistible rise of the (poorly analyzed) interview. Qual. Res. 17 (2), 144–158 (2017)

Spencer, R., Pryce, J.M., Walsh, J.: Philosophical approaches to qualitative research. In: Leavy, P. (ed.) The Oxford Handbook of Qualitative Research, pp. 81–98. University Press, Oxford (2014)

Spradley, J.P.: The ethnographic interview. Holt Rinehart and Winston, New York (1979)

Stake, R.E.: Qualitative case studies. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 443–466. Sage, Thousand Oaks, CA (2005)

Thomann, E., Maggetti, M.: Designing research with qualitative comparative analysis (QCA): approaches, challenges, and tools. Sociol. Methods Res. 49 (2), 356–386 (2017)

Tóth, Z., Thiesbrummel, C., Henneberg, S.C., Naudé, P.: Understanding configurations of relational attractiveness of the customer firm using fuzzy set QCA. J. Bus. Res. 68 (3), 723–734 (2015)

Tóth, Z., Henneberg, S.C., Naudé, P.: Addressing the ‘qualitative’ in fuzzy set qualitative comparative analysis: the generic membership evaluation template. Ind. Mark. Manage. 63 , 192–204 (2017)

Verweij, S., Gerrits, L.M.: How satisfaction is achieved in the implementation phase of large transportation infrastructure projects: a qualitative comparative analysis into the A2 tunnel project. Public W. Manag. Policy 20 , 5–28 (2015)

Wang, W.: Exploring the determinants of network effectiveness: the case of neighborhood governance networks in Beijing. J. Public Adm. Res. Theory 26 , 375–388 (2016)

Acknowledgements

The authors would like to thank the two reviewers who provided great insights and careful remarks, thus allowing us to improve the quality of the manuscript. During a peer-review process lasting more than 2 years, we intensely felt the pushes and pauses, and at times the impasses, of a fruitful dialogue on the qualitative and quantitative aspects of comparative analysis in the social sciences.

Open Access funding enabled and organized by Projekt DEAL. This research has been partially funded through the Consolidator Grant (ID: BSCGIO 157789), held by Prof. h. c. Dr. Anna M. Hersperger, provided by the Swiss National Science Foundation.

Author information

Authors and Affiliations

Utrecht University School of Governance, Utrecht University, Utrecht, The Netherlands

Barbara Vis

Department of Philosophy, Sociology, Pedagogy and Applied Psychology, Padua University, Padua, Italy

Salvatore La Mendola

Chair for the Governance of Complex and Innovative Technological Systems, Otto-Friedrich-Universität Bamberg, Bamberg, Germany

Sofia Pagliarin

Landscape Ecology Research Unit, CONCUR Project, Swiss Federal Research Institute WSL, Birmensdorf, Zurich, Switzerland

Corresponding author

Correspondence to Sofia Pagliarin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 21 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Pagliarin, S., La Mendola, S. & Vis, B. The “qualitative” in qualitative comparative analysis (QCA): research moves, case-intimacy and face-to-face interviews. Qual Quant 57 , 489–507 (2023). https://doi.org/10.1007/s11135-022-01358-0

Accepted: 20 February 2022

Published: 26 March 2022

Issue Date: February 2023

DOI: https://doi.org/10.1007/s11135-022-01358-0


Keywords: Calibration · Data generation · Interviewing · In-depth knowledge · Qualitative data
  • Open access
  • Published: 07 May 2021

The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions

  • Benjamin Hanckel 1 ,
  • Mark Petticrew 2 ,
  • James Thomas 3 &
  • Judith Green 4  

BMC Public Health volume 21, Article number: 877 (2021)

21k Accesses

42 Citations

35 Altmetric

Qualitative Comparative Analysis (QCA) is a method for identifying the configurations of conditions that lead to specific outcomes. Given its potential for providing evidence of causality in complex systems, QCA is increasingly used in evaluative research to examine the uptake or impacts of public health interventions. We map this emerging field, assessing the strengths and weaknesses of QCA approaches identified in published studies, and identify implications for future research and reporting.

PubMed, Scopus and Web of Science were systematically searched for peer-reviewed studies published in English up to December 2019 that had used QCA methods to identify the conditions associated with the uptake and/or effectiveness of interventions for public health. Data relating to the interventions studied (settings/level of intervention/populations), methods (type of QCA, case level, source of data, other methods used) and reported strengths and weaknesses of QCA were extracted and synthesised narratively.

The search identified 1384 papers, of which 27 (describing 26 studies) met the inclusion criteria. Interventions evaluated ranged across: nutrition/obesity ( n  = 8); physical activity ( n  = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation ( n  = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2); and general population health ( n  = 1). The majority of studies ( n  = 24) were of interventions solely or predominantly in high-income countries. Key strengths reported were that QCA provides a method for addressing causal complexity; and that it provides a systematic approach for understanding the mechanisms at work in implementation across contexts. Weaknesses reported related to data availability limitations, especially on ineffective interventions. The majority of papers demonstrated good knowledge of cases, and justification of case selection, but other criteria of methodological quality were less comprehensively met.

QCA is a promising approach for addressing the role of context in complex interventions, and for identifying causal configurations of conditions that predict implementation and/or outcomes when there is sufficiently detailed understanding of a series of comparable cases. As the use of QCA in evaluative health research increases, there may be a need to develop advice for public health researchers and journals on minimum criteria for quality and reporting.

Interest in the use of Qualitative Comparative Analysis (QCA) arises in part from growing recognition of the need to broaden methodological capacity to address causality in complex systems [ 1 , 2 , 3 ]. Guidance for researchers for evaluating complex interventions suggests process evaluations [ 4 , 5 ] can provide evidence on the mechanisms of change, and the ways in which context affects outcomes. However, this does not address the more fundamental problems with trial and quasi-experimental designs arising from system complexity [ 6 ]. As Byrne notes, the key characteristic of complex systems is ‘emergence’ [ 7 ]: that is, effects may accrue from combinations of components, in contingent ways, which cannot be reduced to any one level. Asking about ‘what works’ in complex systems is not to ask a simple question about whether an intervention has particular effects, but rather to ask: “how the intervention works in relation to all existing components of the system and to other systems and their sub-systems that intersect with the system of interest” [ 7 ]. Public health interventions are typically attempts to effect change in systems that are themselves dynamic; approaches to evaluation are needed that can deal with emergence [ 8 ]. In short, understanding the uptake and impact of interventions requires methods that can account for the complex interplay of intervention conditions and system contexts.

To build a useful evidence base for public health, evaluations thus need to assess not just whether a particular intervention (or component) causes specific change in one variable, in controlled circumstances, but whether those interventions shift systems, and how specific conditions of interventions and setting contexts interact to lead to anticipated outcomes. There have been a number of calls for the development of methods in intervention research to address these issues of complex causation [ 9 , 10 , 11 ], including calls for the greater use of case studies to provide evidence on the important elements of context [ 12 , 13 ]. One approach for addressing causality in complex systems is Qualitative Comparative Analysis (QCA): a systematic way of comparing the outcomes of different combinations of system components and elements of context (‘conditions’) across a series of cases.

The potential of qualitative comparative analysis

QCA is an approach developed by Charles Ragin [ 14 , 15 ], originating in comparative politics and macrosociology to address questions of comparative historical development. Using set theory, QCA methods explore the relationships between ‘conditions’ and ‘outcomes’ by identifying configurations of necessary and sufficient conditions for an outcome. The underlying logic is different from probabilistic reasoning, as the causal relationships identified are not inferred from the (statistical) likelihood of them being found by chance, but rather from comparing sets of conditions and their relationship to outcomes. It is thus more akin to the generative conceptualisations of causality in realist evaluation approaches [ 16 ]. QCA is a non-additive and non-linear method that emphasises diversity, acknowledging that different paths can lead to the same outcome. For evaluative research in complex systems [ 17 ], QCA therefore offers a number of benefits, including: that QCA can identify more than one causal pathway to an outcome (equifinality); that it accounts for conjunctural causation (where the presence or absence of conditions in relation to other conditions might be key); and that it is asymmetric with respect to the success or failure of outcomes. That is, the fact that specific factors explain success does not imply that their absence leads to failure (causal asymmetry).
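To make these properties concrete with purely hypothetical conditions A, B and C and outcome Y, a QCA solution is conventionally written in Boolean notation as, for example:

A*B + ~C → Y

where * denotes AND, + denotes OR, ~ denotes absence, and → denotes sufficiency. The + term expresses equifinality (two alternative paths to Y), the term A*B expresses conjunctural causation (A matters only jointly with B), and causal asymmetry means that the solution for ~Y must be derived in a separate analysis rather than by simply negating the solution for Y.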

QCA was designed, and is typically used, to compare data from a medium N (10–50) series of cases that include those with and those without the (dichotomised) outcome. Conditions can be dichotomised in ‘crisp sets’ (csQCA) or represented in ‘fuzzy sets’ (fsQCA), where set membership is calibrated (either continuously or with cut-offs) between two extremes representing fully in (1) or fully out (0) of the set. A third version, multi-value QCA (mvQCA), infrequently used, represents conditions as ‘multi-value sets’, with multinomial membership [ 18 ]. In calibrating set membership, the researcher specifies the critical qualitative anchors that capture differences in kind (full membership and full non-membership), as well as differences in degree in fuzzy sets (partial membership) [ 15 , 19 ]. Data on outcomes and conditions can come from primary or secondary qualitative and/or quantitative sources. Once data are assembled and coded, truth tables are constructed which “list the logically possible combinations of causal conditions” [ 15 ], collating the number of cases where those configurations occur to see if they share the same outcome. Analysis of these truth tables assesses first whether any conditions are individually necessary or sufficient to predict the outcome, and then whether any configurations of conditions are necessary or sufficient. Necessary conditions are assessed by examining causal conditions shared by cases with the same outcome, whilst identifying sufficient conditions (or combinations of conditions) requires examining cases with the same causal conditions to identify if they have the same outcome [ 15 ]. However, as Legewie argues, the presence of a condition, or of a combination of conditions, in actual datasets is likely to be “‘quasi-necessary’ or ‘quasi-sufficient’ in that the causal relation holds in a great majority of cases, but some cases deviate from this pattern” [ 20 ]. Following reduction of the complexity of the model, the final model is tested for coverage (the degree to which a configuration accounts for instances of an outcome in the empirical cases; the proportion of cases belonging to a particular configuration) and consistency (the degree to which the cases sharing a combination of conditions align with a proposed subset relation). The result is an analysis of complex causation, “defined as a situation in which an outcome may follow from several different combinations of causal conditions” [ 15 ], illuminating the ‘causal recipes’, the causally relevant conditions or configuration of conditions that produce the outcome of interest.
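As an illustration of the crisp-set logic described above, the following minimal sketch (in Python, with an entirely hypothetical eight-case dataset and condition names A, B and C) builds a truth table and computes consistency and coverage for one candidate configuration. It is a toy restatement of the procedure, not an implementation of any particular QCA software package.

```python
from itertools import product

# Hypothetical dataset: 8 cases scored on three dichotomised
# conditions (A, B, C) and one outcome (Y).
cases = [
    # A, B, C, Y
    (1, 1, 0, 1),
    (1, 1, 0, 1),
    (0, 1, 1, 1),
    (0, 1, 1, 0),  # contradicts the case above: same conditions, different outcome
    (1, 0, 0, 0),
    (0, 0, 1, 0),
    (1, 1, 1, 1),
    (0, 0, 0, 0),
]

# Truth table: every logically possible combination of the three
# conditions, with the outcomes of the cases (if any) instantiating it.
truth_table = {config: [] for config in product((0, 1), repeat=3)}
for *conditions, outcome in cases:
    truth_table[tuple(conditions)].append(outcome)

for config, outcomes in sorted(truth_table.items()):
    if not outcomes:
        print(config, "-> no empirical cases (logical remainder)")
    else:
        print(config, f"-> {len(outcomes)} case(s), outcomes {outcomes}")

# Crisp-set consistency and coverage for one candidate sufficient
# configuration, A*B (A present AND B present):
ab_cases = [c for c in cases if c[0] == 1 and c[1] == 1]
consistency = sum(c[3] for c in ab_cases) / len(ab_cases)          # share of A*B cases showing Y
coverage = sum(c[3] for c in ab_cases) / sum(c[3] for c in cases)  # share of Y cases covered by A*B
print(f"A*B: consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```

In this toy dataset the configuration (0, 1, 1) is a ‘contradiction’ (identical conditions, different outcomes), of the kind that, as noted below, must be resolved and reported.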

QCA, then, has promise for addressing questions of complex causation, and recent calls for the greater use of QCA methods have come from a range of fields related to public health, including health research [ 17 ], studies of social interventions [ 7 ], and policy evaluation [ 21 , 22 ]. In making arguments for the use of QCA across these fields, researchers have also indicated some of the considerations that must be taken into account to ensure robust and credible analyses. There is a need, for instance, to ensure that ‘contradictions’, where cases with the same configurations show different outcomes, are resolved and reported [ 15 , 23 , 24 ]. Additionally, researchers must consider the ratio of cases to conditions, and limit the number of conditions relative to the number of cases to ensure the validity of models [ 25 ]. Marx and Dusa, examining crisp set QCA, have provided some guidance on the ‘ceiling’ number of conditions that can be included relative to the number of cases, to increase the probability of models being valid (that is, with a low probability of being generated through random data) [ 26 ].
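The logic behind such ceilings can be sketched as follows. With k dichotomous conditions there are 2^k logically possible configurations, so the configuration space quickly outgrows a medium-N case series. The check below is a deliberately crude illustration of that growth; it does not reproduce Marx and Dusa's published thresholds.

```python
# Crude illustration of why the ratio of conditions to cases matters:
# with k dichotomous conditions there are 2**k logically possible
# configurations. When 2**k approaches or exceeds the number of cases N,
# most configurations have no empirical cases (limited diversity) and the
# risk grows that a consistent 'solution' could arise from random data.
# (Marx and Dusa derive specific case-to-condition thresholds; this
# sketch only shows the configuration-space blow-up.)
def configuration_space(k: int, n_cases: int) -> str:
    space = 2 ** k
    note = "cases outnumber configurations" if n_cases >= space else "limited diversity likely"
    return f"k={k:2d}: {space:4d} possible configurations vs N={n_cases} cases ({note})"

for k in range(3, 11):
    print(configuration_space(k, n_cases=40))
```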

There is now a growing body of published research in public health and related fields drawing on QCA methods. This is therefore a timely point to map the field and assess the potential of QCA as a method for contributing to the evidence base for what works in improving public health. To inform future methodological development of robust methods for addressing complexity in the evaluation of public health interventions, we undertook a systematic review to map existing evidence, identify gaps in, and strengths and weaknesses of, the QCA literature to date, and identify the implications of these for conducting and reporting future QCA studies for public health evaluation. We aimed to address the following specific questions [ 27 ]:

1. How is QCA used for public health evaluation? What populations, settings, methods used in source case studies, unit/s and level of analysis (‘cases’), and ‘conditions’ have been included in QCA studies?

2. What strengths and weaknesses have been identified by researchers who have used QCA to understand complex causation in public health evaluation research?

3. What are the existing gaps in, and strengths and weaknesses of, the QCA literature in public health evaluation, and what implications do these have for future research and reporting of QCA studies for public health?

This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 29 April 2019 ( CRD42019131910 ). A protocol was prepared in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) 2015 statement [ 28 ], and published in 2019 [ 27 ], where the methods are explained in detail. EPPI-Reviewer 4 was used to manage the process and undertake screening of abstracts [ 29 ].

Search strategy

We searched for peer-reviewed published papers in English, which used QCA methods to examine causal complexity in evaluating the implementation, uptake and/or effects of a public health intervention, in any region of the world, for any population. ‘Public health interventions’ were defined as those which aim to promote or protect health, or prevent ill health, in the population. No date exclusions were made, and papers published up to December 2019 were included.

Search strategies used the following phrases “Qualitative Comparative Analysis” and “QCA”, which were combined with the keywords “health”, “public health”, “intervention”, and “wellbeing”. See Additional file  1 for an example. Searches were undertaken on the following databases: PubMed, Web of Science, and Scopus. Additional searches were undertaken on Microsoft Academic and Google Scholar in December 2019, where the first pages of results were checked for studies that may have been missed in the initial search. No additional studies were identified. The list of included studies was sent to experts in QCA methods in health and related fields, including authors of included studies and/or those who had published on QCA methodology. This generated no additional studies within scope, but a suggestion to check the COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) database; this was searched, identifying one further study that met the inclusion criteria [ 30 ]. COMPASSS ( https://compasss.org/ ) collates publications of studies using comparative case analysis.

We excluded studies where no intervention was evaluated, which included studies that used QCA to examine public health infrastructure (i.e. staff training) without a specific health outcome, and papers that report on prevalence of health issues (i.e. prevalence of child mortality). We also excluded studies of health systems or services interventions where there was no public health outcome.

After retrieval, and removal of duplicates, titles and abstracts were screened by one of two authors (BH or JG). Double screening of all records was assisted by EPPI Reviewer 4’s machine learning function. Of the 1384 papers identified after duplicates were removed, we excluded 820 after review of titles and abstracts (Fig.  1 ). The excluded studies included: a large number of papers relating to ‘quantitative coronary angioplasty’ and some which referred to the Queensland Criminal Code (both of which are also abbreviated to ‘QCA’); papers that reported methodological issues but not empirical studies; protocols; and papers that used the phrase ‘qualitative comparative analysis’ to refer to qualitative studies that compared different sub-populations or cases within the study, but did not include formal QCA methods.

Figure 1: Flow diagram

Full texts of the 51 remaining studies were screened by BH and JG for inclusion, with 10 papers double coded by both authors, with complete agreement. Uncertain inclusions were checked by the third author (MP). Of the full texts, 24 were excluded because: they did not report a public health intervention ( n  = 18); had used a methodology inspired by QCA, but had not undertaken a QCA ( n  = 2); were protocols or methodological papers only ( n  = 2); or were not published in peer-reviewed journals ( n  = 2) (see Fig.  1 ).

Data were extracted manually from the 27 remaining full texts by BH and JG. Two papers relating to the same research question and dataset were combined, such that analysis was by study ( n  = 26) not by paper. We retrieved data relating to: publication (journal, first author country affiliation, funding reported); the study setting (country/region setting, population targeted by the intervention(s)); intervention(s) studied; methods (aims, rationale for using QCA, crisp or fuzzy set QCA, other analysis methods used); data sources drawn on for cases (source [primary data, secondary data, published analyses], qualitative/quantitative data, level of analysis, number of cases, final causal conditions included in the analysis); outcome explained; and claims made about strengths and weaknesses of using QCA (see Table  1 ). Data were synthesised narratively, using thematic synthesis methods [ 31 , 32 ], with interventions categorised by public health domain and level of intervention.

Quality assessment

There are no reporting guidelines for QCA studies in public health, but there are a number of discussions of best practice in the methodological literature [ 25 , 26 , 33 , 34 ]. These discussions suggest several criteria for strengthening QCA methods that we used as indicators of methodological and/or reporting quality: evidence of familiarity of cases; justification for selection of cases; discussion and justification of set membership score calibration; reporting of truth tables; reporting and justification of solution formula; and reporting of consistency and coverage measures. For studies using csQCA, and claiming an explanatory analysis, we additionally identified whether the number of cases was sufficient for the number of conditions included in the model, using a pragmatic cut-off in line with Marx & Dusa’s guideline thresholds, which indicate how many cases are sufficient for given numbers of conditions to reject a 10% probability that models could be generated with random data [ 26 ].

Overview of scope of QCA research in public health

Twenty-seven papers reporting 26 studies were included in the review (Table  1 ). The earliest was published in 2005, and 17 were published after 2015. The majority ( n  = 19) were published in public health/health promotion journals, with the remainder published in other health science ( n  = 3) or in social science/management journals ( n  = 4). The public health domain(s) addressed by each study were broadly coded by the main area of focus. They included nutrition/obesity ( n  = 8); physical activity (PA) (n = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation (n = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2); or general population health ( n  = 1). The majority ( n  = 24) of studies were conducted solely or predominantly in high-income countries (systematic reviews in general searched global sources, but commented that the overwhelming majority of studies were from high-income countries). Country settings included: any ( n  = 6); OECD countries ( n  = 3); USA ( n  = 6); UK ( n  = 6) and one each from Nepal, Austria, Belgium, Netherlands and Africa. These largely reflected the first author’s country affiliations in the UK ( n  = 13); USA ( n  = 9); and one each from South Africa, Austria, Belgium, and the Netherlands. All three studies primarily addressing health inequalities [ 35 , 36 , 37 ] were from the UK.

Eight of the interventions evaluated were individual-level behaviour change interventions (e.g. weight management interventions, case management, self-management for chronic conditions); eight evaluated policy/funding interventions; five explored settings-based health promotion/behaviour change interventions (e.g. schools-based physical activity intervention, store-based food choice interventions); three evaluated community empowerment/engagement interventions, and two studies evaluated networks and their impact on health outcomes.

Methods and data sets used

Fifteen studies used crisp sets (csQCA), 11 used fuzzy sets (fsQCA). No study used mvQCA. Eleven studies included additional analyses of the datasets drawn on for the QCA, including six that used qualitative approaches (narrative synthesis, case comparisons), typically to identify cases or conditions for populating the QCA; and four reporting additional statistical analyses (meta-regression, linear regression) to either identify differences overall between cases prior to conducting a QCA (e.g. [ 38 ]) or to explore correlations in more detail (e.g. [ 39 ]). One study used an additional Boolean configurational technique to reduce the number of conditions in the QCA analysis [ 40 ]. No studies reported aiming to compare the findings from the QCA with those from other techniques for evaluating the uptake or effectiveness of interventions, although some [ 41 , 42 ] were explicitly using the study to showcase the possibilities of QCA compared with other approaches in general. Twelve studies drew on primary data collected specifically for the study, with five of those additionally drawing on secondary data sets; five drew only on secondary data sets, and nine used data from systematic reviews of published research. Seven studies drew primarily on qualitative data, generally derived from interviews or observations.

Many studies were undertaken in the context of one or more trials, which provided evidence of effect. Within single trials, this was generally for a process evaluation, with cases being trial sites. Fernald et al.’s study, for instance, was in the context of a trial of a programme to support primary care teams in identifying and implementing self-management support tools for their patients, which measured patient and health care provider level outcomes [ 43 ]. The QCA reported here used qualitative data from the trial to identify a set of necessary conditions for health care provider practices to implement the tools successfully. In studies drawing on data from systematic reviews, cases were always at the level of intervention or intervention component, with data included from multiple trials. Harris et al., for instance, undertook a mixed-methods systematic review of school-based self-management interventions for asthma, using meta-analysis methods to identify effective interventions and QCA methods to identify which intervention features were aligned with success [ 44 ].

The largest number of studies ( n  = 10), including all the systematic reviews, analysed cases at the level of the intervention, or a component of the intervention; seven analysed organisational level cases (e.g. school class, network, primary care practice); five analysed sub-national region level cases (e.g. state, local authority area), and two each analysed country or individual level cases. Sample sizes ranged from 10 to 131, with no study having small N (< 10) sample sizes, four having large N (> 50) sample sizes, and the majority (22) being medium N studies (in the range 10–50).

Rationale for using QCA

Most papers reported a rationale for using QCA that mentioned ‘complexity’ or ‘context’, including: noting that QCA is appropriate for addressing causal complexity or multiple pathways to outcome [ 37 , 43 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]; noting the appropriateness of the method for providing evidence on how context impacts on interventions [ 41 , 50 ]; or the need for a method that addressed causal asymmetry [ 52 ]. Three stated that the QCA was an ‘exploratory’ analysis [ 53 , 54 , 55 ]. In addition to the empirical aims, several papers (e.g. [ 42 , 48 ]) sought to demonstrate the utility of QCA, or to develop QCA methods for health research (e.g. [ 47 ]).

Reported strengths and weaknesses of approach

There was general agreement about the strengths of QCA. Specifically, that it was a useful tool to address complex causality, providing a systematic approach to understand the mechanisms at work in implementation across contexts [ 38 , 39 , 43 , 45 , 46 , 47 , 55 , 56 , 57 ], particularly as they relate to (in)effective intervention implementation [ 44 , 51 ] and the evaluation of interventions [ 58 ], or “where it is not possible to identify linearity between variables of interest and outcomes” [ 49 ]. Authors highlighted the strengths of QCA as providing possibilities for examining complex policy problems [ 37 , 59 ]; for testing existing as well as new theory [ 52 ]; and for identifying aspects of interventions which had not been previously perceived as critical [ 41 ] or which may have been missed when drawing on statistical methods that use, for instance, linear additive models [ 42 ]. The strengths of QCA in terms of providing useful evidence for policy were flagged in a number of studies, particularly where the causal recipes suggested that conventional assumptions about effectiveness were not confirmed. Blackman et al., for instance, in a series of studies exploring why unequal health outcomes had narrowed in some areas of the UK and not others, identified poorer outcomes in settings with ‘better’ contracting [ 35 , 36 , 37 ]; Harting found, contrary to theoretical assumptions about the necessary conditions for successful implementation of public health interventions, that a multisectoral network was not a necessary condition [ 30 ].

Weaknesses reported included the limitations of QCA in general for addressing complexity, as well as specific limitations with either the csQCA or the fsQCA methods employed. One general concern discussed across a number of studies was the problem of limited empirical diversity, which resulted in: limitations in the possible number of conditions included in each study, particularly with small N studies [ 58 ]; missing data on important conditions [ 43 ]; or limited reported diversity (where, for instance, data were drawn from systematic reviews, reflecting publication biases which limit reporting of ineffective interventions) [ 41 ]. Reported methodological limitations in small and intermediate N studies included concerns about the potential that case selection could bias findings [ 37 ].

In terms of potential for addressing causal complexity, the limitations of QCA for identifying unintended consequences, tipping points, and/or feedback loops in complex adaptive systems were noted [ 60 ], as were the potential limitations (especially in csQCA studies) of reducing complex conditions, drawn from detailed qualitative understanding, to binary conditions [ 35 ]. The impossibility of doing this was a rationale for using fsQCA in one study [ 57 ], where detailed knowledge of conditions is needed to make theoretically justified calibration decisions. However, others [ 47 ] make the case that csQCA provides more appropriate findings for policy: dichotomisation forces a focus on meaningful distinctions, including those related to decisions that practitioners/policy makers can action. There is, then, a potential trade-off in providing ‘interpretable results’, but ones which preclude potential for utilising more detailed information [ 45 ]. That QCA does not deal with probabilistic causation was noted [ 47 ].
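For fuzzy sets, the calibration decisions referred to here are often operationalised with Ragin’s ‘direct method’, which maps raw interval-scale values onto membership scores via the three qualitative anchors. The sketch below shows one common way this mapping is implemented; the anchor values and the ‘intervention intensity’ measure are hypothetical, and, as the studies reviewed stress, real anchors should be theoretically justified.

```python
import math

def calibrate(raw: float, non_member: float, crossover: float, member: float) -> float:
    """Direct-method fuzzy calibration as commonly implemented.

    Log odds of +3 / -3 are conventionally attached to the full-membership
    and full-non-membership anchors (membership scores of roughly 0.95 and
    0.05); the crossover anchor corresponds to maximum ambiguity (0.5).
    """
    if raw >= crossover:
        log_odds = 3.0 * (raw - crossover) / (member - crossover)
    else:
        log_odds = -3.0 * (crossover - raw) / (crossover - non_member)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical example: calibrating intervention intensity (sessions per
# month) into the fuzzy set 'intensive intervention'.
for sessions in (1, 4, 8, 12, 20):
    score = calibrate(sessions, non_member=2.0, crossover=6.0, member=12.0)
    print(f"{sessions:2d} sessions/month -> membership {score:.2f}")
```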

Quality of published studies

Assessment of ‘familiarity with cases’ was made subjectively on the basis of study authors’ reports of their knowledge of the settings (empirical or theoretical) and the descriptions they provided in the published paper: overall, 14 were judged as sufficient, and 12 less than sufficient. Studies which included primary data were more likely to be judged as demonstrating familiarity ( n  = 10) than those drawing on secondary sources or systematic reviews, of which only two were judged as demonstrating familiarity. All studies justified how the selection of cases had been made; for those not using the full available population of cases, this was in general (appropriately) done theoretically: following previous research [ 52 ]; purposively to include a range of positive and negative outcomes [ 41 ]; or to include a diversity of cases [ 58 ]. In identifying conditions leading to effective/not effective interventions, one purposive strategy was to include a specified percentage or number of the most effective and least effective interventions (e.g. [ 36 , 40 , 51 , 52 ]). Discussion of calibration of set membership scores was judged adequate in 15 cases and inadequate in 11. The majority ( n  = 21) reported truth tables in the paper or supplementary material (or explicitly provided details of how to obtain them), and reported at least some detail on coverage (the number of cases with a particular configuration) and consistency (the percentage of similar causal configurations which result in the same outcome); fewer ( n  = 10) reported raw data matrices. Only five studies met all six of these quality criteria (evidence of familiarity with cases, justification of case selection, discussion of calibration, reporting truth tables, reporting raw data matrices, reporting coverage and consistency); a further six met at least five of them.

Of the csQCA studies which were not reporting an exploratory analysis, four appeared to have insufficient cases for the large number of conditions entered into at least one of the models reported, with a consequent risk to the validity of the QCA models [ 26 ].

QCA has been widely used in public health research over the last decade to advance understanding of causal inference in complex systems. In this review of published evidence to date, we have identified studies using QCA to examine the configurations of conditions that lead to particular outcomes across contexts. As noted by most study authors, QCA methods promise advantages over probabilistic statistical techniques for examining causation where systems and/or interventions are complex, providing public health researchers with a method to test the multiple pathways (configurations of conditions) and the necessary and sufficient conditions that lead to desired health outcomes.

The origins of QCA approaches are in comparative policy studies. Rihoux et al.’s review of peer-reviewed journal articles using QCA methods published up to 2011 found the majority of published examples were from political science and sociology, with fewer than 5% of the 313 studies they identified coming from health sciences [ 61 ]. They also reported few examples of the method being used in policy evaluation and implementation studies [ 62 ]. In the decade since their review of the field [ 61 ], there has been an emerging body of evaluative work in health: we identified 26 studies in the field of public health alone, with the majority published in public health journals. Across these studies, QCA has been used for evaluative questions in a range of settings and public health domains to identify the conditions under which interventions are implemented and/or have evidence of effect for improving population health. All studies included a series of cases that included some with and some without the outcome of interest (such as behaviour change, successful programme implementation, or good vaccination uptake). The dominance of high-income countries in both intervention settings and author affiliations is disappointing, but reflects the disproportionate location of public health research in the global north more generally [ 63 ].

The largest single group of studies included were systematic reviews, using QCA to compare interventions (or intervention components) to identify successful (and non-successful) configurations of conditions across contexts. Here, the value of QCA lies in its potential for synthesis with quantitative meta-synthesis methods to identify the particular conditions or contexts in which interventions or components are effective. As Parrott et al. note, for instance, their meta-analysis could identify probabilistic effects of weight management programmes, and the QCA analysis enabled them to address the “role that the context of the [paediatric weight management] intervention has in influencing how, when, and for whom an intervention mix will be successful” [ 50 ]. However, using QCA to identify configurations of conditions that lead to effective or non- effective interventions across particular areas of population health is an application that does move away in some significant respects from the origins of the method. First, researchers drawing on evidence from systematic reviews for their data are reliant largely on published evidence for information on conditions (such as the organisational contexts in which interventions were implemented, or the types of behaviour change theory utilised). Although guidance for describing interventions [ 64 ] advises key aspects of context are included in reports, this may not include data on the full range of conditions that might be causally important, and review research teams may have limited knowledge of these ‘cases’ themselves. Second, less successful interventions are less likely to be published, potentially limiting the diversity of cases, particularly of cases with unsuccessful outcomes. A strength of QCA is the separate analysis of conditions leading to positive and negative outcomes: this is precluded where there is insufficient evidence on negative outcomes [ 50 ]. Third, when including a range of types of intervention, it can be unclear whether the cases included are truly comparable. A QCA study requires a high degree of theoretical and pragmatic case knowledge on the part of the researcher to calibrate conditions to qualitative anchors: it is reliant on deep understanding of complex contexts, and a familiarity with how conditions interact within and across contexts. Perhaps surprising is that only seven of the studies included here clearly drew on qualitative data, given that QCA is primarily seen as a method that requires thick, detailed knowledge of cases, particularly when the aim is to understand complex causation [ 8 ]. Whilst research teams conducting QCA in the context of systematic reviews may have detailed understanding in general of interventions within their spheres of expertise, they are unlikely to have this for the whole range of cases, particularly where a diverse set of contexts (countries, organisational settings) are included. Making a theoretical case for the valid comparability of such a case series is crucial. There may, then, be limitations in the portability of QCA methods for conducting studies entirely reliant on data from published evidence.

QCA was developed for small and medium N series of cases, and (as in the field more broadly, [ 61 ]), the samples in our studies predominantly had between 10 and 50 cases. However, there is increasing interest in the method as an alternative or complementary technique to regression-oriented statistical methods for larger samples [ 65 ], such as from surveys, where detailed knowledge of cases is likely to be replaced by theoretical knowledge of relationships between conditions (see [ 23 ]). The two larger N (> 100 cases) studies in our sample were an individual level analysis of survey data [ 46 , 47 ] and an analysis of intervention arms from a systematic review [ 50 ]. Larger sample sizes allow more conditions to be included in the analysis [ 23 , 26 ], although for evaluative research, where the aim is developing a causal explanation, rather than simply exploring patterns, there remains a limit to the number of conditions that can be included. As the number of conditions included increases, so too does the number of possible configurations, increasing the chance of unique combinations and of generating spurious solutions with a high level of consistency. As a rule of thumb, once the number of conditions exceeds 6–8 (with up to 50 cases) or 10 (for larger samples), the credibility of solutions may be severely compromised [ 23 ].

Strengths and weaknesses of the study

A systematic review has the potential advantages of transparency and rigour and, if not exhaustive, our search is likely to be representative of the body of research using QCA for evaluative public health research up to 2020. However, a limitation is the inevitable difficulty in operationalising a ‘public health’ intervention. Exclusions on scope are not straightforward, given that most social, environmental and political conditions impact on public health, and arguably a greater range of policy and social interventions (such as fiscal or trade policies) that have been the subject of QCA analyses could have been included, or a greater range of more clinical interventions. However, to enable a manageable number of papers to review, and restrict our focus to those papers that were most directly applicable to (and likely to be read by) those in public health policy and practice, we operationalised ‘public health interventions’ as those which were likely to be directly impacting on population health outcomes, or on behaviours (such as increased physical activity) where there was good evidence for causal relationships with public health outcomes, and where the primary research question of the study examined the conditions leading to those outcomes. This review has, of necessity, therefore excluded a considerable body of evidence likely to be useful for public health practice in terms of planning interventions, such as studies on how to better target smoking cessation [ 66 ] or foster social networks [ 67 ] where the primary research question was on conditions leading to these outcomes, rather than on conditions for outcomes of specific interventions. Similarly, there is a growing number of descriptive epidemiological studies using QCA to explore factors predicting outcomes across such diverse areas as lupus and quality of life [ 68 ]; length of hospital stay [ 69 ]; constellations of factors predicting injury [ 70 ]; or the role of austerity, crisis and recession in predicting public health outcomes [ 71 ]. Whilst there is undoubtedly useful information to be derived from studying the conditions that lead to particular public health problems, these studies were not directly evaluating interventions, so they were also excluded.

Restricting our search to publications in English and to peer reviewed publications may have missed bodies of work from many regions, and has excluded research from non-governmental organisations using QCA methods in evaluation. As this is a rapidly evolving field, with relatively recent uptake in public health (all our included studies were after 2005), our studies may not reflect the most recent advances in the area.

Implications for conducting and reporting QCA studies

This systematic review has reviewed studies that deployed an emergent methodology, which has no reporting guidelines and has had, to date, a relatively low level of awareness among many potential evidence users in public health. For this reason, many of the studies reviewed were relatively detailed on the methods used, and the rationale for utilising QCA.

We did not assess quality directly, but used indicators of good practice discussed in QCA methodological literature, largely written for policy studies scholars, and often post-dating the publication dates of studies included in this review. It is also worth noting that, given the relatively recent development of QCA methods, methodological debate is still thriving on issues such as the reliability of causal inferences [ 72 ], alongside more general critiques of the usefulness of the method for policy decisions (see, for instance, [ 73 ]). The authors of studies included in this review also commented directly on methodological development: for instance, Thomas et al. suggests that QCA may benefit from methods development for sensitivity analyses around calibration decisions [ 42 ].

However, we selected quality criteria that, we argue, are relevant for public health research. Justifying the selection of cases, discussing and justifying the calibration of set membership, making data sets available, and reporting truth tables, consistency and coverage are all good practice in line with the usual requirements of transparency and credibility in methods. When QCA studies aim to provide explanation of outcomes (rather than exploring configurations), it is also vital that they are reported in ways that enhance the credibility of claims made, including justifying the number of conditions included relative to cases. Few of the studies published to date met all these criteria, at least in the papers included here (although additional material may have been provided in other publications). To improve the future discoverability and uptake of QCA methods in public health, and to strengthen the credibility of findings from these methods, we therefore suggest the following criteria should be considered by authors and reviewers for reporting QCA studies which aim to provide causal evidence about the configurations of conditions that lead to implementation or outcomes:

The paper title and abstract state the QCA design;

The sampling unit for the ‘case’ is clearly defined (e.g.: patient, specified geographical population, ward, hospital, network, policy, country);

The population from which the cases have been selected is defined (e.g.: all patients in a country with X condition, districts in X country, tertiary hospitals, all hospitals in X country, all health promotion networks in X province, European policies on smoking in outdoor places, OECD countries);

The rationale for selection of cases from the population is justified (e.g.: whole population, random selection, purposive sample);

There are sufficient cases to provide credible coverage across the number of conditions included in the model, and the rationale for the number of conditions included is stated;

Cases are comparable;

There is a clear justification for how choices of relevant conditions (or ‘aspects of context’) have been made;

There is sufficient transparency for replicability: in line with open science expectations, datasets should be available where possible; truth tables should be reported in publications, and reports of coverage and consistency provided.

Implications for future research

In reviewing methods for evaluating natural experiments, Craig et al. focus on statistical techniques for enhancing causal inference, noting only that what they call ‘qualitative’ techniques (the cited references for these are all QCA studies) require “further studies … to establish their validity and usefulness” [ 2 ]. The studies included in this review have demonstrated that QCA is a feasible method when there are sufficient (comparable) cases for identifying configurations of conditions under which interventions are effective (or not), or are implemented (or not). Given ongoing concerns in public health about how best to evaluate interventions across complex contexts and systems, this is promising. This review has also demonstrated the value of adding QCA methods to the tool box of techniques for evaluating interventions such as public policies, health promotion programmes, and organisational changes - whether they are implemented in a randomised way or not. Many of the studies in this review have clearly generated useful evidence: whether this evidence has had more or less impact, in terms of influencing practice and policy, or is more valid, than evidence generated by other methods is not known. Validating the findings of a QCA study is perhaps as challenging as validating the findings from any other design, given the absence of any gold standard comparators. Comparisons of the findings of QCA with those from other methods are also typically constrained by the rather different research questions asked, and the different purposes of the analysis. In our review, QCA were typically used alongside other methods to address different questions, rather than to compare methods. However, as the field develops, follow up studies, which evaluate outcomes of interventions designed in line with conditions identified as causal in prior QCAs, might be useful for contributing to validation.

This review was limited to public health evaluation research: other domains that would be useful to map include health systems/services interventions and studies used to design or target interventions. There is also an opportunity to broaden the scope of the field, particularly for addressing some of the more intractable challenges for public health research. Given the limitations in the evidence base on what works to address inequalities in health, for instance [ 74 ], QCA has potential here, to help identify the conditions under which interventions do or do not exacerbate unequal outcomes, or the conditions that lead to differential uptake or impacts across sub-population groups. It is perhaps surprising that relatively few of the studies in this review included cases at the level of country or region, the traditional level for QCA studies. There may be scope for developing international comparisons for public health policy, and using QCA methods at the case level (nation, sub-national region) of classic policy studies in the field. In the light of debate around COVID-19 pandemic response effectiveness, comparative studies across jurisdictions might shed light on issues such as differential population responses to vaccine uptake or mask use, for example, and these might in turn be considered as conditions in causal configurations leading to differential morbidity or mortality outcomes.

When should QCA be considered?

Public health evaluations typically assess the efficacy, effectiveness or cost-effectiveness of interventions and the processes and mechanisms through which they effect change. There is no perfect evaluation design for achieving these aims. As in other fields, the choice of design will in part depend on the availability of counterfactuals, the extent to which the investigator can control the intervention, and the range of potential cases and contexts [ 75 ], as well as political considerations, such as the credibility of the approach with key stakeholders [ 76 ]. There are inevitably ‘horses for courses’ [ 77 ]. The evidence from this review suggests that QCA evaluation approaches are feasible when there is a sufficient number of comparable cases with and without the outcome of interest, and when the investigators have, or can generate, sufficiently in-depth understanding of those cases to make sense of connections between conditions, and to make credible decisions about the calibration of set membership. QCA may be particularly relevant for understanding multiple causation (that is, where different configurations might lead to the same outcome), and for understanding the conditions associated with both lack of effect and effect. As a stand-alone approach, QCA might be particularly valuable for national and regional comparative studies of the impact of policies on public health outcomes. Alongside cluster randomised trials of interventions, or alongside systematic reviews, QCA approaches are especially useful for identifying core combinations of causal conditions for success and lack of success in implementation and outcome.

Conclusions

QCA is a relatively new approach for public health research, with promise for contributing to much-needed methodological development for addressing causation in complex systems. This review has demonstrated the large range of evaluation questions that have been addressed to date using QCA, including contributions to process evaluations of trials and explorations of the conditions leading to effectiveness (or not) in systematic reviews of interventions. There is potential for QCA to be more widely used in evaluative research, to identify the conditions under which interventions across contexts are implemented or not, and the configurations of conditions associated with effect or with lack of evidence of effect. However, QCA will not be appropriate for all evaluations, and cannot be the only answer to addressing complex causality. For explanatory questions, the approach is most appropriate when there is a sufficient number of comparable cases with and without the outcome of interest, and where the researchers have a detailed understanding of those cases and conditions. To improve the credibility of QCA findings for public health evidence users, we recommend that studies are reported with the usual attention to methodological transparency and data availability, including the key details that allow readers to judge the credibility of the causal configurations reported. If the use of QCA continues to expand, it may be useful to develop more comprehensive consensus guidelines for its conduct and reporting.
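Among the key details that allow readers to judge reported configurations are the standard set-theoretic fit measures: consistency (the degree to which cases exhibiting a configuration also exhibit the outcome) and coverage (the share of the outcome accounted for by that configuration). The sketch below shows how these are computed for a single configuration; the condition names and membership scores are invented for illustration and do not come from any study in this review.

```python
def configuration_membership(*condition_scores):
    """Fuzzy-set membership in a configuration (logical AND) is the
    minimum of the memberships in its component conditions."""
    return [min(scores) for scores in zip(*condition_scores)]

def consistency(x, y):
    """Degree to which X is a subset of outcome Y: sum(min(x,y)) / sum(x)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of the outcome Y accounted for by X: sum(min(x,y)) / sum(y)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Invented membership scores for five cases: two conditions and an outcome.
engaged_leadership = [0.9, 0.8, 0.3, 0.6, 0.1]
adequate_funding   = [0.7, 0.9, 0.4, 0.8, 0.2]
implemented        = [0.8, 0.9, 0.2, 0.7, 0.3]

config = configuration_membership(engaged_leadership, adequate_funding)
print(f"consistency={consistency(config, implemented):.2f}, "
      f"coverage={coverage(config, implemented):.2f}")
```

In published applications these measures are reported per configuration and for the overall solution: consistency close to 1 supports the claimed subset relation, while coverage indicates its empirical relevance.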

Availability of data and materials

Full search strategies and extraction forms are available by request from the first author.

Abbreviations

COMPASSS: Comparative Methods for Systematic Cross-Case Analysis
csQCA: crisp set QCA
fsQCA: fuzzy set QCA
mvQCA: multi-value QCA
MRC: Medical Research Council
QCA: Qualitative Comparative Analysis
RCT: randomised control trial
PA: Physical Activity

References

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406. https://doi.org/10.1177/1356389015605205 .


Craig P, Katikireddi SV, Leyland A, Popham F. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38(1):39–56. https://doi.org/10.1146/annurev-publhealth-031816-044327 .


Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3. https://doi.org/10.1136/bmj.39569.510521.AD .

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350(mar19 6):h1258. https://doi.org/10.1136/bmj.h1258 .

Pattyn V, Álamos-Concha P, Cambré B, Rihoux B, Schalembier B. Policy effectiveness through configurational and mechanistic lenses: lessons for concept development. J Comp Policy Anal Res Pract. 2020;0:1–18.


Byrne D. Evaluating complex social interventions in a complex world. Evaluation. 2013;19(3):217–28. https://doi.org/10.1177/1356389013495617 .

Gerrits L, Pagliarin S. Social and causal complexity in qualitative comparative analysis (QCA): strategies to account for emergence. Int J Soc Res Methodol. 2020;0:1–14. https://doi.org/10.1080/13645579.2020.1799636 .

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32. https://doi.org/10.1080/09581596.2017.1282603 .

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4. https://doi.org/10.1016/S0140-6736(17)31267-9 .


Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95. https://doi.org/10.1186/s12916-018-1089-4 .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E and White M, on behalf of the Canadian Institutes of Health Research (CIHR)–National Institute for Health Research (NIHR) Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301. https://doi.org/10.1186/s12916-020-01777-6 .

Ragin CC. The comparative method: moving beyond qualitative and quantitative strategies. Berkeley: University of California Press; 1987.

Ragin CC. Redesigning social inquiry: fuzzy sets and beyond. Chicago: The University of Chicago Press; 2008. https://doi.org/10.7208/chicago/9780226702797.001.0001 .


Befani B, Ledermann S, Sager F. Realistic evaluation and QCA: conceptual parallels and an empirical application. Evaluation. 2007;13(2):171–92. https://doi.org/10.1177/1356389007075222 .

Kane H, Lewis MA, Williams PA, Kahwati LC. Using qualitative comparative analysis to understand and quantify translation and implementation. Transl Behav Med. 2014;4(2):201–8. https://doi.org/10.1007/s13142-014-0251-6 .

Cronqvist L, Berg-Schlosser D. Multi-value QCA (mvQCA). In: Rihoux B, Ragin C, editors. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. Thousand Oaks: SAGE; 2009. p. 69–86. https://doi.org/10.4135/9781452226569 .

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225–39.


Legewie N. An introduction to applied data analysis with qualitative comparative analysis (QCA). Forum Qual Soc Res. 2013;14.  https://doi.org/10.17169/fqs-14.3.1961 .

Varone F, Rihoux B, Marx A. A new method for policy evaluation? In: Rihoux B, Grimm H, editors. Innovative comparative methods for policy analysis: beyond the quantitative-qualitative divide. Boston: Springer US; 2006. p. 213–36. https://doi.org/10.1007/0-387-28829-5_10 .


Gerrits L, Verweij S. The evaluation of complex infrastructure projects: a guide to qualitative comparative analysis. Cheltenham: Edward Elgar Pub; 2018. https://doi.org/10.4337/9781783478422 .

Greckhamer T, Misangyi VF, Fiss PC. The two QCAs: from a small-N to a large-N set theoretic approach. In: Configurational Theory and Methods in Organizational Research. Emerald Group Publishing Ltd.; 2013. p. 49–75. https://pennstate.pure.elsevier.com/en/publications/the-two-qcas-from-a-small-n-to-a-large-n-set-theoretic-approach . Accessed 16 Apr 2021.

Rihoux B, Ragin CC. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. Thousand Oaks: SAGE; 2009. https://doi.org/10.4135/9781452226569 .

Marx A. Crisp-set qualitative comparative analysis (csQCA) and model specification: benchmarks for future csQCA applications. Int J Mult Res Approaches. 2010;4(2):138–58. https://doi.org/10.5172/mra.2010.4.2.138 .

Marx A, Dusa A. Crisp-set qualitative comparative analysis (csQCA), contradictions and consistency benchmarks for model specification. Methodol Innov Online. 2011;6(2):103–48. https://doi.org/10.4256/mio.2010.0037 .

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252. https://doi.org/10.1186/s13643-019-1159-5 .

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349(1):g7647. https://doi.org/10.1136/bmj.g7647 .

EPPI-Reviewer 4.0: Software for research synthesis. UK: University College London; 2010.

Harting J, Peters D, Grêaux K, van Assema P, Verweij S, Stronks K, et al. Implementing multiple intervention strategies in Dutch public health-related policy networks. Health Promot Int. 2019;34(2):193–203. https://doi.org/10.1093/heapro/dax067 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1):45. https://doi.org/10.1186/1471-2288-8-45 .

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC methods Programme. 2006.

Wagemann C, Schneider CQ. Qualitative comparative analysis (QCA) and fuzzy-sets: agenda for a research approach and a data analysis technique. Comp Sociol. 2010;9:376–96.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. https://doi.org/10.1017/CBO9781139004244 .

Blackman T, Dunstan K. Qualitative comparative analysis and health inequalities: investigating reasons for differential progress with narrowing local gaps in mortality. J Soc Policy. 2010;39(3):359–73. https://doi.org/10.1017/S0047279409990675 .

Blackman T, Wistow J, Byrne D. A qualitative comparative analysis of factors associated with trends in narrowing health inequalities in England. Soc Sci Med. 2011;72:1965–74.

Blackman T, Wistow J, Byrne D. Using qualitative comparative analysis to understand complex policy problems. Evaluation. 2013;19(2):126–40. https://doi.org/10.1177/1356389013484203 .

Glatman-Freedman A, Cohen M-L, Nichols KA, Porges RF, Saludes IR, Steffens K, et al. Factors affecting the introduction of new vaccines to poor nations: a comparative study of the haemophilus influenzae type B and hepatitis B vaccines. PLoS One. 2010;5(11):e13802. https://doi.org/10.1371/journal.pone.0013802 .


Ford EW, Duncan WJ, Ginter PM. Health departments’ implementation of public health’s core functions: an assessment of health impacts. Public Health. 2005;119(1):11–21. https://doi.org/10.1016/j.puhe.2004.03.002 .


Lucidarme S, Cardon G, Willem A. A comparative study of health promotion networks: configurations of determinants for network effectiveness. Public Manag Rev. 2016;18(8):1163–217. https://doi.org/10.1080/14719037.2015.1088567 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: re-analysis of a systematic review to identify pathways to effectiveness. Health Expect. 2018;21:574–84.


Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3(1):67. https://doi.org/10.1186/2046-4053-3-67 .

Fernald DH, Simpson MJ, Nease DE, Hahn DL, Hoffmann AE, Michaels LC, et al. Implementing community-created self-management support tools in primary care practices: multimethod analysis from the INSTTEPP study. J Patient-Centered Res Rev. 2018;5(4):267–75. https://doi.org/10.17294/2330-0698.1634 .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, et al. Best practices in the Veterans Health Administration’s MOVE! weight management program. Am J Prev Med. 2011;41(5):457–64. https://doi.org/10.1016/j.amepre.2011.06.047 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) to evaluate a public health policy initiative in the north east of England. Polic Soc. 2013;32(4):289–301. https://doi.org/10.1016/j.polsoc.2013.10.002 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) in public health: a case study of a health improvement service for long-term incapacity benefit recipients. J Public Health. 2014;36(1):126–33. https://doi.org/10.1093/pubmed/fdt047 .


Brunton G, O’Mara-Eves A, Thomas J. The “active ingredients” for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis. J Adv Nurs. 2014;70(12):2847–60. https://doi.org/10.1111/jan.12441 .

McGowan VJ, Wistow J, Lewis SJ, Popay J, Bambra C. Pathways to mental health improvement in a community-led area-based empowerment initiative: evidence from the Big Local ‘communities in control’ study, England. J Public Health. 2019;41(4):850–7. https://doi.org/10.1093/pubmed/fdy192 .

Parrott JS, Henry B, Thompson KL, Ziegler J, Handu D. Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management. J Acad Nutr Diet. 2018;118:1526–1542.e3.

Kien C, Grillich L, Nussbaumer-Streit B, Schoberberger R. Pathways leading to success and non-success: a process evaluation of a cluster randomized physical activity health promotion program applying fuzzy-set qualitative comparative analysis. BMC Public Health. 2018;18(1):1386. https://doi.org/10.1186/s12889-018-6284-x .

Lubold AM. The effect of family policies and public health initiatives on breastfeeding initiation among 18 high-income countries: a qualitative comparative analysis research design. Int Breastfeed J. 2017;12(1):34. https://doi.org/10.1186/s13006-017-0122-0 .

Bianchi F, Garnett E, Dorsel C, Aveyard P, Jebb SA. Restructuring physical micro-environments to reduce the demand for meat: a systematic review and qualitative comparative analysis. Lancet Planet Health. 2018;2(9):e384–97. https://doi.org/10.1016/S2542-5196(18)30188-8 .

Bianchi F, Dorsel C, Garnett E, Aveyard P, Jebb SA. Interventions targeting conscious determinants of human behaviour to reduce the demand for meat: a systematic review with qualitative comparative analysis. Int J Behav Nutr Phys Act. 2018;15(1):102. https://doi.org/10.1186/s12966-018-0729-6 .

Hartmann-Boyce J, Bianchi F, Piernas C, Payne Riches S, Frie K, Nourse R, et al. Grocery store interventions to change food purchasing behaviors: a systematic review of randomized controlled trials. Am J Clin Nutr. 2018;107(6):1004–16. https://doi.org/10.1093/ajcn/nqy045 .

Burchett HED, Sutcliffe K, Melendez-Torres GJ, Rees R, Thomas J. Lifestyle weight management programmes for children: a systematic review using qualitative comparative analysis to identify critical pathways to effectiveness. Prev Med. 2018;106:1–12. https://doi.org/10.1016/j.ypmed.2017.08.025 .

Chiappone A. Technical assistance and changes in nutrition and physical activity practices in the National Early Care and education learning Collaboratives project, 2015–2016. Prev Chronic Dis. 2018;15. https://doi.org/10.5888/pcd15.170239 .

Kane H, Hinnant L, Day K, Council M, Tzeng J, Soler R, et al. Pathways to program success: a qualitative comparative analysis (QCA) of Communities Putting Prevention to Work case study programs. J Public Health Manag Pract. 2017;23(2):104–11. https://doi.org/10.1097/PHH.0000000000000449 .

Roberts MC, Murphy T, Moss JL, Wheldon CW, Psek W. A qualitative comparative analysis of combined state health policies related to human papillomavirus vaccine uptake in the United States. Am J Public Health. 2018;108(4):493–9. https://doi.org/10.2105/AJPH.2017.304263 .

Breuer E, Subba P, Luitel N, Jordans M, Silva MD, Marchal B, et al. Using qualitative comparative analysis and theory of change to unravel the effects of a mental health intervention on service utilisation in Nepal. BMJ Glob Health. 2018;3(6):e001023. https://doi.org/10.1136/bmjgh-2018-001023 .

Rihoux B, Álamos-Concha P, Bol D, Marx A, Rezsöhazy I. From niche to mainstream method? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011. Polit Res Q. 2013;66:175–84.

Rihoux B, Rezsöhazy I, Bol D. Qualitative comparative analysis (QCA) in public policy analysis: an extensive review. Ger Policy Stud. 2011;7:9–82.

Plancikova D, Duric P, O’May F. High-income countries remain overrepresented in highly ranked public health journals: a descriptive analysis of research settings and authorship affiliations. Crit Public Health. 2020;0:1–7. https://doi.org/10.1080/09581596.2020.1722313 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348(mar07 3):g1687. https://doi.org/10.1136/bmj.g1687 .

Fiss PC, Sharapov D, Cronqvist L. Opposites attract? Opportunities and challenges for integrating large-N QCA and econometric analysis. Polit Res Q. 2013;66:191–8.

Blackman T. Can smoking cessation services be better targeted to tackle health inequalities? Evidence from a cross-sectional study. Health Educ J. 2008;67(2):91–101. https://doi.org/10.1177/0017896908089388 .

Haynes P, Banks L, Hill M. Social networks amongst older people in OECD countries: a qualitative comparative analysis. J Int Comp Soc Policy. 2013;29(1):15–27. https://doi.org/10.1080/21699763.2013.802988 .

Rioja EC, Valero-Moreno S, Giménez-Espert MC, Prado-Gascó V. The relations of quality of life in patients with lupus erythematosus: regression models versus qualitative comparative analysis. J Adv Nurs. 2019;75(7):1484–92. https://doi.org/10.1111/jan.13957 .

Dy SM, Garg P, Nyberg D, Dawson PB, Pronovost PJ, Morlock L, et al. Critical pathway effectiveness: assessing the impact of patient, hospital care, and pathway characteristics using qualitative comparative analysis. Health Serv Res. 2005;40(2):499–516. https://doi.org/10.1111/j.1475-6773.2005.0r370.x .

Melinder KA, Andersson R. The impact of structural factors on the injury rate in different European countries. Eur J Pub Health. 2001;11(3):301–8. https://doi.org/10.1093/eurpub/11.3.301 .

Saltkjel T, Holm Ingelsrud M, Dahl E, Halvorsen K. A fuzzy set approach to economic crisis, austerity and public health. Part II: How are configurations of crisis and austerity related to changes in population health across Europe? Scand J Public Health. 2017;45(18_suppl):48–55.

Baumgartner M, Thiem A. Often trusted but never (properly) tested: evaluating qualitative comparative analysis. Sociol Methods Res. 2020;49(2):279–311. https://doi.org/10.1177/0049124117701487 .

Tanner S. QCA is of questionable value for policy research. Polic Soc. 2014;33(3):287–98. https://doi.org/10.1016/j.polsoc.2014.08.003 .

Mackenbach JP. Tackling inequalities in health: the need for building a systematic evidence base. J Epidemiol Community Health. 2003;57(3):162. https://doi.org/10.1136/jech.57.3.162 .

Stern E, Stame N, Mayne J, Forss K, Davies R, Befani B. Broadening the range of designs and methods for impact evaluations. Technical report. London: DfiD; 2012.

Pattyn V. Towards appropriate impact evaluation methods. Eur J Dev Res. 2019;31(2):174–9. https://doi.org/10.1057/s41287-019-00202-w .

Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003;57(7):527–9. https://doi.org/10.1136/jech.57.7.527 .


Acknowledgements

The authors would like to thank and acknowledge the support of Sara Shaw, PI of MR/S014632/1 and the rest of the Triple C project team, the experts who were consulted on the final list of included studies, and the reviewers who provided helpful feedback on the original submission.

Funding

This study was funded by MRC: MR/S014632/1 ‘Case study, context and complex interventions (Triple C): development of guidance and publication standards to support case study research’. The funder played no part in the conduct or reporting of the study. JG is supported by a Wellcome Trust Centre grant 203109/Z/16/Z.

Author information

Authors and Affiliations

Institute for Culture and Society, Western Sydney University, Sydney, Australia

Benjamin Hanckel

Department of Public Health, Environments and Society, LSHTM, London, UK

Mark Petticrew

UCL Institute of Education, University College London, London, UK

James Thomas

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green


Contributions

BH - research design, data acquisition, data extraction and coding, data interpretation, paper drafting; JT – research design, data interpretation, contributing to paper; MP – funding acquisition, research design, data interpretation, contributing to paper; JG – funding acquisition, research design, data extraction and coding, data interpretation, paper drafting. All authors approved the final version.

Corresponding author

Correspondence to Judith Green.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

All authors declare they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Example search strategy.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Hanckel, B., Petticrew, M., Thomas, J. et al. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health 21 , 877 (2021). https://doi.org/10.1186/s12889-021-10926-2

Download citation

Received: 03 February 2021

Accepted: 22 April 2021

Published: 07 May 2021

DOI: https://doi.org/10.1186/s12889-021-10926-2

Keywords

  • Public health
  • Intervention
  • Systematic review

