Suits research exploring:
• Changing behaviours within health contexts to address patient and carer practices
• Changing behaviours regarding environmental concerns
• Barriers and enablers to behaviour/practice/implementation
• Intervention planning and implementation
• Post-evaluation
• Promoting physical activity
As discussed in Chapter 3, qualitative research is not an absolute science. While not all research may need a framework or theory (particularly descriptive studies, outlined in Chapter 5), the use of a framework or theory can help to position the research questions, research processes and conclusions and implications within the relevant research paradigm. Theories and frameworks also help to bring to focus areas of the research problem that may not have been considered.
Nicola K Gale, Gemma Heath, Elaine Cameron, Sabina Rashid, Sabi Redwood.
Received 2012 Dec 17; Accepted 2013 Sep 6; Collection date 2013.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations.
The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study.
Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research.
Keywords: Qualitative research, Qualitative content analysis, Multi-disciplinary research
The Framework Method for the management and analysis of qualitative data has been used since the 1980s [ 1 ]. The method originated in large-scale social policy research but is becoming an increasingly popular approach in medical and health research; however, there is some confusion about its potential application and limitations. In this article we discuss when it is appropriate to use the Framework Method and how it compares to other qualitative analysis methods. In particular, we explore how it can be used in multi-disciplinary health research teams. Multi-disciplinary and mixed methods studies are becoming increasingly commonplace in applied health research. As well as disciplines familiar with qualitative research, such as nursing, psychology and sociology, teams often include epidemiologists, health economists, management scientists and others. Furthermore, applied health research often has clinical representation and, increasingly, patient and public involvement [ 2 ]. We argue that while leadership is undoubtedly required from an experienced qualitative methodologist, non-specialists from the wider team can and should be involved in the analysis process. We then present a step-by-step guide to the application of the Framework Method, using a worked example (see Additional File 1 ) from a published study [ 3 ] to illustrate the main stages of the process. Technical terms are included in the glossary (below). Finally, we discuss the strengths and limitations of the approach.
Analytical framework: A set of codes organised into categories, jointly developed by the researchers involved in analysis, that can be used to manage and organise the data. The framework creates a new structure for the data (rather than the full original accounts given by participants) that helps to summarize and reduce the data in a way that supports answering the research questions.
Analytic memo: A written investigation of a particular concept, theme or problem, reflecting on emerging issues in the data that captures the analytic process (see Additional file 1 , Section 7).
Categories: During the analysis process, codes are grouped into clusters around similar and interrelated ideas or concepts. Categories and codes are usually arranged in a tree diagram structure in the analytical framework. While categories are closely and explicitly linked to the raw data, developing categories is a way to start the process of abstraction of the data (i.e. towards the general rather than the specific or anecdotal).
Charting: Entering summarized data into the Framework Method matrix (see Additional File 1 , Section 6).
Code: A descriptive or conceptual label that is assigned to excerpts of raw data in a process called ‘coding’ (see Additional File 1 , Section 3).
Data: Qualitative data usually needs to be in textual form before analysis. These texts can either be elicited texts (written specifically for the research, such as food diaries), or extant texts (pre-existing texts, such as meeting minutes, policy documents or weblogs), or can be produced by transcribing interview or focus group data, or creating ‘field’ notes while conducting participant-observation or observing objects or social situations.
Indexing: The systematic application of codes from the agreed analytical framework to the whole dataset (see Additional File 1 , Section 5).
Matrix: A spreadsheet containing numerous cells into which summarized data are entered, by codes (columns) and cases (rows) (see Additional File 1 , Section 6).
Themes: Interpretive concepts or propositions that describe or explain aspects of the data, which are the final output of the analysis of the whole dataset. Themes are articulated and developed by interrogating data categories through comparison between and within cases. Usually a number of categories would fall under each theme or sub-theme [ 3 ].
Transcript: A written verbatim (word-for-word) account of a verbal interaction, such as an interview or conversation.
The Framework Method sits within a broad family of analysis methods often termed thematic analysis or qualitative content analysis. These approaches identify commonalities and differences in qualitative data, before focusing on relationships between different parts of the data, thereby seeking to draw descriptive and/or explanatory conclusions clustered around themes. The Framework Method was developed by researchers Jane Ritchie and Liz Spencer, from the Qualitative Research Unit at the National Centre for Social Research in the United Kingdom, in the late 1980s for use in large-scale policy research [ 1 ]. It is now used widely in other areas, including health research [ 3 - 12 ]. Its defining feature is the matrix output: rows (cases), columns (codes) and ‘cells’ of summarised data, providing a structure into which the researcher can systematically reduce the data, in order to analyse it by case and by code [ 1 ]. Most often a ‘case’ is an individual interviewee, but this can be adapted to other units of analysis, such as predefined groups or organisations. While in-depth analyses of key themes can take place across the whole data set, the views of each research participant remain connected to other aspects of their account within the matrix so that the context of the individual’s views is not lost. Comparing and contrasting data is vital to qualitative analysis, and the ability to compare data with ease across cases as well as within individual cases is built into the structure and process of the Framework Method.
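The defining matrix output lends itself to a simple illustration in code. The sketch below is ours rather than part of the method's literature; the case and code names are hypothetical, and the point is only to show how the structure supports reading by case (row) and by code (column).

```python
# A minimal sketch of a Framework Method matrix: rows are cases
# (e.g. interviewees), columns are codes, and each cell holds a
# short summary of that case's data for that code.
# All case names, codes and summaries here are hypothetical.

cases = ["Interviewee_01", "Interviewee_02"]
codes = ["diagnosis_experience", "family_support"]

# The matrix is a dict of dicts: matrix[case][code] -> summary text.
matrix = {case: {code: "" for code in codes} for case in cases}

matrix["Interviewee_01"]["diagnosis_experience"] = (
    "Shocked by late diagnosis; felt GP dismissed early symptoms."
)
matrix["Interviewee_01"]["family_support"] = (
    "Daughter attends all appointments; describes family as 'a rock'."
)
matrix["Interviewee_02"]["diagnosis_experience"] = (
    "Relieved to have a name for symptoms after years of uncertainty."
)

def column(code):
    """Read one code (column) across all cases: comparison between cases."""
    return {case: matrix[case][code] for case in cases}

def row(case):
    """Read one case (row) across all codes: the account stays in context."""
    return matrix[case]
```

Reading down a column compares all cases on one code, while reading along a row keeps each participant's summarized views connected to the rest of their account, which is the property the text above emphasizes.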
The Framework Method provides clear steps to follow and produces highly structured outputs of summarised data. It is therefore useful where multiple researchers are working on a project, particularly in multi-disciplinary research teams where not all members have experience of qualitative data analysis, and for managing large data sets where obtaining a holistic, descriptive overview of the entire data set is desirable. However, caution is recommended before selecting the method as it is not a suitable tool for analysing all types of qualitative data or for answering all qualitative research questions, nor is it an ‘easy’ version of qualitative research for quantitative researchers. Importantly, the Framework Method cannot accommodate highly heterogeneous data, i.e. data must cover similar topics or key issues so that it is possible to categorize it. Individual interviewees may, of course, have very different views or experiences in relation to each topic, which can then be compared and contrasted. The Framework Method is most commonly used for the thematic analysis of semi-structured interview transcripts, which is what we focus on in this article, although it could, in principle, be adapted for other types of textual data [ 13 ], including documents, such as meeting minutes or diaries [ 12 ], or field notes from observations [ 10 ].
For quantitative researchers working with qualitative colleagues or when exploring qualitative research for the first time, the nature of the Framework Method is seductive because its methodical processes and ‘spreadsheet’ approach seem more closely aligned to the quantitative paradigm [ 14 ]. Although the Framework Method is a highly systematic method of categorizing and organizing what may seem like unwieldy qualitative data, it is not a panacea for problematic issues commonly associated with qualitative data analysis such as how to make analytic choices and make interpretive strategies visible and auditable. Qualitative research skills are required to appropriately interpret the matrix, and facilitate the generation of descriptions, categories, explanations and typologies. Moreover, reflexivity, rigour and quality are issues that are requisite in the Framework Method just as they are in other qualitative methods. It is therefore essential that studies using the Framework Method for analysis are overseen by an experienced qualitative researcher, though this does not preclude those new to qualitative research from contributing to the analysis as part of a wider research team.
There are a number of approaches to qualitative data analysis, including those that pay close attention to language and how it is being used in social interaction such as discourse analysis [ 15 ] and ethnomethodology [ 16 ]; those that are concerned with experience, meaning and language such as phenomenology [ 17 , 18 ] and narrative methods [ 19 ]; and those that seek to develop theory derived from data through a set of procedures and interconnected stages such as Grounded Theory [ 20 , 21 ]. Many of these approaches are associated with specific disciplines and are underpinned by philosophical ideas which shape the process of analysis [ 22 ]. The Framework Method, however, is not aligned with a particular epistemological, philosophical, or theoretical approach. Rather it is a flexible tool that can be adapted for use with many qualitative approaches that aim to generate themes.
The development of themes is a common feature of qualitative data analysis, involving the systematic search for patterns to generate full descriptions capable of shedding light on the phenomenon under investigation. In particular, many qualitative approaches use the ‘constant comparative method’ , developed as part of Grounded Theory, which involves making systematic comparisons across cases to refine each theme [ 21 , 23 ]. Unlike Grounded Theory, the Framework Method is not necessarily concerned with generating social theory, but can greatly facilitate constant comparative techniques through the review of data across the matrix.
Perhaps because the Framework Method is so obviously systematic, it has often, as other commentators have noted, been conflated with a deductive approach to qualitative analysis [ 13 , 14 ]. However, the tool itself has no allegiance to either inductive or deductive thematic analysis; where the research sits along this inductive-deductive continuum depends on the research question. A question such as, ‘Can patients give an accurate biomedical account of the onset of their cardiovascular disease?’ is essentially a yes/no question (although it may be nuanced by the extent of their account or by appropriate use of terminology) and so requires a deductive approach to both data collection and analysis (e.g. structured or semi-structured interviews and directed qualitative content analysis [ 24 ]). Similarly, a deductive approach may be taken if basing analysis on a pre-existing theory, such as behaviour change theories, for example in the case of a research question such as ‘How does the Theory of Planned Behaviour help explain GP prescribing?’ [ 11 ]. However, a research question such as, ‘How do people construct accounts of the onset of their cardiovascular disease?’ would require a more inductive approach that allows for the unexpected, and permits more socially-located responses [ 25 ] from interviewees that may include matters of cultural beliefs, habits of food preparation, concepts of ‘fate’, or links to other important events in their lives, such as grief, which cannot be predicted by the researcher in advance (e.g. an interviewee-led open ended interview and grounded theory [ 20 ]). In all these cases, it may be appropriate to use the Framework Method to manage the data. 
The difference would become apparent in how themes are selected: in the deductive approach, themes and codes are pre-selected based on previous literature, previous theories or the specifics of the research question; whereas in the inductive approach, themes are generated from the data though open (unrestricted) coding, followed by refinement of themes. In many cases, a combined approach is appropriate when the project has some specific issues to explore, but also aims to leave space to discover other unexpected aspects of the participants’ experience or the way they assign meaning to phenomena. In sum, the Framework Method can be adapted for use with deductive, inductive, or combined types of qualitative analysis. However, there are some research questions where analysing data by case and theme is not appropriate and so the Framework Method should be avoided. For instance, depending on the research question, life history data might be better analysed using narrative analysis [ 19 ]; recorded consultations between patients and their healthcare practitioners using conversation analysis [ 26 ]; and documentary data, such as resources for pregnant women, using discourse analysis [ 27 ].
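The inductive–deductive continuum described above can be made concrete with a small sketch (ours, with invented code names): a deductive codebook is fixed in advance from theory or literature, while inductive open coding lets the codebook grow to admit the unexpected.

```python
# Hypothetical sketch of the coding continuum. The code labels are
# invented for illustration (the deductive set loosely echoes
# Theory of Planned Behaviour constructs).

# Deductive: codes fixed in advance from theory or literature.
deductive_codebook = {"attitude", "subjective_norm", "perceived_control"}

# Inductive: start empty; open coding generates codes from the data.
inductive_codebook = set()

def code_passage(candidate_codes, codebook, allow_new_codes):
    """Apply codes to one passage; open coding may extend the codebook."""
    applied = []
    for code in candidate_codes:
        if code in codebook:
            applied.append(code)
        elif allow_new_codes:        # inductive: admit the unexpected
            codebook.add(code)
            applied.append(code)
        # deductive: codes outside the pre-defined framework are not applied
    return applied

# The same passage, coded both ways: a socially-located code such as
# 'fate' survives only under inductive coding.
deductive_result = code_passage(["attitude", "fate"], deductive_codebook, False)
inductive_result = code_passage(["attitude", "fate"], inductive_codebook, True)
```

A combined approach, as the text notes, would simply seed the codebook with the pre-selected codes and still allow new ones to be added.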
It is not within the scope of this paper to consider study design or data collection in any depth, but before moving on to describe the Framework Method analysis process, it is worth taking a step back to consider briefly what needs to happen before analysis begins. The selection of analysis method should have been considered at the proposal stage of the research and should fit with the research questions and overall aims of the study. Many qualitative studies, particularly ones using inductive analysis, are emergent in nature; this can be a challenge and the researchers can only provide an “imaginative rehearsal” of what is to come [ 28 ]. In mixed methods studies, the role of the qualitative component within the wider goals of the project must also be considered. In the data collection stage, resources must be allocated for properly trained researchers to conduct the qualitative interviewing because it is a highly skilled activity. In some cases, a research team may decide that they would like to use lay people, patients or peers to do the interviews [ 29 - 32 ] and in this case they must be properly trained and mentored which requires time and resources. At this early stage it is also useful to consider whether the team will use Computer Assisted Qualitative Data Analysis Software (CAQDAS), which can assist with data management and analysis.
As any form of qualitative or quantitative analysis is not a purely technical process, but influenced by the characteristics of the researchers and their disciplinary paradigms, critical reflection throughout the research process is paramount, including in the design of the study, the construction or collection of data, and the analysis. All members of the team should keep a research diary, where they record reflexive notes, impressions of the data and thoughts about analysis throughout the process. Experienced qualitative researchers become more skilled at sifting through data and analysing it in a rigorous and reflexive way. They cannot be too attached to certainty, but must remain flexible and adaptive throughout the research in order to generate rich and nuanced findings that embrace and explain the complexity of real social life and can be applied to complex social issues. It is important to remember when using the Framework Method that, unlike quantitative research where data collection and data analysis are strictly sequential and mutually exclusive stages of the research process, in qualitative analysis there is, to a greater or lesser extent depending on the project, ongoing interplay between data collection, analysis, and theory development. For example, new ideas or insights from participants may suggest potentially fruitful lines of enquiry, or close analysis might reveal subtle inconsistencies in an account which require further exploration.
Stage 1: transcription.
A good quality audio recording and, ideally, a verbatim (word-for-word) transcription of the interview are needed. For Framework Method analysis, it is not necessarily important to include the conventions of dialogue transcription, which can be difficult to read (e.g. pauses or two people talking simultaneously), because the content is what is of primary interest. Transcripts should have large margins and adequate line spacing for later coding and making notes. The process of transcription is a good opportunity to become immersed in the data and is to be strongly encouraged for new researchers. However, in some projects, the decision may be made that it is a better use of resources to outsource this task to a professional transcriber.
Stage 2: familiarisation with the interview.
Becoming familiar with the whole interview, using the audio recording and/or transcript and any contextual or reflective notes recorded by the interviewer, is a vital stage in interpretation. It can also be helpful to re-listen to all or parts of the audio recording. In multi-disciplinary or large research projects, those involved in analysing the data may be different from those who conducted or transcribed the interviews, which makes this stage particularly important. One margin can be used to record any analytical notes, thoughts or impressions.
Stage 3: coding.
After familiarization, the researcher carefully reads the transcript line by line, applying a paraphrase or label (a ‘code’) that describes what they have interpreted in the passage as important. In more inductive studies, ‘open coding’ takes place at this stage, i.e. coding anything that might be relevant from as many different perspectives as possible. Codes could refer to substantive things (e.g. particular behaviours, incidents or structures), values (e.g. those that inform or underpin certain statements, such as a belief in evidence-based medicine or in patient choice), emotions (e.g. sorrow, frustration, love) and more impressionistic/methodological elements (e.g. interviewee found something difficult to explain, interviewee became emotional, interviewer felt uncomfortable) [ 33 ]. In purely deductive studies, the codes may have been pre-defined (e.g. by an existing theory, or specific areas of interest to the project), so this stage may not be strictly necessary and you could move straight on to indexing, although even in a broadly deductive approach it is generally helpful to do some open coding on at least a few of the transcripts to ensure important aspects of the data are not missed. Coding aims to classify all of the data so that it can be compared systematically with other parts of the data set. At least two researchers (or at least one from each discipline or speciality in a multi-disciplinary research team) should independently code the first few transcripts, if feasible. Patients, public involvement representatives or clinicians can also be productively involved at this stage, because they can offer alternative viewpoints, ensuring that one particular perspective does not dominate. It is vital in inductive coding to look out for the unexpected and not simply to code in a literal, descriptive way, so the involvement of people with different perspectives can greatly aid this.
As well as getting a holistic impression of what was said, coding line-by-line can often alert the researcher to consider that which may ordinarily remain invisible because it is not clearly expressed or does not ‘fit’ with the rest of the account. In this way the developing analysis is challenged; to reconcile and explain anomalies in the data can make the analysis stronger. Coding can also be done digitally using CAQDAS, which is a useful way to keep track automatically of new codes. However, some researchers prefer to do the early stages of coding with a paper and pen, and only start to use CAQDAS once they reach Stage 5 (see below).
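As a rough illustration of what this stage produces, each applied code can be thought of as a record linking a passage to a label and to the kind of element it captures. This is our sketch only; the transcript ids, line numbers and code labels are invented.

```python
# Hypothetical sketch of open coding output: each record ties a code
# to a transcript passage and notes the kind of thing it captures
# (substantive, value, emotion, or a methodological observation).
from dataclasses import dataclass

@dataclass
class CodedExcerpt:
    transcript: str   # anonymised transcript id (invented)
    lines: tuple      # (first_line, last_line) in the transcript
    code: str         # descriptive or conceptual label
    kind: str         # 'substantive' | 'value' | 'emotion' | 'methodological'

codings = [
    CodedExcerpt("T01", (12, 15), "delayed_help_seeking", "substantive"),
    CodedExcerpt("T01", (16, 16), "frustration", "emotion"),
    CodedExcerpt("T01", (30, 34), "belief_in_patient_choice", "value"),
    CodedExcerpt("T01", (30, 34), "hard_to_articulate", "methodological"),
]

# The same passage can carry several codes (lines 30-34 above carry two).
codes_for_lines = [c.code for c in codings if c.lines == (30, 34)]
```

Keeping the line references in each record is what later makes an audit trail from raw data to themes possible, whether the records live on paper margins or in CAQDAS.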
Stage 4: developing a working analytical framework.
After coding the first few transcripts, all researchers involved should meet to compare the labels they have applied and agree on a set of codes to apply to all subsequent transcripts. Codes can be grouped together into categories (using a tree diagram if helpful), which are then clearly defined. This forms a working analytical framework. It is likely that several iterations of the analytical framework will be required before no additional codes emerge. It is always worth having an ‘other’ code under each category to avoid ignoring data that does not fit; the analytical framework is never ‘final’ until the last transcript has been coded.
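A working analytical framework can be pictured as a simple tree of categories and codes. The sketch below is hypothetical (all category and code names are invented); it shows the iterative growth of the framework and the ‘other’ code kept under each category so that data that does not fit is not ignored.

```python
# Hypothetical sketch of a working analytical framework: codes grouped
# into categories, each category ending with an 'other' code.
framework = {
    "experiences_of_diagnosis": [
        "delayed_help_seeking", "communication_with_gp", "other"],
    "support_networks": [
        "family_support", "peer_support", "other"],
}

def add_code(category, code):
    """Iterate the framework: new codes can emerge until the last
    transcript is coded. 'other' is always kept as the final code."""
    framework.setdefault(category, ["other"])
    if code not in framework[category]:
        framework[category].insert(-1, code)  # insert before 'other'
```

Each call to `add_code` models one round of the team agreeing a revised framework; a brand-new category starts with only its ‘other’ code.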
Stage 5: applying the analytical framework.
The working analytical framework is then applied by indexing subsequent transcripts using the existing categories and codes. Each code is usually assigned a number or abbreviation for easy identification (so the full names of the codes do not have to be written out each time) and written directly onto the transcripts. CAQDAS is particularly useful at this stage because it can speed up the process and ensures that, at later stages, data are easily retrievable. It is worth noting that, unlike software for statistical analyses, which carries out calculations when correctly instructed, putting the data into a qualitative analysis software package does not analyse the data; it is simply an effective way of storing and organising the data so that they are accessible for the analysis process.
Stage 6: charting data into the framework matrix.
Qualitative data are voluminous (an hour of interview can generate 15–30 pages of text) and being able to manage and summarize (reduce) data is a vital aspect of the analysis process. A spreadsheet is used to generate a matrix and the data are ‘charted’ into it. Charting involves summarizing the data by category from each transcript. Good charting requires an ability to strike a balance between reducing the data on the one hand and retaining the original meanings and ‘feel’ of the interviewees’ words on the other. The chart should include references to interesting or illustrative quotations. These can be tagged automatically if you are using CAQDAS to manage your data (NVivo version 9 onwards can generate framework matrices); otherwise a capital ‘Q’, an (anonymized) transcript number, and a page and line reference will suffice. It is helpful in multi-disciplinary teams to compare and contrast styles of summarizing in the early stages of the analysis process to ensure consistency within the team. Any abbreviations used should be agreed by the team. Once members of the team are familiar with the analytical framework and well practised at coding and charting, it will take, on average, about half a day per hour-long transcript to reach this stage. In the early stages, it takes much longer.
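Charting can likewise be sketched in miniature (our illustration; the summaries, ids and references are invented). Each cell holds a summary for one case and one category, with illustrative quotations tagged by a capital ‘Q’, transcript number, and page and line reference, as described above.

```python
# Hypothetical sketch of charting: summarized data entered into matrix
# cells by case and category, with quote tags of the form
# 'Q <transcript> p.<page> l.<line>' for illustrative quotations.

def quote_tag(transcript, page, line):
    """Build a reference tag for an illustrative quotation."""
    return f"Q {transcript} p.{page} l.{line}"

chart = {}  # chart[case][category] -> summary text with quote tags

def chart_entry(case, category, summary, quotes=()):
    """Enter a summary into one matrix cell, appending any quote tags."""
    tags = "; ".join(quote_tag(*q) for q in quotes)
    chart.setdefault(case, {})[category] = (
        f"{summary} [{tags}]" if tags else summary
    )

chart_entry(
    "T01", "experiences_of_diagnosis",
    "Symptoms dismissed twice before referral; describes relief at naming.",
    quotes=[("T01", 3, 45)],
)
```

The balance the text describes lives in the summary string itself: short enough to reduce the data, close enough to the interviewee's own words to keep their ‘feel’, with the tag preserving the route back to the raw transcript.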
Stage 7: interpreting the data.
It is useful throughout the research to have a separate notebook or computer file to note down impressions, ideas and early interpretations of the data. It may be worth breaking off at any stage to explore an interesting idea, concept or potential theme by writing an analytic memo [ 20 , 21 ] to then discuss with other members of the research team, including lay and clinical members. Gradually, characteristics of and differences between the data are identified, perhaps generating typologies, interrogating theoretical concepts (either prior concepts or ones emerging from the data) or mapping connections between categories to explore relationships and/or causality. If the data are rich enough, the findings generated through this process can go beyond description of particular cases to explanation of, for example, reasons for the emergence of a phenomenon, predicting how an organisation or other social actor is likely to instigate or respond to a situation, or identifying areas that are not functioning well within an organisation or system. It is worth noting that this stage often takes longer than anticipated and that any project plan should ensure sufficient time is allocated to meetings and individual researcher time for interpretation and writing up of findings (see Additional file 1 , Section 7).
The Framework Method has been developed and used successfully in research for over 25 years, and has recently become a popular analysis method in qualitative health research. The issue of how to assess quality in qualitative research has been highly debated [ 20 , 34 - 40 ], but ensuring rigour and transparency in analysis is a vital component. There are, of course, many ways to do this but in the Framework Method the following are helpful:
•Summarizing the data during charting, as well as being a practical way to reduce the data, means that all members of a multi-disciplinary team, including lay, clinical and (quantitative) academic members can engage with the data and offer their perspectives during the analysis process without necessarily needing to read all the transcripts or be involved in the more technical parts of analysis.
•Charting also ensures that researchers pay close attention to describing the data using each participant’s own subjective frames and expressions in the first instance, before moving onto interpretation.
•The summarized data is kept within the wider context of each case, thereby encouraging thick description that pays attention to complex layers of meaning and understanding [ 38 ].
•The matrix structure is visually straightforward and can facilitate recognition of patterns in the data by any member of the research team, including through drawing attention to contradictory data, deviant cases or empty cells.
•The systematic procedure (described in this article) makes the method easy to follow, even for multi-disciplinary teams and/or with large data sets.
•It is flexible enough that non-interview data (such as field notes taken during the interview or reflexive considerations) can be included in the matrix.
•It is not aligned with a particular epistemological viewpoint or theoretical approach and therefore can be adapted for use in inductive or deductive analysis or a combination of the two (e.g. using pre-existing theoretical constructs deductively, then revising the theory with inductive aspects; or using an inductive approach to identify themes in the data, before returning to the literature and using theories deductively to help further explain certain themes).
•It is easy to identify relevant data extracts to illustrate themes and to check whether there is sufficient evidence for a proposed theme.
•Finally, there is a clear audit trail from original raw data to final themes, including the illustrative quotes.
There are also a number of potential pitfalls to this approach:
•The systematic approach and matrix format, as we noted in the background, is intuitively appealing to those trained quantitatively, but the ‘spreadsheet’ look perhaps further increases the temptation for those without an in-depth understanding of qualitative research to attempt to quantify qualitative data (e.g. “13 out of 20 participants said X”). This kind of statement is clearly meaningless because the sampling in qualitative research is not designed to be representative of a wider population, but purposive to capture diversity around a phenomenon [ 41 ].
•Like all qualitative analysis methods, the Framework Method is time consuming and resource-intensive. When involving multiple stakeholders and disciplines in the analysis and interpretation of the data, as is good practice in applied health research, the time needed is extended. This time needs to be factored into the project proposal at the pre-funding stage.
•There is a high training component to successfully using the method in a new multi-disciplinary team. Depending on their role in the analysis, members of the research team may have to learn how to code, index, and chart data, to think reflexively about how their identities and experience affect the analysis process, and/or to learn about the methods of generalisation (i.e. analytic generalisation and transferability, rather than statistical generalisation [ 41 ]) in order to interpret legitimately the meaning and significance of the data.
While the Framework Method is amenable to the participation of non-experts in data analysis, it is critical to the successful use of the method that an experienced qualitative researcher leads the project (even if the overall lead for a large mixed methods study is a different person). The qualitative lead would ideally be joined by other researchers with at least some prior training in or experience of qualitative analysis. The responsibilities of the lead qualitative researcher are: to contribute to study design, project timelines and resource planning; to mentor junior qualitative researchers; to train clinical, lay and other (non-qualitative) academics to contribute as appropriate to the analysis process; to facilitate analysis meetings in a way that encourages critical and reflexive engagement with the data and other team members; and finally to lead the write-up of the study.
We have argued that Framework Method studies can be conducted by multi-disciplinary research teams that include, for example, healthcare professionals, psychologists, sociologists, economists, and lay people/service users. The inclusion of so many different perspectives means that decision-making in the analysis process can be very time consuming and resource-intensive. It may require extensive, reflexive and critical dialogue about how the ideas expressed by interviewees and identified in the transcript are related to pre-existing concepts and theories from each discipline, and to the real ‘problems’ in the health system that the project is addressing. This kind of team effort is, however, an excellent forum for driving forward interdisciplinary collaboration, as well as clinical and lay involvement in research, to ensure that ‘the whole is greater than the sum of the parts’, by enhancing the credibility and relevance of the findings.
The Framework Method is appropriate for thematic analysis of textual data, particularly interview transcripts, where it is important to be able to compare and contrast data by themes across many cases, while also situating each perspective in context by retaining the connection to other aspects of each individual’s account. Experienced qualitative researchers should lead and facilitate all aspects of the analysis, although the Framework Method’s systematic approach makes it suitable for involving all members of a multi-disciplinary team. An open, critical and reflexive approach from all team members is essential for rigorous qualitative analysis.
Acceptance of the complexity of real life health systems and the existence of multiple perspectives on health issues is necessary to produce high quality qualitative research. If done well, qualitative studies can shed explanatory and predictive light on important phenomena, relate constructively to quantitative parts of a larger study, and contribute to the improvement of health services and development of health policy. The Framework Method, when selected and implemented appropriately, can be a suitable tool for achieving these aims through producing credible and relevant findings.
•The Framework Method is an excellent tool for supporting thematic (qualitative content) analysis because it provides a systematic model for managing and mapping the data.
•The Framework Method is most suitable for analysis of interview data, where it is desirable to generate themes by making comparisons within and between cases.
•The management of large data sets is facilitated by the Framework Method as its matrix form provides an intuitively structured overview of summarised data.
•The clear, step-by-step process of the Framework Method makes it suitable for interdisciplinary and collaborative projects.
•The use of the method should be led and facilitated by an experienced qualitative researcher.
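The matrix structure mentioned above can be pictured concretely. The following is a minimal, hypothetical sketch (not the authors' software or procedure): rows are cases (participants), columns are themes from the analytic framework, and each cell holds a short summary of that participant's data for that theme. The case IDs, theme names, and summaries are invented for illustration.

```python
from collections import defaultdict

def chart(entries):
    """Build a case-by-theme matrix from (case, theme, summary) triples."""
    matrix = defaultdict(dict)
    for case, theme, summary in entries:
        matrix[case][theme] = summary
    return matrix

# Invented example data: charted summaries, not verbatim transcript text.
entries = [
    ("P01", "barriers", "cost of travel to clinic"),
    ("P01", "enablers", "family support"),
    ("P02", "barriers", "limited appointment times"),
]

matrix = chart(entries)

# Read down a "column" to compare cases within one theme...
barriers = {case: row.get("barriers") for case, row in matrix.items()}
print(barriers["P02"])  # limited appointment times

# ...or across a row to keep each case's whole account in context.
print(matrix["P01"])
```

The two access patterns at the end mirror the method's key property: cross-case comparison by theme, while each row preserves the connection back to the individual's full account.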
The authors declare that they have no competing interests.
All authors were involved in the development of the concept of the article and drafting the article. NG wrote the first draft of the article, GH and EC prepared the text and figures related to the illustrative example, SRa did the literature search to identify if there were any similar articles currently available and contributed to drafting of the article, and SRe contributed to drafting of the article and the illustrative example. All authors read and approved the final manuscript.
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2288/13/117/prepub
Illustrative Example of the use of the Framework Method.
Nicola K Gale, Email: [email protected].
Gemma Heath, Email: [email protected].
Elaine Cameron, Email: [email protected].
Sabina Rashid, Email: [email protected].
Sabi Redwood, Email: [email protected].
All authors were funded by the National Institute for Health Research (NIHR) through the Collaborations for Leadership in Applied Health Research and Care for Birmingham and Black Country (CLAHRC-BBC) programme. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit the readers to pass judgment on its quality and to commend it as good research when objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the aftereffect of socio-institutional procedures and existing paradigmatic conducts. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.
“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010 , p. 837)
What counts as good qualitative research is highly debatable. Qualitative research encompasses numerous methods, grounded in diverse philosophical perspectives. Bryman et al. ( 2008 , p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well‐known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. These debates have taken place across many areas of science and technology. Examples include various areas of psychology: general psychology (Madill et al., 2000 ); counseling psychology (Morrow, 2005 ); and clinical psychology (Barker & Pistrang, 2005 ), and other disciplines of social sciences: social policy (Bryman et al., 2008 ); health research (Sparkes, 2001 ); business and management research (Johnson et al., 2006 ); information systems (Klein & Myers, 1999 ); and environmental studies (Reid & Gough, 2000 ). In the literature, these debates are enthused by the impression that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are based on the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000 ; Schwandt, 1996 ). The existence of methodological diversity led to the formulation of different sets of criteria applicable to qualitative research.
Among qualitative researchers, the dilemma of governing the measures to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004 ) are not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008 ; Lather, 2004 ). Indeed, Howe ( 2004 ) claims that the prevailing paradigm in educational research is scientifically based experimental research. Hypotheses and conjectures about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the prominence of harmonizing match for purpose on research paradigm, the epistemological stance of the researcher, and the choice of methodology. Researchers have been reprimanded concerning this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000 ).
In general, qualitative research tends to come from a very different paradigmatic stance and intrinsically demands distinctive and out-of-the-ordinary criteria for evaluating good research and varieties of research contributions that can be made. This review attempts to present a series of evaluative criteria for qualitative researchers, arguing that their choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). This review aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research based on the concept of diverse paradigmatic stances. Contrary to the existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research in different epistemological and ontological perspectives. This review is also intended to provide guidelines for the acceleration of future developments and dialogues among qualitative researchers in the context of assessing the qualitative research.
The rest of this review article is structured in the following fashion: Sect. Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies. This section is followed by a summary of the strategies to improve the quality of qualitative research in Sect. Improving Quality: Strategies . Section How to Assess the Quality of the Research Findings? provides details on how to assess the quality of the research findings. After that, some of the quality checklists (as tools to evaluate quality) are discussed in Sect. Quality Checklists: Tools for Assessing the Quality . At last, the review ends with the concluding remarks presented in Sect. Conclusions, Future Directions and Outlook . Some prospects in qualitative research for enhancing its quality and usefulness in the social and techno-scientific research community are also presented in Sect. Conclusions, Future Directions and Outlook .
For this review, a comprehensive literature search was performed across several databases using generic search terms such as Qualitative Research , Criteria , etc . The following databases were chosen for the literature search based on the high number of results: IEEE Explore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. The synonyms for these keywords were collected and arranged in a logical structure (see Table 1 ). All journal and conference publications from 1950 to 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to focus the search on criteria for good qualitative research, an inclusion criterion was applied to the search string.
From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2 ). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.
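The search and screening procedure described above can be sketched in a few lines. This is an illustrative reconstruction (not the authors' actual search code): it assembles a Boolean query from the keywords named in the text and tracks the counts the review reports (765 records retrieved, 426 screened after de-duplication, 45 included).

```python
# Keywords listed in the methods section; synonyms within a concept would
# normally be OR-ed and concepts AND-ed together. Here the five main terms
# are simply OR-ed as a minimal example.
keywords = ["qualitative research", "criteria", "quality", "assessment", "validity"]
query = " OR ".join(f'"{k}"' for k in keywords)
print(query)

# Screening counts reported in the text.
retrieved, screened, included = 765, 426, 45
duplicates_removed = retrieved - screened  # 339 duplicate records
print(f"{duplicates_removed} duplicates removed; {included} articles included")
```

The implied 339 removed duplicates follow directly from the two counts the text gives; a full PRISMA flow would also record exclusions at the title/abstract and full-text stages.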
Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA, i.e., “preferred reporting items for systematic reviews and meta-analyses”, is employed in systematic reviews to refine the quality of reporting.
PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records
Fundamental criteria: general research quality.
Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3 . Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big‐tent criteria for excellent qualitative research” (Tracy, 2010 ). Tracy argues that high-quality qualitative work should formulate criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles to assess the quality of a qualitative study (Mays & Pope, 2020 ). Nassaji ( 2020 ) argues that good qualitative research should be robust, well informed, and thoroughly documented.
All qualitative researchers follow highly abstract principles which bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net, which encompasses the researcher’s epistemological, ontological, and methodological premises, is referred to as a paradigm, or an interpretive structure, a “Basic set of beliefs that guides action” (Guba, 1990 ). Four major interpretive paradigms structure the qualitative research: positivist and postpositivist, constructivist interpretive, critical (Marxist, emancipatory), and feminist poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research, and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity are proven to be incompatible (Horsburgh, 2003 ). In addition, a series of questions have been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) for meticulous assessment and endorsement of qualitative research (Morse, 2003 ). Hammersley ( 2007 ) also suggests that guiding principles for qualitative research are advantageous, but methodological pluralism should not be simply acknowledged for all qualitative approaches. Seale ( 1999 ) also points out the significance of methodological cognizance in research studies.
Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.
Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba ( 1986 ) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016 ; Shenton, 2004 ) also presented such strategies. A brief description of these strategies is shown in Table 6 .
It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021 ). In general, the guiding principle pertaining to generalizability speaks about inducing and comprehending knowledge to synthesize interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.
Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and applications of results to future thinking, study, and practice (Johnson et al., 2020 ). The synergy and interrelationship of these components signifies their role to different stances of a qualitative research study.
Essential elements of a conceptual framework
In a nutshell, to assess the rationale of a study, its conceptual framework and research question(s), quality criteria must take account of the following: lucid context for the problem statement in the introduction; well-articulated research problems and questions; precise conceptual framework; distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would expedite the quality of qualitative research.
The inclusion of quotes or similar research data enhances the confirmability in the write-up of the findings. The use of expressions (for instance, “80% of all respondents agreed that” or “only one of the interviewees mentioned that”) may also quantify qualitative findings (Stenfors et al., 2020 ). On the other hand, the persuasive reason for “why this may not help in intensifying the research” has also been provided (Monrouxe & Rees, 2020 ). Further, the Discussion and Conclusion sections of an article also prove robust markers of high-quality qualitative research, as elucidated in Table 8 .
Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and recklessly concerning the research context, these checklists may be counterproductive. I recommend that such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research. However, considering enormous variations in the authors’ theoretical and philosophical contexts, I would emphasize that heavy reliance on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:
The most commonly used framework is Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007 ). This framework is recommended by some journals to be followed by the authors during article submission.
Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).
Also, Tracy ( 2010 ) and Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research relevant across methods and approaches.
Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.
This work presents a broad review of the criteria for good qualitative research. In addition, this article presents an exploratory analysis of the essential elements in qualitative research that can enable the readers of qualitative work to judge it as good research when objectively and adequately utilized. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieve rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal and versatile one-size-fits-all guideline for evaluating the quality of qualitative research does not exist. In other words, this review also emphasizes the non-existence of a set of common guidelines among qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated uniquely on account of its own distinctive features for different epistemological and disciplinary positions. Owing to the sensitivity of the worth of qualitative research towards the specific context and the type of paradigmatic stance, researchers should themselves analyze what approaches can be and must be tailored to ensemble the distinct characteristics of the phenomenon under investigation. Although this article does not assert to put forward a magic bullet and to provide a one-stop solution for dealing with dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist the researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at, when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. 
Bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip the researchers as well as reviewers to make well-versed judgment about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of research to the research objectives, research questions and design, theoretical foundations, and from approaches of collecting data to analyzing the results, to deriving inferences) generally enhances the quality of qualitative research.
Irrefutably, qualitative research is a vivacious and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. In addition, not surprisingly, owing to the sprouting and varied features of qualitative research, no consensus has been reached to date. Researchers have reflected various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021 ; McGinley et al., 2021 ). Following are some prospects and a few recommendations put forward towards the maturation of qualitative research and its quality evaluation:
In general, most of the manuscript and grant reviewers are not qualitative experts. Hence, it is more likely that they would prefer to adopt a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to utilize the same approaches and conducts among all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria to evaluate qualitative research from within the suitable theoretical and methodological context.
There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ, SRQR) to interrogate their applicability on different aspects (along with their epistemological ramifications).
Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.
Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.
It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.
Frequent debates on the use of evaluative criteria are required to address some as-yet unresolved issues (including the applicability of a single set of criteria across multi-disciplinary contexts). Such debates would not only benefit the group of qualitative researchers themselves, but primarily assist in augmenting the well-being and vivacity of the entire discipline.
To conclude, I speculate that the criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialog and debate – about criteria for excellent qualitative research and the underpinnings of the discipline more broadly – and, therefore, help improve the quality of a qualitative study. Further, I anticipate that this review will assist the researchers to contemplate on the quality of their own research, to substantiate research design and help the reviewers to review qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) by the cohesive efforts of qualitative researchers of different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework (of guiding principles) paves the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the impending prospects of socio-techno-educational research.
Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.
Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.
Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.
Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.
CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.
Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.
Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.
Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445
Guba, E. G. (1990). The paradigm dialog. In Alternative paradigms conference, mar, 1989, Indiana u, school of education, San Francisco, ca, us . Sage Publications, Inc.
Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.
Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.
Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.
Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.
Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.
Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.
Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.
Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.
Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.
Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.
Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.
Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.
Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.
Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15
McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.
Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, US.
Open access funding provided by TU Wien (TUW).
Author and affiliation: Drishti Yadav, Faculty of Informatics, Technische Universität Wien, 1040 Vienna, Austria.
Correspondence to Drishti Yadav.
Conflict of interest: The author declares no conflict of interest.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Yadav, D. (2022). Criteria for good qualitative research: A comprehensive review. The Asia-Pacific Education Researcher, 31, 679–689. https://doi.org/10.1007/s40299-021-00619-0
Accepted: 28 August 2021; published: 18 September 2021; issue date: December 2022.