Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31, pages 679–689 (2022)


  • Drishti Yadav (ORCID: orcid.org/0000-0002-2974-0323)


Abstract

This review synthesizes published evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research across a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, the references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features of qualitative research that permit readers to judge its quality and to commend it as good research when objectively and adequately applied. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations for improving the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the outcome of socio-institutional procedures and prevailing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor desirable. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with the applicable norms and decisive factors for evaluating qualitative research from within its theoretical and methodological framework of origin.


Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010 , p. 837)

What constitutes good qualitative research is highly debatable. Qualitative research encompasses numerous methods grounded in diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well-known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. These debates on the assessment of qualitative research have taken place in many areas of science and technology. Examples include various areas of psychology: general psychology (Madill et al., 2000); counseling psychology (Morrow, 2005); and clinical psychology (Barker & Pistrang, 2005), as well as other social science disciplines: social policy (Bryman et al., 2008); health research (Sparkes, 2001); business and management research (Johnson et al., 2006); information systems (Klein & Myers, 1999); and environmental studies (Reid & Gough, 2000). In the literature, these debates are driven by the view that blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are grounded in the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity has led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of deciding the measures by which to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004). Indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Assumptions about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the importance of matching the research purpose to the research paradigm, the epistemological stance of the researcher, and the choice of methodology. Researchers have been cautioned about this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000).

In general, qualitative research tends to come from a very different paradigmatic stance and therefore demands distinctive criteria for evaluating good research and the varieties of research contributions that can be made. This review presents a series of evaluative criteria for qualitative researchers, arguing that the choice of criteria must be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). It aims to assist researchers in identifying some of the indispensable features, or markers, of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research from diverse paradigmatic stances. Unlike existing reviews, this review also suggests some critical directions for improving the quality of qualitative research across different epistemological and ontological perspectives. It is also intended to provide guidelines that accelerate future developments and dialogues among qualitative researchers in the context of assessing qualitative research.

The rest of this review is structured as follows: Sect. Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies. This is followed by a summary of strategies to improve the quality of qualitative research in Sect. Improving Quality: Strategies. Section How to Assess the Quality of the Research Findings? details how to assess the quality of research findings. After that, some quality checklists (as tools for evaluating quality) are discussed in Sect. Quality Checklists: Tools for Assessing the Quality. Finally, the review ends with concluding remarks in Sect. Conclusions, Future Directions and Outlook, which also presents some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community.

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as Qualitative Research, Criteria, etc. The following databases were chosen for the literature search based on the high number of results they returned: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. Synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All journal and conference publications from 1950 to 2021 were considered in the search. Additional articles extracted from the references of the papers identified in the electronic search were also included. Because a large number of publications on qualitative research were retrieved during the initial screening, an inclusion criterion focusing on criteria for good qualitative research was incorporated into the search string.
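To make the search strategy concrete, the sketch below shows one way such Boolean strings can be assembled programmatically from keyword groups and their synonyms. This is an illustrative Python sketch, not the tooling used in this review; the keyword groups shown are assumed examples and merely stand in for the full synonym table (Table 1).

```python
from typing import Dict, List

def or_block(terms: List[str]) -> str:
    """Join the synonyms of one keyword group with OR, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

def build_query(groups: Dict[str, List[str]]) -> str:
    """Combine the OR-block of each keyword group with AND, as in the strategy above."""
    return " AND ".join(or_block(terms) for terms in groups.values())

# Illustrative keyword groups (assumed); the actual synonyms are arranged in Table 1.
keyword_groups = {
    "topic": ["qualitative research", "qualitative study"],
    "focus": ["criteria", "quality", "assessment", "validity"],
}

print(build_query(keyword_groups))
# ("qualitative research" OR "qualitative study") AND (criteria OR quality OR assessment OR validity)
```

A string built this way would still need to be adapted to each database's own syntax, since field tags, truncation, and proximity operators differ across IEEE Xplore, PubMed, and Web of Science.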

From the selected databases, the search retrieved a total of 765 publications. Duplicate records were then removed. After that, the remaining 426 publications were screened for relevance on the basis of title and abstract, using the inclusion and exclusion criteria in Table 2. Publications focusing on evaluation criteria for good qualitative research were included, whereas works that delivered only theoretical concepts of qualitative research were excluded. Based on this screening and eligibility assessment, 45 research articles that offered explicit criteria for evaluating the quality of qualitative research were identified as relevant to this review.

Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is employed in systematic reviews to improve the quality of reporting.

Figure 1: PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, summarized in Table 3. In addition, the criteria outlined in Table 4 capture the various approaches to evaluating and assessing the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big-tent criteria for excellent qualitative research” (Tracy, 2010). Tracy argues that high-quality qualitative work should be judged by criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and on the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles for assessing the quality of a qualitative study (Mays & Pope, 2020). Nassaji (2020) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles that bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net that encompasses the researcher’s epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive framework, a “basic set of beliefs that guides action” (Guba, 1990). Four major interpretive paradigms structure qualitative research: positivist and postpositivist; constructivist-interpretive; critical (Marxist, emancipatory); and feminist-poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, quantitative conceptualizations of reliability and validity have proven incompatible with the evaluation of qualitative research (Horsburgh, 2003). In addition, a series of questions has been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003). Hammersley (2007) also suggests that guiding principles for qualitative research are advantageous, but that methodological pluralism should not simply be acknowledged for all qualitative approaches. Seale (1999) likewise points out the significance of methodological awareness in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the outcome of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have also presented such strategies. A brief description of these strategies is given in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021). In general, the guiding principle of generalizability concerns inducing and comprehending knowledge so as to synthesize the interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure 2 shows the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and application of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components signify their role in the different stages of a qualitative research study.

Figure 2: Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework, and its research question(s), quality criteria must take account of the following: a lucid context for the problem statement in the introduction; well-articulated research problems and questions; a precise conceptual framework; a distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would enhance the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances the confirmability of the write-up of the findings. Expressions such as “80% of all respondents agreed that” or “only one of the interviewees mentioned that” may also be used to quantify qualitative findings (Stenfors et al., 2020). On the other hand, a persuasive argument for why this may not help in strengthening the research has also been made (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also serve as robust markers of high-quality qualitative research, as elucidated in Table 8.

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and without regard for the research context, these checklists may be counterproductive. I suggest that such lists and guiding principles can assist in pinpointing the markers of high-quality qualitative research; however, given the enormous variation in authors’ theoretical and philosophical contexts, heavy reliance on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

  • The most commonly used framework is the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007). Some journals recommend that authors follow this framework during article submission.
  • The Standards for Reporting Qualitative Research (SRQR) is another checklist, created particularly for medical education (O’Brien et al., 2014).
  • Tracy (2010) and the Critical Appraisal Skills Programme (CASP, 2021) offer criteria for qualitative research that are relevant across methods and approaches.
  • Further, researchers have outlined other criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021) provides a quick reference to specific questions addressing different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work has presented a broad review of the criteria for good qualitative research. In addition, it has offered an exploratory analysis of the essential elements of qualitative research that can enable readers of qualitative work to judge it as good research when objectively and adequately applied. Some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieving rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist; in other words, it emphasizes the non-existence of a common set of guidelines among qualitative researchers. At the same time, it reinforces that each qualitative approach should be treated on its own terms, on account of its distinctive features and its epistemological and disciplinary position. Because the worth of qualitative research is sensitive to the specific context and paradigmatic stance, researchers must themselves analyze which approaches can, and should, be tailored to suit the distinct characteristics of the phenomenon under investigation.

Although this article does not claim to put forward a magic bullet or a one-stop solution for dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist researchers in improving their qualitative studies. It provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. By bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, it strives to equip researchers as well as reviewers to make well-informed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of the research, its objectives, research questions and design, and theoretical foundations, to the approaches for collecting data, analyzing results, and deriving inferences) greatly enhances the quality of qualitative research.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vivacious and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. In addition, not surprisingly, owing to its evolving and varied features, no consensus has been reached to date. Researchers have raised various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts. Hence, they are likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards to all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria for evaluating qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ and SRQR) to interrogate their applicability to different contexts (along with their epistemological ramifications).

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.

Frequent debates on the use of evaluative criteria are required to resolve some as yet unresolved issues (including the applicability of a single set of criteria across multidisciplinary contexts). Such debates would not only benefit the community of qualitative researchers themselves, but would primarily help augment the well-being and vivacity of the entire discipline.

To conclude, I speculate that the criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialogue and debate (about criteria for excellent qualitative research and the underpinnings of the discipline more broadly) and thereby help improve the quality of qualitative studies. Further, I anticipate that this review will assist researchers in reflecting on the quality of their own research and in substantiating their research designs, and will help reviewers review qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) through the cohesive efforts of qualitative researchers from different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework of guiding principles paves the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the future prospects of socio-techno-educational research.

References

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.


Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.


Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (1990). The paradigm dialog. In Alternative paradigms conference, mar, 1989, Indiana u, school of education, San Francisco, ca, us . Sage Publications, Inc.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, US.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.


Funding: Open access funding provided by TU Wien (TUW).

Author information

Authors and Affiliations

Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Drishti Yadav


Corresponding author

Correspondence to Drishti Yadav .

Ethics declarations

Conflict of interest.

The author declares no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0


Accepted : 28 August 2021

Published : 18 September 2021

Issue Date : December 2022

DOI : https://doi.org/10.1007/s40299-021-00619-0


Keywords: Qualitative research · Evaluative criteria


Chapter 21: Qualitative evidence

Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas

Key Points:

  • A qualitative evidence synthesis (commonly referred to as QES) can add value by providing decision makers with additional evidence to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences.
  • A qualitative evidence synthesis can be undertaken and integrated with a corresponding intervention review, or undertaken using a mixed-method design that integrates a qualitative evidence synthesis with an intervention review in a single protocol.
  • Methods for qualitative evidence synthesis are complex and continue to develop. Authors should always consult current methods guidance at methods.cochrane.org/qi .

This chapter should be cited as: Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, Garside R, Hannes K, Pantoja T, Thomas J. Chapter 21: Qualitative evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

21.1 Introduction

The potential contribution of qualitative evidence to decision making is well-established (Glenton et al 2016, Booth 2017, Carroll 2017). A synthesis of qualitative evidence can inform understanding of how interventions work by:

  • increasing understanding of a phenomenon of interest (e.g. women’s conceptualization of what good antenatal care looks like);
  • identifying associations between the broader environment within which people live and the interventions that are implemented;
  • increasing understanding of the values and attitudes toward, and experiences of, health conditions and interventions by those who implement or receive them; and
  • providing a detailed understanding of the complexity of interventions and implementation, and their impacts and effects on different subgroups of people and the influence of individual and contextual characteristics within different contexts.

The aim of this chapter is to provide authors (who already have experience of undertaking qualitative research and qualitative evidence synthesis) with additional guidance on undertaking a qualitative evidence synthesis that is subsequently integrated with an intervention review. This chapter draws upon guidance presented in a series of six papers published in the Journal of Clinical Epidemiology (Cargo et al 2018, Flemming et al 2018, Harden et al 2018, Harris et al 2018, Noyes et al 2018a, Noyes et al 2018b) and from a further World Health Organization series of papers published in BMJ Global Health, which extend guidance to qualitative evidence syntheses conducted within a complex intervention and health systems and decision making context (Booth et al 2019a, Booth et al 2019b, Flemming et al 2019, Noyes et al 2019, Petticrew et al 2019). The qualitative evidence synthesis and integration methods described in this chapter supplement Chapter 17 on methods for addressing intervention complexity. Authors undertaking qualitative evidence syntheses should consult these papers and chapters for more detailed guidance.

21.2 Designs for synthesizing and integrating qualitative evidence with intervention reviews

There are two main designs for synthesizing qualitative evidence with evidence of the effects of interventions:

  • Sequential reviews: where one or more existing intervention review(s) has been published on a similar topic, it is possible to do a sequential qualitative evidence synthesis and then integrate its findings with those of the intervention review to create a mixed-method review. For example, Lewin and colleagues (Lewin et al 2010) and Glenton and colleagues (Glenton et al 2013) undertook sequential reviews of lay health worker programmes using separate protocols and then integrated the findings.
  • Convergent mixed-methods review: where no pre-existing intervention review exists, it is possible to do a full convergent ‘mixed-methods’ review where the trials and qualitative evidence are synthesized separately, creating opportunities for them to ‘speak’ to each other during development, and then integrated within a third synthesis. For example, Hurley and colleagues (Hurley et al 2018) undertook an intervention review and a qualitative evidence synthesis following a single protocol.

It is increasingly common for sequential and convergent reviews to be conducted by some or all of the same authors; if not, it is critical that authors working on the qualitative evidence synthesis and intervention review work closely together to identify and create sufficient points of integration to enable a third synthesis that integrates the two reviews, or the conduct of a mixed-method review (Noyes et al 2018a) (see Figure 21.2.a ). This consideration also applies where an intervention review has already been published and there is no prior relationship with the qualitative evidence synthesis authors. We recommend that at least one joint author works across both reviews to facilitate development of the qualitative evidence synthesis protocol, conduct of the synthesis, and subsequent integration of the qualitative evidence synthesis with the intervention review within a mixed-methods review.

Figure 21.2.a Considering context and points of contextual integration with the intervention review or within a mixed-method review


21.3 Defining qualitative evidence and studies

We use the term ‘qualitative evidence synthesis’ to acknowledge that other types of qualitative evidence (or data) can potentially enrich a synthesis, such as narrative data derived from qualitative components of mixed-method studies or free text from questionnaire surveys. We would not, however, consider a questionnaire survey to be a qualitative study, and qualitative data from questionnaires should not usually be privileged over relevant evidence from qualitative studies. When thinking about qualitative evidence, specific terminology is used to describe the level of conceptual and contextual detail. Qualitative evidence that includes higher or lower levels of conceptual detail is described as ‘rich’ or ‘poor’. The associated terms ‘thick’ or ‘thin’ are best used to refer to higher or lower levels of contextual detail. Review authors can potentially develop a stronger synthesis using rich and thick qualitative evidence but, in reality, they will identify diverse conceptually rich and poor and contextually thick and thin studies. Developing a clear picture of the type and conceptual richness of available qualitative evidence strongly influences the choice of methodology and subsequent methods. We recommend that authors undertake scoping searches to determine the type and richness of available qualitative evidence before selecting their methodology and methods.

A qualitative study is a research study that uses a qualitative method of data collection and analysis. Review authors should include the studies that enable them to answer their review question. When selecting qualitative studies in a review about intervention effects, two types of qualitative study are available: those that collect data from the same participants as the included trials, known as ‘trial siblings’; and those that address relevant issues about the intervention, but as separate items of research – not connected to any included trials. Both can provide useful information, with trial sibling studies obviously closer in terms of their precise contexts to the included trials (Moore et al 2015), and non-sibling studies possibly contributing perspectives not present in the trials (Noyes et al 2016b).

21.4 Planning a qualitative evidence synthesis linked to an intervention review

The Cochrane Qualitative and Implementation Methods Group (QIMG) website provides links to practical guidance and key steps for authors who are considering a qualitative evidence synthesis ( methods.cochrane.org/qi ). The RETREAT framework outlines seven key considerations that review authors should systematically work through when planning a review (Booth et al 2016, Booth et al 2018) (Box 21.4.a). Flemming and colleagues (Flemming et al 2019) further explain how to factor in such considerations when undertaking a qualitative evidence synthesis within a complex intervention and decision-making context where complexity is an important consideration.

Box 21.4.a RETREAT considerations when selecting an appropriate method for qualitative synthesis

21.5 Question development

The review question is critical to development of the qualitative evidence synthesis (Harris et al 2018). Question development affords a key point for integration with the intervention review. Complementary guidance supports novel thinking about question development, application of question development frameworks and the types of questions to be addressed by a synthesis of qualitative evidence (Cargo et al 2018, Harris et al 2018, Noyes et al 2018a, Booth et al 2019b, Flemming et al 2019).

Research questions for quantitative reviews are often mapped using structures such as PICO. Some qualitative reviews adopt this structure, or use an adapted variation of it (e.g. SPICE (Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation) or SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) (Cooke et al 2012)). Booth and colleagues (Booth et al 2019b) propose an extended question framework (PerSPecTIF) to describe both the wider context and the immediate setting, which is particularly suited to qualitative evidence synthesis and complex intervention reviews (see Table 21.5.a).

Detailed attention to the question and specification of context at an early stage is critical to many aspects of qualitative synthesis (see Petticrew et al (2019) and Booth et al (2019a) for a more detailed discussion). Specifying the context enables a review team to identify opportunities for integration with the intervention review, or opportunities for maximizing the use and interpretation of evidence as a mixed-method review progresses (see Figure 21.2.a), and informs both the interpretation of the observed effects and the assessment of the strength of the evidence available for addressing the review question (Noyes et al 2019). Subsequent application of GRADE-CERQual (Lewin et al 2015, Lewin et al 2018), an approach to assess confidence in synthesized qualitative findings, requires further specification of context in the review question.

Table 21.5.a PerSPecTIF question formulation framework for qualitative evidence syntheses (Booth et al 2019b). Reproduced with permission of BMJ Publishing Group

21.6 Questions exploring intervention implementation

Additional guidance is available on formulation of questions to understand and assess intervention implementation (Cargo et al 2018). A strong understanding of how an intervention is thought to work, and how it should be implemented in practice, will enable a critical consideration of whether any observed lack of effect might be due to a poorly conceptualized intervention (i.e. theory failure) or a poor intervention implementation (i.e. implementation failure). Heterogeneity needs to be considered for both the underlying theory and the ways in which the intervention was implemented. An a priori scoping review (Levac et al 2010), concept analysis (Walker and Avant 2005), critical review (Grant and Booth 2009) or textual narrative synthesis (Barnett-Page and Thomas 2009) can be undertaken to classify interventions and/or to identify the programme theory, logic model or implementation measures and processes. The intervention Complexity Assessment Tool for Systematic Reviews iCAT_SR (Lewin et al 2017) may be helpful in classifying complexity in interventions and developing associated questions.

An existing intervention model or framework may be used within a new topic or context. The ‘best-fit framework’ approach to synthesis (Carroll et al 2013) can be used to establish the degree to which the source context (from where the framework was derived) resembles the new target context (see Figure 21.2.a). In the absence of an explicit programme theory and detail of how implementation relates to outcomes, an a priori realist review, meta-ethnography or meta-interpretive review can be undertaken (Booth et al 2016). For example, Downe and colleagues (Downe et al 2016) undertook an initial meta-ethnography review to develop an understanding of the outcomes of importance to women receiving antenatal care.

However, these additional activities are very resource-intensive and are only recommended when the review team has sufficient resources to supplement the planned qualitative evidence syntheses with an additional explanatory review. Where resources are less plentiful a review team could engage with key stakeholders to articulate and develop programme theory (Kelly et al 2017, De Buck et al 2018).

21.6.1 Using logic models and theories to support question development

Review authors can develop a more comprehensive representation of question features through use of logic models, programme theories, theories of change, templates and pathways (Anderson et al 2011, Kneale et al 2015, Noyes et al 2016a) (see also Chapter 17, Section 17.2.1  and Chapter 2, Section 2.5.1 ). These different forms of social theory can be used to visualize and map the research question, its context, components, influential factors and possible outcomes (Noyes et al 2016a, Rehfuess et al 2018).

21.6.2 Stakeholder engagement

Finally, review authors need to engage stakeholders, including consumers affected by the health issue and interventions, or likely users of the review from clinical or policy contexts. From the preparatory stage, this consultation can ensure that the review scope and question is appropriate and resulting products address implementation concerns of decision makers (Kelly et al 2017, Harris et al 2018).

21.7 Searching for qualitative evidence

In comparison with identification of quantitative studies (see also Chapter 4 ), procedures for retrieval of qualitative research remain relatively under-developed. Particular challenges in retrieval are associated with non-informative titles and abstracts, diffuse terminology, poor indexing and the overwhelming prevalence of quantitative studies within data sources (Booth et al 2016).

Principal considerations when planning a search for qualitative studies, and the evidence that underpins them, have been characterized using a 7S framework from Sampling and Sources through Structured questions, Search procedures, Strategies and filters and Supplementary strategies to Standards for Reporting (Booth et al 2016).

A key decision, aligned to the purpose of the qualitative evidence synthesis is whether to use the comprehensive, exhaustive approaches that characterize quantitative searches or whether to use purposive sampling that is more sensitive to the qualitative paradigm (Suri 2011). The latter, which is used when the intent is to generate an interpretative understanding, for example, when generating theory, draws upon a versatile toolkit that includes theoretical sampling, maximum variation sampling and intensity sampling. Sources of qualitative evidence are more likely to include book chapters, theses and grey literature reports than standard quantitative study reports, and so a search strategy should place extra emphasis on these sources. Local databases may be particularly valuable given the criticality of context (Stansfield et al 2012).

Another key decision is whether to use study filters or simply to conduct a topic-based search where qualitative studies are identified at the study selection stage. Search filters for qualitative studies lack the specificity of their quantitative counterparts. Nevertheless, filters may facilitate efficient retrieval by study type (e.g. qualitative (Rogers et al 2018) or mixed methods (El Sherif et al 2016) or by perspective (e.g. patient preferences (Selva et al 2017)) particularly where the quantitative literature is overwhelmingly large and thus increases the number needed to retrieve. Poor indexing of qualitative studies makes citation searching (forward and backward) and the Related Articles features of electronic databases particularly useful (Cooper et al 2017). Further guidance on searching for qualitative evidence is available (Booth et al 2016, Noyes et al 2018a). The CLUSTER method has been proposed as a specific named method for tracking down associated or sibling reports (Booth et al 2013). The BeHEMoTh approach has been developed for identifying explicit use of theory (Booth and Carroll 2015).

21.7.1 Searching for process evaluations and implementation evidence

Four potential approaches are available to identify process evaluations.

  • Identify studies at the point of study selection rather than through tailored search strategies. This involves conducting a sensitive topic search without any study design filter (Harden et al 1999), and identifying all study designs of interest during the screening process. This approach can be feasible when a review question involves multiple publication types (e.g. randomized trial, qualitative research and economic evaluations), which then do not require separate searches.  
  • Restrict included process evaluations to those conducted within randomized trials, which can be identified using standard search filters (see Chapter 4, Section 4.4.7 ). This method relies on reports of process evaluations also describing the surrounding randomized trial in enough detail to be identified by the search filter.  
  • Use unevaluated filter terms (such as ‘process evaluation’, ‘program(me) evaluation’, ‘feasibility study’, ‘implementation’ or ‘proof of concept’ etc) to retrieve process evaluations or implementation data. Approaches using strings of terms associated with the study type or purpose are considered experimental. There is a need to develop and test such filters. It is likely that such filters may be derived from the study type (process evaluation), the data type (process data) or the application (implementation) (Robbins et al 2011).  
  • Minimize reliance on topic-based searching and rely on citations-based approaches to identify linked reports, published or unpublished, of a particular study (Booth et al 2013) which may provide implementation or process data (Bonell et al 2013).

More detailed guidance is provided by Cargo and colleagues (Cargo et al 2018).

21.8 Assessing methodological strengths and limitations of qualitative studies

Assessment of the methodological strengths and limitations of qualitative research remains contested within the primary qualitative research community (Garside 2014). However, within systematic reviews and evidence syntheses it is considered essential, even when studies are not to be excluded on the basis of quality (Carroll et al 2013). One review found almost 100 appraisal tools for assessing primary qualitative studies (Munthe-Kaas et al 2019). Limitations included a focus on reporting rather than conduct and the presence of items that are separate from, or tangential to, consideration of study quality (e.g. ethical approval).

Authors should distinguish between assessment of study quality and assessment of risk of bias by focusing on assessment of methodological strengths and limitations as a marker of study rigour (what we term a ‘risk to rigour’ approach (Noyes et al 2019)). In the absence of a definitive risk to rigour tool, we recommend that review authors select from published, commonly used and validated tools that focus on the assessment of the methodological strengths and limitations of qualitative studies (see Box 21.8.a ). Pragmatically, we consider a ‘validated’ tool as one that has been subjected to evaluation. Issues such as inter-rater reliability are afforded less importance given that identification of complementary or conflicting perspectives on risk to rigour is considered more useful than achievement of consensus per se (Noyes et al 2019).

The CASP tool for qualitative research (as one example) maps onto the domains in Box 21.8.a (CASP 2013). Tools not meeting the criterion of focusing on assessment of methodological strengths and limitations include those that integrate assessment of the quality of reporting (such as scoring of the title and abstract, etc) into an overall assessment of methodological strengths and limitations. As with other risk of bias assessment tools, we strongly recommend against the application of scores to domains or calculation of total quality scores. We encourage review authors to discuss the studies and their assessments of ‘risk to rigour’ for each paper and how the study’s methodological limitations may affect review findings (Noyes et al 2019). We further advise that qualitative ‘sensitivity analysis’, exploring the robustness of the synthesis and its vulnerability to methodologically limited studies, be routinely applied regardless of the review authors’ overall confidence in synthesized findings (Carroll et al 2013). Evidence suggests that qualitative sensitivity analysis is equally advisable for mixed methods studies from which the qualitative component is extracted (Verhage and Boels 2017).

Box 21.8.a Example domains that provide an assessment of methodological strengths and limitations to determine study rigour

Adapted from Noyes et al (2019) and Alvesson and Sköldberg (2009)

21.8.1 Additional assessment of methodological strengths and limitations of process evaluation and intervention implementation evidence

Few assessment tools explicitly address rigour in process evaluation or implementation evidence. For qualitative primary studies, the 8-item process evaluation tool developed by the EPPI-Centre (Rees et al 2009, Shepherd et al 2010) can be used to supplement tools selected to assess methodological strengths and limitations and risks to rigour in primary qualitative studies. One of these items, a question on usefulness (framed as ‘how well the intervention processes were described and whether or not the process data could illuminate why or how the interventions worked or did not work’ ) offers a mechanism for exploring process mechanisms (Cargo et al 2018).

21.9 Selecting studies to synthesize

Decisions about inclusion or exclusion of studies can be more complex in qualitative evidence syntheses than in reviews of trials that aim to include all relevant studies. Decisions on whether to include all studies or to select a sample of studies depend on a range of general and review-specific criteria that Noyes and colleagues (Noyes et al 2019) outline in detail. The number of qualitative studies selected needs to be consistent with a manageable synthesis, and the contexts of the included studies should enable integration with the trials in the effectiveness analysis (see Figure 21.2.a). The guiding principle is transparency in the reporting of all decisions and their rationale.

21.10 Selecting a qualitative evidence synthesis and data extraction method

Authors will typically find that they cannot select an appropriate synthesis method until the pool of available qualitative evidence has been thoroughly scoped. Flexible options concerning choice of method may need to be articulated in the protocol.

The INTEGRATE-HTA guidance on selecting methodology and methods for qualitative evidence synthesis and health technology assessment offers a useful starting point when selecting a method of synthesis (Booth et al 2016, Booth et al 2018). Some methods are designed primarily to develop findings at a descriptive level and thus feed directly into lines of action for policy and practice. Others hold the capacity to develop new theory (e.g. meta-ethnography and theory-building approaches to thematic synthesis). Noyes and colleagues (Noyes et al 2019) and Flemming and colleagues (Flemming et al 2019) elaborate on key issues for consideration when selecting a method that is particularly suited to a Cochrane Review and decision-making context (see Table 21.10.a). Three qualitative evidence synthesis methods (thematic synthesis, framework synthesis and meta-ethnography) are recommended to produce syntheses that can subsequently be integrated with an intervention review or analysis.

Table 21.10.a Recommended methods for undertaking a qualitative evidence synthesis for subsequent integration with an intervention review, or as part of a mixed-method review (adapted from an original source developed by convenors (Flemming et al 2019, Noyes et al 2019))

21.11 Data extraction

Qualitative findings may take the form of quotations from participants, subthemes and themes identified by the study’s authors, explanations, hypotheses or new theory, or observational excerpts and author interpretations of these data (Sandelowski and Barroso 2002). Findings may be presented as a narrative, or summarized and displayed as tables, infographics or logic models and potentially located in any part of the paper (Noyes et al 2019).

Methods for qualitative data extraction vary according to the synthesis method selected. Data extraction is not sequential and linear; often, it involves moving backwards and forwards between review stages. Review teams will need regular meetings to discuss and further interrogate the evidence and thereby achieve a shared understanding. It may be helpful to draw on a key stakeholder group to help in interpreting the evidence and in formulating key findings. Additional approaches (such as subgroup analysis) can be used to explore evidence from specific contexts further.

Irrespective of the review type and choice of synthesis method, we consider it best practice to extract detailed contextual and methodological information on each study and to report this information in a table of ‘Characteristics of included studies’ (see Table 21.11.a ). The Template for Intervention Description and Replication (TIDieR) checklist (Hoffmann et al 2014) and the iCAT_SR tool (Lewin et al 2017) may help with specifying key information for extraction. Review authors must ensure that they preserve the context of the primary study data during the extraction and synthesis process to prevent misinterpretation of primary studies (Noyes et al 2019).

Table 21.11.a Contextual and methodological information for inclusion within a table of ‘Characteristics of included studies’. From Noyes et al (2019). Reproduced with permission of BMJ Publishing Group

Noyes and colleagues (Noyes et al 2019) provide additional guidance and examples of the various methods of data extraction. It is usual for review authors to select one method. In summary, extraction methods can be grouped as follows.

  • Using a bespoke universal, standardized or adapted data extraction template. Review authors can develop their own review-specific data extraction template, or select a generic data extraction template by study type (e.g. templates developed by the National Institute for Health and Clinical Excellence (National Institute for Health Care Excellence 2012)).
  • Using an a priori theory or predetermined framework to extract data. Framework synthesis, and its subvariant the ‘Best Fit’ Framework approach, involve extracting data from primary studies against an a priori framework in order to better understand a phenomenon of interest (Carroll et al 2011, Carroll et al 2013). For example, Glenton and colleagues (Glenton et al 2013) extracted data against a modified SURE Framework (2011) to synthesize factors affecting the implementation of lay health worker interventions. The SURE framework enumerates possible factors that may influence the implementation of health system interventions (SURE (Supporting the Use of Research Evidence) Collaboration 2011, Glenton et al 2013). Use of the ‘PROGRESS’ framework (place of residence, race/ethnicity/culture/language, occupation, gender/sex, religion, education, socioeconomic status, and social capital) also helps to ensure that data extraction maintains an explicit equity focus (O'Neill et al 2014). A logic model can also be used as a framework for data extraction.
  • Using a software program to code original studies inductively. A wide range of software products has been developed by systematic review organizations (such as EPPI-Reviewer (Thomas et al 2010)). Most software for the analysis of primary qualitative data – such as NVivo ( www.qsrinternational.com/nvivo/home ) and others – can be used to code studies in a systematic review (Houghton et al 2017). For example, one method of data extraction and thematic synthesis involves coding the original studies in a software program to build inductive descriptive themes and a theoretical explanation of phenomena of interest (Thomas and Harden 2008). Thomas and Harden (2008) provide a worked example of coding and developing a new understanding of children’s choices and motivations for eating fruit and vegetables from included primary studies; a minimal illustrative sketch of this kind of inductive coding follows this list.
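To make the idea of inductive coding concrete, the following minimal sketch (in Python) simulates how coded excerpts can be rolled up into descriptive themes and cross-tabulated by study. It is an illustration only, not the Thomas and Harden method or an NVivo/EPPI-Reviewer workflow: the studies, excerpts, keyword rules and themes are hypothetical, and real coding is interpretive and iterative rather than keyword matching.

```python
# Minimal sketch of rolling coded excerpts up into descriptive themes for a
# thematic synthesis. Studies, excerpts, keyword rules and themes are all
# hypothetical; real coding is an interpretive process done in dedicated
# software, not a keyword match.
from collections import defaultdict

# Extracted findings: (study_id, verbatim finding or author interpretation)
findings = [
    ("Smith 2015", "Children said fruit was boring compared with snacks"),
    ("Lee 2017",   "Parents modelling vegetable eating encouraged children"),
    ("Smith 2015", "Children chose fruit when it was cut up and easy to eat"),
    ("Diaz 2019",  "School rewards made trying new vegetables feel like a game"),
]

# Open codes assigned during line-by-line coding (simulated here by keywords)
code_rules = {
    "taste and preference": ["boring", "snacks"],
    "role of parents":      ["parents", "modelling"],
    "convenience":          ["cut up", "easy"],
    "incentives":           ["rewards", "game"],
}

# Descriptive themes built by grouping related open codes
themes = {
    "Individual factors": ["taste and preference", "convenience"],
    "Social environment": ["role of parents", "incentives"],
}

# Code each finding, then roll the codes up into themes per study
study_themes = defaultdict(set)
for study, text in findings:
    lowered = text.lower()
    for code, keywords in code_rules.items():
        if any(keyword in lowered for keyword in keywords):
            for theme, codes in themes.items():
                if code in codes:
                    study_themes[study].add(theme)

for study, assigned in sorted(study_themes.items()):
    print(f"{study}: {', '.join(sorted(assigned))}")
```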

21.12 Assessing the confidence in qualitative synthesized findings

The GRADE system has long featured in assessing the certainty of quantitative findings, and application of its qualitative counterpart, GRADE-CERQual, is recommended for Cochrane qualitative evidence syntheses (Lewin et al 2015). CERQual has four components (relevance, methodological limitations, adequacy and coherence) which are used to formulate an overall assessment of confidence in the synthesized qualitative finding. Guidance on its components and reporting requirements has been published in a series in Implementation Science (Lewin et al 2018).
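By way of illustration only, a review team might keep a structured record of its CERQual component assessments for each synthesized finding, for example as in the sketch below (Python). The finding and judgements shown are hypothetical, and the overall confidence level remains a team judgement informed by the four components rather than something that can be computed.

```python
# Hypothetical record of GRADE-CERQual component assessments for a single
# synthesized finding. Overall confidence is a judgement informed by the
# four components; it is not derived algorithmically.
from dataclasses import dataclass

@dataclass
class CERQualAssessment:
    finding: str
    methodological_limitations: str
    coherence: str
    adequacy: str
    relevance: str
    overall_confidence: str  # high / moderate / low / very low
    explanation: str

example = CERQualAssessment(
    finding="Lay health workers valued regular, supportive supervision",
    methodological_limitations="minor concerns",
    coherence="no or very minor concerns",
    adequacy="moderate concerns (thin data from two settings)",
    relevance="minor concerns",
    overall_confidence="moderate",
    explanation="Downgraded for adequacy; other components raised only minor concerns.",
)
print(example)
```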

21.13 Methods for integrating the qualitative evidence synthesis with an intervention review

A range of methods and tools is available for data integration or mixed-method synthesis (Harden et al 2018, Noyes et al 2019). As noted at the beginning of this chapter, review authors can integrate a qualitative evidence synthesis with an existing intervention review published on a similar topic (sequential approach), or conduct a new intervention review and qualitative evidence syntheses in parallel before integration (convergent approach). Irrespective of whether the qualitative synthesis is sequential or convergent to the intervention review, we recommend that qualitative and quantitative evidence be synthesized separately using appropriate methods before integration (Harden et al 2018). The scope for integration can be more limited with a pre-existing intervention review unless review authors have access to the data underlying the intervention review report.

Harden and colleagues and Noyes and colleagues outline the following methods and tools for integration with an intervention review (Harden et al 2018, Noyes et al 2019):

  • Juxtaposing findings in a matrix. Juxtaposition is driven by the findings from the qualitative evidence synthesis (e.g. intervention components related to the acceptability or feasibility of the interventions), and these findings form one side of the matrix. Findings on intervention effects (e.g. improves outcome, no difference in outcome, uncertain effects) form the other side of the matrix. Quantitative studies are grouped according to findings on intervention effects and the presence or absence of features specified by the hypotheses generated from the qualitative synthesis (Candy et al 2011). Observed patterns in the matrix are used to explain differences in the findings of the quantitative studies and to identify gaps in research (van Grootel et al 2017) (see, for example, Ames et al 2017, Munabi-Babigumira et al 2017, Hurley et al 2018); a minimal illustrative sketch of such a matrix appears after this list.
  • Analysing programme theory. Theories articulating how interventions are expected to work are analysed. Findings from quantitative studies testing the effects of interventions, and from qualitative and process evaluation evidence, are used together to examine how the theories work in practice (Greenhalgh et al 2007). The value of different theories is assessed, or new/revised theory is developed. Factors that enhance or reduce intervention effectiveness are also identified.
  • Using logic models or other types of conceptual framework. A logic model (Glenton et al 2013) or other type of conceptual framework that represents the processes by which an intervention produces change provides a common scaffold for integrating findings across different types of evidence (Booth and Carroll 2015). Frameworks can be specified a priori from the literature or through stakeholder engagement, or newly developed during the review. Findings from quantitative studies testing the effects of interventions and those from qualitative evidence are used to develop and/or further refine the model.
  • Testing hypotheses derived from syntheses of qualitative evidence. Quantitative studies are grouped according to the presence or absence of the proposition specified by the hypotheses to be tested, and subgroup analysis is used to explore differential findings on the effects of interventions (Thomas et al 2004).
  • Qualitative comparative analysis (QCA). Findings from a qualitative synthesis are used to identify the range of features that are important for successful interventions and the mechanisms through which these features operate. A QCA then tests whether or not the features are associated with effective interventions (Kahwati et al 2016). The analysis unpicks multiple potential pathways to effectiveness, accommodating scenarios where the same intervention feature is associated with both effective and less effective interventions, depending on context. QCA offers potential for use in integration, although, unlike the other methods and tools presented here, it does not yet have sufficient methodological guidance available. However, exemplar reviews using QCA are available (Thomas et al 2014, Harris et al 2015, Kahwati et al 2016).
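To make the matrix approach concrete, the sketch below (Python with pandas) juxtaposes hypotheses generated from a qualitative synthesis against groups of trials classified by their findings on intervention effects. The trials, intervention features and effect categories are invented for illustration.

```python
# Hypothetical juxtaposition matrix: rows are hypotheses derived from the
# qualitative evidence synthesis, columns group trials by their findings on
# intervention effects, and each cell lists the trials whose interventions
# contain the hypothesized feature.
import pandas as pd

trials = pd.DataFrame([
    {"trial": "Trial A", "effect": "improves outcome",         "peer_delivery": True,  "tailored_materials": True},
    {"trial": "Trial B", "effect": "no difference in outcome", "peer_delivery": False, "tailored_materials": True},
    {"trial": "Trial C", "effect": "improves outcome",         "peer_delivery": True,  "tailored_materials": False},
    {"trial": "Trial D", "effect": "uncertain effects",        "peer_delivery": False, "tailored_materials": False},
])

hypotheses = {
    "Peer delivery increases acceptability": "peer_delivery",
    "Tailored materials improve engagement": "tailored_materials",
}

matrix = pd.DataFrame(index=list(hypotheses), columns=sorted(trials["effect"].unique()))
for label, feature in hypotheses.items():
    for effect, group in trials.groupby("effect"):
        with_feature = group.loc[group[feature], "trial"].tolist()
        matrix.loc[label, effect] = ", ".join(with_feature) or "-"

print(matrix)
```

Patterns in such a matrix (for example, trials with peer delivery clustering in the ‘improves outcome’ column) can then be examined more formally using subgroup analysis or QCA, as described above.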

Review authors can use the above methods in combination (e.g. patterns observed through juxtaposing findings within a matrix can be tested using subgroup analysis or QCA). Analysing programme theory, using logic models and applying QCA require members of the review team with specific skills in these methods. Subgroup analysis and QCA are not suitable when only limited evidence is available (Harden et al 2018, Noyes et al 2019). (See also Chapter 17 on intervention complexity.)

21.14 Reporting the protocol and qualitative evidence synthesis

Reporting standards and tools designed for intervention reviews (such as Cochrane’s MECIR standards ( http://methods.cochrane.org/mecir ) or the PRISMA Statement (Liberati et al 2009)) may not be appropriate for qualitative evidence syntheses or an integrated mixed-method review. Additional guidance on how to choose, adapt or create a hybrid reporting tool is provided as a 5-point ‘decision flowchart’ ( Figure 21.14.a ) (Flemming et al 2018). Review authors should consider whether a specific set of reporting guidance is available (e.g. eMERGe for meta-ethnographies (France et al 2015)); whether generic guidance (e.g. ENTREQ (Tong et al 2012)) is suitable; or whether additional checklists or tools are appropriate for reporting a specific aspect of the review.

Figure 21.14.a Decision flowchart for choice of reporting approach for syntheses of qualitative, implementation or process evaluation evidence (Flemming et al 2018). Reproduced with permission of Elsevier


21.15 Chapter information

Authors: Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas

Acknowledgements: This chapter replaces Chapter 20 in the first edition of this Handbook (2008) and subsequent Version 5.2. We would like to thank the previous Chapter 20 authors Jennie Popay and Alan Pearson. Elements of this chapter draw on previous supplemental guidance produced by the Cochrane Qualitative and Implementation Methods Group Convenors, to which Simon Lewin contributed.

Funding: JT is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

21.16 References

Ames HM, Glenton C, Lewin S. Parents' and informal caregivers' views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence. Cochrane Database of Systematic Reviews 2017; 2 : CD011787.

Anderson LM, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P. Using logic models to capture complexity in systematic reviews. Research Synthesis Methods 2011; 2 : 33-42.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Medical Research Methodology 2009; 9 : 59.

Benoot C, Hannes K, Bilsen J. The use of purposeful sampling in a qualitative evidence synthesis: a worked example on sexual adjustment to a cancer trajectory. BMC Medical Research Methodology 2016; 16 : 21.

Bonell C, Jamal F, Harden A, Wells H, Parry W, Fletcher A, Petticrew M, Thomas J, Whitehead M, Campbell R, Murphy S, Moore L. Public Health Research. Systematic review of the effects of schools and school environment interventions on health: evidence mapping and synthesis . Southampton (UK): NIHR Journals Library; 2013.

Booth A, Harris J, Croot E, Springett J, Campbell F, Wilkins E. Towards a methodology for cluster searching to provide conceptual and contextual "richness" for systematic reviews of complex interventions: case study (CLUSTER). BMC Medical Research Methodology 2013; 13 : 118.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of 'best fit' framework synthesis for studies of improvement in healthcare. BMJ Quality and Safety 2015; 24 : 700-708.

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessment for complex interventions 2016. https://www.integrate-hta.eu/wp-content/uploads/2016/02/Guidance-on-choosing-qualitative-evidence-synthesis-methods-for-use-in-HTA-of-complex-interventions.pdf

Booth A. Qualitative evidence synthesis. In: Facey K, editor. Patient involvement in Health Technology Assessment . Singapore: Springer; 2017. p. 187-199.

Booth A, Noyes J, Flemming K, Gehardus A, Wahlster P, Jan van der Wilt G, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. Journal of Clinical Epidemiology 2018; 99 : 41-52.

Booth A, Moore G, Flemming K, Garside R, Rollins N, Tuncalp Ö, Noyes J. Taking account of context in systematic reviews and guidelines considering a complexity perspective. BMJ Global Health 2019a; 4 : e000840.

Booth A, Noyes J, Flemming K, Moore G, Tuncalp Ö, Shakibazadeh E. Formulating questions to address the acceptability and feasibility of complex interventions in qualitative evidence synthesis. BMJ Global Health 2019b; 4 : e001107.

Candy B, King M, Jones L, Oliver S. Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Medical Research Methodology 2011; 11 : 124.

Cargo M, Harris J, Pantoja T, Booth A, Harden A, Hannes K, Thomas J, Flemming K, Garside R, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 4: methods for assessing evidence on intervention implementation. Journal of Clinical Epidemiology 2018; 97 : 59-69.

Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Medical Research Methodology 2011; 11 : 29.

Carroll C, Booth A, Leaviss J, Rick J. "Best fit" framework synthesis: refining the method. BMC Medical Research Methodology 2013; 13 : 37.

Carroll C. Qualitative evidence synthesis to improve implementation of clinical guidelines. BMJ 2017; 356 : j80.

CASP. Making sense of evidence: 10 questions to help you make sense of qualitative research: Public Health Resource Unit, England; 2013. http://media.wix.com/ugd/dded87_29c5b002d99342f788c6ac670e49f274.pdf .

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research 2012; 22 : 1435-1443.

Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review. Systematic Reviews 2017; 6 : 234.

De Buck E, Hannes K, Cargo M, Van Remoortel H, Vande Veegaete A, Mosler HJ, Govender T, Vandekerckhove P, Young T. Engagement of stakeholders in the development of a Theory of Change for handwashing and sanitation behaviour change. International Journal of Environmental Research and Public Health 2018; 28 : 8-22.

Dixon-Woods M. Using framework-based synthesis for conducting reviews of qualitative studies. BMC Medicine 2011; 9 : 39.

Downe S, Finlayson K, Tuncalp, Metin Gulmezoglu A. What matters to women: a systematic scoping review to identify the processes and outcomes of antenatal care provision that are important to healthy pregnant women. BJOG: An International Journal of Obstetrics and Gynaecology 2016; 123 : 529-539.

El Sherif R, Pluye P, Gore G, Granikov V, Hong QN. Performance of a mixed filter to identify relevant studies for mixed studies reviews. Journal of the Medical Library Association 2016; 104 : 47-51.

Flemming K, Booth A, Hannes K, Cargo M, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 6: reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses. Journal of Clinical Epidemiology 2018; 97 : 79-85.

Flemming K, Booth A, Garside R, Tuncalp O, Noyes J. Qualitative evidence synthesis for complex interventions and guideline development: clarification of the purpose, designs and relevant methods. BMJ Global Health 2019; 4 : e000882.

France EF, Ring N, Noyes J, Maxwell M, Jepson R, Duncan E, Turley R, Jones D, Uny I. Protocol-developing meta-ethnography reporting guidelines (eMERGe). BMC Medical Research Methodology 2015; 15 : 103.

France EF, Cunningham M, Ring N, Uny I, Duncan EAS, Jepson RG, Maxwell M, Roberts RJ, Turley RL, Booth A, Britten N, Flemming K, Gallagher I, Garside R, Hannes K, Lewin S, Noblit G, Pope C, Thomas J, Vanstone M, Higginbottom GMA, Noyes J. Improving reporting of meta-ethnography: the eMERGe reporting guidance. BMC Medical Research Methodology 2019; 19 : 25.

Garside R. Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innovation: The European Journal of Social Science Research 2014; 27 : 67-79.

Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2013; 10 : CD010414.

Glenton C, Lewin S, Norris S. Chapter 15: Using evidence from qualitative research to develop WHO guidelines. In: Norris S, editor. World Health Organization Handbook for Guideline Development . 2nd. ed. Geneva: WHO; 2016.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal 2009; 26 : 91-108.

Greenhalgh T, Kristjansson E, Robinson V. Realist review to understand the efficacy of school feeding programmes. BMJ 2007; 335 : 858.

Harden A, Oakley A, Weston R. A review of the effectiveness and appropriateness of peer-delivered health promotion for young people. London: Institute of Education, University of London; 1999.

Harden A, Thomas J, Cargo M, Harris J, Pantoja T, Flemming K, Booth A, Garside R, Hannes K, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews. Journal of Clinical Epidemiology 2018; 97 : 70-78.

Harris JL, Booth A, Cargo M, Hannes K, Harden A, Flemming K, Garside R, Pantoja T, Thomas J, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis. Journal of Clinical Epidemiology 2018; 97 : 39-48.

Harris KM, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self management interventions for asthma in children and adolescents: a mixed methods systematic review (Protocol). Cochrane Database of Systematic Reviews 2015; 4 : CD011651.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, Altman DG, Barbour V, Macdonald H, Johnston M, Lamb SE, Dixon-Woods M, McCulloch P, Wyatt JC, Chan AW, Michie S. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014; 348 : g1687.

Houghton C, Murphy K, Meehan B, Thomas J, Brooker D, Casey D. From screening to synthesis: using nvivo to enhance transparency in qualitative evidence synthesis. Journal of Clinical Nursing 2017; 26 : 873-881.

Hurley M, Dickson K, Hallett R, Grant R, Hauari H, Walsh N, Stansfield C, Oliver S. Exercise interventions and patient beliefs for people with hip, knee or hip and knee osteoarthritis: a mixed methods review. Cochrane Database of Systematic Reviews 2018; 4 : CD010842.

Kahwati L, Jacobs S, Kane H, Lewis M, Viswanathan M, Golin CE. Using qualitative comparative analysis in a systematic review of a complex intervention. Systematic Reviews 2016; 5 : 82.

Kelly MP, Noyes J, Kane RL, Chang C, Uhl S, Robinson KA, Springs S, Butler ME, Guise JM. AHRQ series on complex intervention systematic reviews-paper 2: defining complexity, formulating scope, and questions. Journal of Clinical Epidemiology 2017; 90 : 11-18.

Kneale D, Thomas J, Harris K. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PloS One 2015; 10 : e0142187.

Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implementation Science 2010; 5 : 69.

Lewin S, Munabi-Babigumira S, Glenton C, Daniels K, Bosch-Capblanch X, van Wyk BE, Odgaard-Jensen J, Johansen M, Aja GN, Zwarenstein M, Scheel IB. Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database of Systematic Reviews 2010; 3 : CD004015.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Medicine 2015; 12 : e1001895.

Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, Reeves BC, Tugwell P, Hannes K, Rehfuess EA, Welch V, McKenzie JE, Burford B, Petkovic J, Anderson LM, Harris J, Noyes J. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Medical Research Methodology 2017; 17 : 76.

Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, Bohren MA, Tuncalp O, Colvin CJ, Garside R, Carlsen B, Langlois EV, Noyes J. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implementation Science 2018; 13 : 2.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009; 339 : b2700.

Moore G, Audrey S, Barker M, Bond L, Bonell C, Harderman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015; 350 : h1258.

Munabi-Babigumira S, Glenton C, Lewin S, Fretheim A, Nabudere H. Factors that influence the provision of intrapartum and postnatal care by skilled birth attendants in low- and middle-income countries: a qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2017; 11 : CD011558.

Munthe-Kaas H, Glenton C, Booth A, Noyes J, Lewin S. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Medical Research Methodology 2019; 19 : 113.

National Institute for Health Care Excellence. NICE Process and Methods Guides. Methods for the Development of NICE Public Health Guidance . London: National Institute for Health and Care Excellence (NICE); 2012.

Newton BJ, Rothlingova Z, Gutteridge R, LeMarchand K, Raphael JH. No room for reflexivity? Critical reflections following a systematic review of qualitative research. Journal of Health Psychology 2012; 17 : 866-885.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies . Newbury Park: Sage Publications, Inc; 1988.

Noyes J, Hendry M, Booth A, Chandler J, Lewin S, Glenton C, Garside R. Current use was established and Cochrane guidance on selection of social theories for systematic reviews of complex interventions was developed. Journal of Clinical Epidemiology 2016a; 75 : 78-92.

Noyes J, Hendry M, Lewin S, Glenton C, Chandler J, Rashidian A. Qualitative "trial-sibling" studies and "unrelated" qualitative studies contributed to complex intervention reviews. Journal of Clinical Epidemiology 2016b; 74 : 133-143.

Noyes J, Booth A, Flemming K, Garside R, Harden A, Lewin S, Pantoja T, Hannes K, Cargo M, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. Journal of Clinical Epidemiology 2018a; 97 : 49-58.

Noyes J, Booth A, Cargo M, Flemming K, Garside R, Hannes K, Harden A, Harris J, Lewin S, Pantoja T, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 1: introduction. Journal of Clinical Epidemiology 2018b; 97 : 35-38.

Noyes J, Booth A, Moore G, Flemming K, Tuncalp O, Shakibazadeh E. Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Global Health 2019; 4 (Suppl 1) : e000893.

O'Neill J, Tabish H, Welch V, Petticrew M, Pottie K, Clarke M, Evans T, Pardo Pardo J, Waters E, White H, Tugwell P. Applying an equity lens to interventions: using PROGRESS ensures consideration of socially stratifying factors to illuminate inequities in health. Journal of Clinical Epidemiology 2014; 67 : 56-64.

Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley A, Gabbay J, Stein K, Buchanan P, Gyte G. A multidimensional conceptual framework for analysing public involvement in health services research. Health Expectations 2008; 11 : 72-84.

Petticrew M, Knai C, Thomas J, Rehfuess E, Noyes J, Gerhardus A, Grimshaw J, Rutter H. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Global Health 2019; 4 (Suppl 1) : e000899.

Rees R, Oliver K, Woodman J, Thomas J. Children's views about obesity, body size, shape and weight. A systematic review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2009.

Rehfuess EA, Booth A, Brereton L, Burns J, Gerhardus A, Mozygemba K, Oortwijn W, Pfadenhauer LM, Tummers M, van der Wilt GJ, Rohwer A. Towards a taxonomy of logic models in systematic reviews and health technology assessments: A priori, staged, and iterative approaches. Research Synthesis Methods 2018; 9 : 13-24.

Robbins SCC, Ward K, Skinner SR. School-based vaccination: a systematic review of process evaluations. Vaccine 2011; 29 : 9588-9599.

Rogers M, Bethel A, Abbott R. Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: a comparison of search strategies. Research Synthesis Methods 2018; 9 : 579-586.

Sandelowski M, Barroso J. Finding the findings in qualitative studies. Journal of Nursing Scholarship 2002; 34 : 213-219.

Selva A, Sola I, Zhang Y, Pardo-Hernandez H, Haynes RB, Martinez Garcia L, Navarro T, Schünemann H, Alonso-Coello P. Development and use of a content search strategy for retrieving studies on patients' views and preferences. Health and Quality of Life Outcomes 2017; 15 : 126.

Shepherd J, Kavanagh J, Picot J, Cooper K, Harden A, Barnett-Page E, Jones J, Clegg A, Hartwell D, Frampton GK, Price A. The effectiveness and cost-effectiveness of behavioural interventions for the prevention of sexually transmitted infections in young people aged 13-19: a systematic review and economic evaluation. Health Technology Assessment 2010; 14 : 1-206, iii-iv.

Squires JE, Valentine JC, Grimshaw JM. Systematic reviews of complex interventions: framing the review question. Journal of Clinical Epidemiology 2013; 66 : 1215-1222.

Stansfield C, Kavanagh J, Rees R, Gomersall A, Thomas J. The selection of search sources influences the findings of a systematic review of people's views: a case study in public health. BMC Medical Research Methodology 2012; 12 : 55.

SURE (Supporting the Use of Research Evidence) Collaboration. SURE Guides for Preparing and Using Evidence-based Policy Briefs: 5 Identifying and Addressing Barriers to Implementing the Policy Options. Version 2.1, updated November 2011.  https://epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/SURE-Guides-v2.1/Collectedfiles/sure_guides.html

Suri H. Purposeful sampling in qualitative research synthesis. Qualitative Research Journal 2011; 11 : 63-75.

Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J. Integrating qualitative research with trials in systematic reviews. BMJ 2004; 328 : 1010-1012.

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology 2008; 8 : 45.

Thomas J, Brunton J, Graziosi S. EPPI-Reviewer 4.0: software for research synthesis [Software]. EPPI-Centre Software. Social Science Research Unit, Institute of Education, University of London UK; 2010. https://eppi.ioe.ac.uk/CMS/Default.aspx?alias=eppi.ioe.ac.uk/cms/er4& .

Thomas J, O'Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Systematic Reviews 2014; 3 : 67.

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Medical Research Methodology 2012; 12 : 181.

van Grootel L, van Wesel F, O'Mara-Eves A, Thomas J, Hox J, Boeije H. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: broadening the matrix approach. Research Synthesis Methods 2017; 8 : 303-311.

Verhage A, Boels D. Critical appraisal of mixed methods research studies in a systematic scoping review on plural policing: assessing the impact of excluding inadequately reported studies by means of a sensitivity analysis. Quality & Quantity 2017; 51 : 1449-1468.

Walker LO, Avant KC. Strategies for theory construction in nursing . Upper Saddle River (NJ): Pearson Prentice Hall; 2005.


Issues of validity and reliability in qualitative research

Volume 18, Issue 2

Helen Noble 1, Joanna Smith 2

1 School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK
2 School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK

Correspondence to Dr Helen Noble, School of Nursing and Midwifery, Queen's University Belfast, Medical Biology Centre, 97 Lisburn Rd, Belfast BT9 7BL, UK; helen.noble@qub.ac.uk

https://doi.org/10.1136/eb-2015-102054


Evaluating the quality of research is essential if findings are to be utilised in practice and incorporated into care delivery. In a previous article we explored ‘bias’ across research designs and outlined strategies to minimise bias. 1 The aim of this article is to further outline rigour, or the integrity in which a study is conducted, and ensure the credibility of findings in relation to qualitative research. Concepts such as reliability, validity and generalisability typically associated with quantitative research and alternative terminology will be compared in relation to their application to qualitative research. In addition, some of the strategies adopted by qualitative researchers to enhance the credibility of their research are outlined.

Are the terms reliability and validity relevant to ensuring credibility in qualitative research?

Although the tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research, there are ongoing debates about whether terms such as validity, reliability and generalisability are appropriate to evaluate qualitative research. 2–4 In the broadest context these terms are applicable, with validity referring to the integrity and application of the methods undertaken and the precision with which the findings accurately reflect the data, while reliability describes consistency within the employed analytical procedures. 4 However, if qualitative methods are inherently different from quantitative methods in terms of philosophical positions and purpose, then alternative frameworks for establishing rigour are appropriate. 3 Lincoln and Guba 5 offer alternative criteria for demonstrating rigour within qualitative research, namely truth value, consistency, neutrality and applicability. Table 1 outlines the differences in terminology and criteria used to evaluate qualitative research.


Table 1 Terminology and criteria used to evaluate the credibility of research findings

What strategies can qualitative researchers adopt to ensure the credibility of the study findings?

Unlike quantitative researchers, who apply statistical methods for establishing validity and reliability of research findings, qualitative researchers aim to design and incorporate methodological strategies to ensure the ‘trustworthiness’ of the findings. Such strategies include:

Accounting for personal biases which may have influenced findings; 6

Acknowledging biases in sampling and ongoing critical reflection of methods to ensure sufficient depth and relevance of data collection and analysis; 3

Meticulous record keeping, demonstrating a clear decision trail and ensuring interpretations of data are consistent and transparent; 3 , 4

Establishing a comparison case/seeking out similarities and differences across accounts to ensure different perspectives are represented; 6 , 7

Including rich and thick verbatim descriptions of participants’ accounts to support findings; 7

Demonstrating clarity in terms of thought processes during data analysis and subsequent interpretations 3 ;

Engaging with other researchers to reduce research bias; 3

Respondent validation: includes inviting participants to comment on the interview transcript and whether the final themes and concepts created adequately reflect the phenomena being investigated; 4

Data triangulation, 3 , 4 whereby different methods and perspectives help produce a more comprehensive set of findings. 8 , 9

Table 2 provides some specific examples of how some of these strategies were utilised to ensure rigour in a study that explored the impact of being a family carer to patients with stage 5 chronic kidney disease managed without dialysis. 10

Table 2 Strategies for enhancing the credibility of qualitative research

In summary, it is imperative that all qualitative researchers incorporate strategies to enhance the credibility of a study during research design and implementation. Although there is no universally accepted terminology and criteria used to evaluate qualitative research, we have briefly outlined some of the strategies that can enhance the credibility of study findings.


Twitter Follow Joanna Smith at @josmith175 and Helen Noble at @helnoble

Competing interests None.


Organizing Your Social Sciences Research Paper

Limitations of the Study

The limitations of a study are those characteristics of design or methodology that influenced the interpretation of the findings from your research. Study limitations constrain the ability to generalize from the results, to describe applications to practice, or to draw on the utility of the findings; they stem from the choices you made in designing the study, from the method used to establish internal and external validity, or from unanticipated challenges that emerged during the study.

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Theofanidis, Dimitrios and Antigoni Fountouki. "Limitations and Delimitations in the Research Process." Perioperative Nursing 7 (September-December 2018): 155-163.

Importance of...

Always acknowledge a study's limitations. It is far better that you identify and acknowledge your study’s limitations than to have them pointed out by your professor and have your grade lowered because you appeared to have ignored them or didn't realize they existed.

Keep in mind that acknowledgment of a study's limitations is an opportunity to make suggestions for further research. If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study.

Acknowledgment of a study's limitations also provides you with opportunities to demonstrate that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only to discover new knowledge but also to confront assumptions and explore what we don't know.

Claiming limitations is a subjective process because you must evaluate the impact of those limitations. Don't just list key weaknesses and the magnitude of a study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, limitation(s) in your study may have impacted the results and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. eventually matter and, if so, to what extent?

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations. However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in the introduction of your paper.

Here are examples of limitations related to methodology and the research process you may need to describe and discuss how they possibly impacted your results. Note that descriptions of limitations should be stated in the past tense because they were discovered after you completed your research.

Possible Methodological Limitations

  • Sample size -- the number of units of analysis you use in your study is dictated by the type of research problem you are investigating. Note that, if your sample size is too small, it will be difficult to find significant relationships in the data, because statistical tests normally require a larger sample size to ensure the sample is representative of the population and of the groups of people to whom the results will be generalized or transferred (see the brief power-calculation sketch after this list). Note that sample size is generally less relevant in qualitative research if it is explained in the context of the research problem.
  • Lack of available and/or reliable data -- a lack of data, or of reliable data, will likely require you to limit the scope of your analysis or the size of your sample, or it can be a significant obstacle to finding a trend or a meaningful relationship. You need to not only describe these limitations but also provide cogent reasons why you believe the data is missing or unreliable. However, don’t just throw up your hands in frustration; use this as an opportunity to describe a need for future research based on designing a different method for gathering data.
  • Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, though, consult with a librarian! In cases when a librarian has confirmed that there is little or no prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design]. Note again that discovering a limitation can serve as an important opportunity to identify new gaps in the literature and to describe the need for further research.
  • Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need for future researchers to revise the specific method for gathering data.
  • Self-reported data -- whether you are relying on pre-existing data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you have to take the accuracy of what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data can contain several potential sources of bias that you should be alert to and note as limitations. These biases become apparent if they are incongruent with data from other sources. These are: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency, but attributing negative events and outcomes to external forces]; and, (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested from other data].
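As a brief quantitative aside on the sample-size point above, the sketch below (Python, using statsmodels) shows how the number of participants needed to detect a statistically significant difference grows as the expected effect shrinks. The effect sizes, alpha and power shown are illustrative assumptions, not recommendations for any particular study.

```python
# Illustrative power calculation for a two-group comparison of means.
# Smaller expected effects require considerably larger samples, which is
# why small samples struggle to detect significant relationships.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.8, 0.5, 0.2):  # Cohen's d: large, medium, small
    n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
    print(f"d = {effect_size}: about {round(n_per_group)} participants per group")
```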

Possible Limitations of the Researcher

  • Access -- if your study depends on having access to people, organizations, data, or documents and, for whatever reason, access is denied or limited in some way, the reasons for this needs to be described. Also, include an explanation why being denied or limited access did not prevent you from following through on your study.
  • Longitudinal effects -- unlike your professor, who can literally devote years [even a lifetime] to studying a single topic, the time available to investigate a research problem and to measure change or stability over time is constrained by the due date of your assignment. Be sure to choose a research problem that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure whether you can complete your research within the confines of the assignment's due date, talk to your professor.
  • Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, event, or thing is viewed or shown in a consistently inaccurate way. Bias is usually negative, though one can have a positive bias as well, especially if that bias reflects your reliance on research that only supports your hypothesis. When proof-reading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the manner in which you have ordered events, people, or places, how you have chosen to represent a person, place, or thing, to name a phenomenon, or to use words with a positive or negative connotation. NOTE: If you detect bias in prior research, it must be acknowledged and you should explain what measures were taken to avoid perpetuating that bias. For example, if a previous study only used boys to examine how music education supports effective math skills, describe how your research expands the study to include girls.
  • Fluency in a language -- if your research focuses, for example, on measuring the perceived value of after-school tutoring among Mexican-American ESL [English as a Second Language] students and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic or to speak with these students in their primary language. This deficiency should be acknowledged.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Senunyeme, Emmanuel K. Business Research Methods. Powerpoint Presentation. Regent University of Science and Technology; ter Riet, Gerben et al. “All That Glitters Isn't Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLOS One 8 (November 2013): 1-6.

Structure and Writing Style

Information about the limitations of your study is generally placed either at the beginning of the discussion section of your paper, so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section.

If you determine that your study is seriously flawed due to important limitations, such as an inability to acquire critical data, consider reframing it as an exploratory study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in a new study.

But, do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic. If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to revise your study.

When discussing the limitations of your research, be sure to:

  • Describe each limitation in detailed but concise terms;
  • Explain why each limitation exists;
  • Provide the reasons why each limitation could not be overcome using the method(s) chosen to acquire or gather the data [cite to other studies that had similar problems when possible];
  • Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and,
  • If appropriate, describe how these limitations could point to the need for further research.

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't interview a group of people that you later wish you had]. If this is the case, don't panic. Acknowledge it, and explain how applying a different or more robust methodology might address the research problem more effectively in a future study. An underlying goal of scholarly research is not only to show what works, but to demonstrate what doesn't work or what needs further clarification.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Ioannidis, John P.A. "Limitations are not Properly Acknowledged in the Scientific Literature." Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh. Writing the Empirical Social Science Research Paper: A Guide for the Perplexed. January 24, 2012. Academia.edu; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com; What Is an Academic Paper? Institute for Writing Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

Writing Tip

Don't Inflate the Importance of Your Findings!

After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings could be perceived by your readers as an attempt to hide its flaws or to encourage a biased interpretation of the results. A small measure of humility goes a long way!

Another Writing Tip

Negative Results are Not a Limitation!

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated. Or, perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may very well be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Lewis, George H. and Jonathan F. Lewis. “The Dog in the Night-Time: Negative Evidence in Social Research.” The British Journal of Sociology 31 (December 1980): 544-558.

Yet Another Writing Tip

Sample Size Limitations in Qualitative Research

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgment about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used.
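One simple way to make the idea of diminishing returns concrete is to track how many previously unseen codes each additional interview contributes; when new interviews stop adding codes, further data collection yields little new information. The sketch below (Python) illustrates this kind of saturation check with hypothetical interview codes; it is one common but contested operationalization of saturation, not a rule.

```python
# Hypothetical saturation check: count how many previously unseen codes each
# successive interview adds. A run of interviews contributing no new codes
# is one (contested) signal that further data collection adds little.
interviews = [
    {"cost", "access", "trust"},        # interview 1
    {"trust", "stigma"},                # interview 2
    {"access", "stigma", "family"},     # interview 3
    {"cost", "trust"},                  # interview 4: nothing new
    {"family", "access"},               # interview 5: nothing new
]

seen = set()
for i, codes in enumerate(interviews, start=1):
    new = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")
```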

Boddy, Clive Roland. "Sample Size for Qualitative Research." Qualitative Market Research: An International Journal 19 (2016): 426-432; Huberman, A. Michael and Matthew B. Miles. "Data Management and Analysis Methods." In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444; Blaikie, Norman. "Confounding Issues Related to Determining Sample Size in Qualitative Research." International Journal of Social Research Methodology 21 (2018): 635-641; Oppong, Steward Harrison. "The Problem of Sampling in qualitative Research." Asian Journal of Management Sciences and Education 2 (2013): 202-210.


Evaluation Research: Definition, Methods and Examples


What is evaluation research?

Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. Evaluation research is the systematic assessment of the worth or merit of the time, money, effort and resources spent in order to achieve a goal.

Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods used in traditional social research, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, political awareness, and other skills that conventional social research demands to a lesser degree. Evaluation research also requires one to keep in mind the interests of the stakeholders.

Evaluation research is a type of applied research, so it is intended to have some real-world effect. Many methods, such as surveys and experiments, can be used to do evaluation research. Evaluation research is a rigorous, systematic process that involves collecting and analyzing data about organizations, processes, projects, services, and/or resources, and reporting the results. It enhances knowledge and decision-making, and leads to practical applications.


Why do evaluation research?

The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as useful if it helps in decision-making. However, evaluation research does not always create an impact that can be applied elsewhere; sometimes it fails to influence short-term decisions. It is equally true that an evaluation may initially seem to have no influence, yet have a delayed impact when the situation becomes more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic use of measurable feedback.

Below are some of the benefits of evaluation research

  • Gain insights about a project or program and its operations

Evaluation research lets you understand what works and what doesn't, where you were, where you are, and where you are headed. You can find areas of improvement and identify strengths, which helps you figure out what you need to focus on and whether there are any threats to your business. You can also find out whether there are hidden, as yet untapped sectors in the market.

  • Improve practice

It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Without two-way communication, there is no way to improve what you offer. Evaluation research gives your employees and customers an opportunity to express how they feel and whether there is anything they would like to change. It also lets you modify or adopt a practice in a way that increases the chances of success.

  • Assess the effects

After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure whether the intended benefits are really reaching the targeted audience and, if so, how effectively.

  • Build capacity

Evaluations help you analyze demand patterns and predict whether you will need more funds, upgraded skills, or more efficient operations. They let you find the gaps in the production-to-delivery chain and possible ways to fill them.

Methods of evaluation research

All market research methods involve collecting and analyzing data, making decisions about the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing the results, which includes the use of data collection techniques and the application of statistical methods.

Some popular evaluation methods are input measurement, output or performance measurement, impact or outcome assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. A few types of evaluation do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analysis. Evaluation research is concerned above all with the information-processing and feedback functions of evaluation.

These methods can be broadly classified as quantitative and qualitative methods.

Quantitative research methods are used to measure anything tangible and produce answers to questions such as:

  • Who was involved?
  • What were the outcomes?
  • What was the price?

The best way to collect quantitative data is through surveys , questionnaires , and polls . You can also create pre-tests and post-tests, review existing documents and databases or gather clinical data.

Surveys are used to gather the opinions, feedback, or ideas of your employees or customers and consist of various question types. They can be conducted face-to-face, by telephone, by mail, or online. Online surveys do not require human intervention and are far more efficient and practical. You can see the survey results on the research tool's dashboard and dig deeper using filter criteria based on factors such as age, gender, and location. You can also apply survey logic such as branching, quotas, chained surveys, and looping to the questions, reducing the time needed both to create and to respond to the survey. You can also generate reports that involve statistical formulae and present data in a form that can be readily absorbed in meetings.


Quantitative data measure the depth and breadth of an initiative, for instance, the number of people who participated in a non-profit event or the number of people who enrolled in a new course at a university. Quantitative data collected before and after a program can show its results and impact.
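As a minimal illustration of a before-and-after comparison, the sketch below averages hypothetical satisfaction scores collected before and after a program; all numbers and variable names are invented for illustration and are not drawn from any real evaluation.

```python
# Minimal sketch: comparing hypothetical pre- and post-program survey scores
# (e.g., satisfaction rated 1-5). All values are invented placeholders.
from statistics import mean

pre_scores = [3, 2, 4, 3, 3, 2, 4]   # responses collected before the program
post_scores = [4, 4, 5, 3, 4, 4, 5]  # responses collected after the program

pre_avg, post_avg = mean(pre_scores), mean(post_scores)
print(f"Average before: {pre_avg:.2f}")
print(f"Average after:  {post_avg:.2f}")
print(f"Change:         {post_avg - pre_avg:+.2f}")

# A positive change suggests, but does not by itself prove, that the program had
# the intended effect; sampling and question framing still matter.
```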

The accuracy of quantitative data used for evaluation research depends on how well the sample represents the population, the ease of analysis, and the consistency of the data. Quantitative methods can fail if the questions are not framed correctly or are not distributed to the right audience. Quantitative data also do not provide an understanding of context and may not be apt for complex issues.


Qualitative research methods are used where quantitative methods cannot solve the research problem, i.e., to measure intangible values. They answer questions such as:

  • What is the value added?
  • How satisfied are you with our service?
  • How likely are you to recommend us to your friends?
  • What will improve your experience?


Qualitative data are collected through observation, interviews, case studies, and focus groups. Creating a qualitative study involves examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions by identifying themes, clustering similar data, and finally reducing the material to the points that make sense.
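As a rough, hypothetical illustration of the clustering step, the sketch below counts how often each theme appears across coded interview excerpts; the themes and excerpts are invented, and a frequency tally is only one small input to qualitative judgment.

```python
# Minimal sketch: tallying coded excerpts by theme. Themes and data are hypothetical.
from collections import Counter

# Each entry lists the theme(s) an analyst assigned to one interview excerpt.
excerpt_themes = [
    ["price", "trust"],
    ["trust"],
    ["ease of use"],
    ["price"],
    ["trust", "ease of use"],
]

theme_counts = Counter(theme for themes in excerpt_themes for theme in themes)

for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} excerpt(s)")

# Frequency alone does not make a theme important, but a tally like this can help
# analysts see which clusters of similar data are worth reducing into key points.
```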

Observations may help explain behaviors as well as the social context that quantitative methods generally do not uncover. Behavior and body language can be observed by watching a participant or by recording audio or video. Structured interviews can be conducted with people individually or in a group under controlled conditions, or participants may be asked open-ended qualitative research questions. Qualitative research methods are also used to understand a person’s perceptions and motivations.


The strength of this method is that group discussion can generate ideas and stimulate memories, with topics cascading as the discussion unfolds. The accuracy of qualitative data depends on how well the contextual data explain complex issues and complement quantitative data. Qualitative data help answer the “why” and the “how” once the “what” has been answered. The limitations of qualitative data for evaluation research are that they are subjective, time-consuming, costly, and difficult to analyze and interpret.


Survey software can be used for both evaluation research methods. You can use the sample questions below and send a survey in minutes using research software. A research tool simplifies the process, from creating a survey and importing contacts to distributing the survey and generating reports that aid the research.

Examples of evaluation research

Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money, but also makes it easier to decide what data to collect, how to analyze it, and how to report it.

Evaluation research questions must be developed and agreed on in the planning stage; however, ready-made research templates can also be used.

Process evaluation research question examples:

  • How often do you use our product in a day?
  • Were approvals taken from all stakeholders?
  • Can you report the issue from the system?
  • Can you submit the feedback from the system?
  • Was each task done as per the standard operating procedure?
  • What were the barriers to the implementation of each task?
  • Were any improvement areas discovered?

Outcome evaluation research question examples:

  • How satisfied are you with our product?
  • Did the program produce intended outcomes?
  • What were the unintended outcomes?
  • Has the program increased the knowledge of participants?
  • Were the participants of the program employable before the course started?
  • Do participants of the program have the skills to find a job after the course ended?
  • Is the knowledge of participants better compared to those who did not participate in the program?


5 Strengths and 5 Limitations of Qualitative Research

Lauren Christiansen

Insight Into Qualitative Research

Anyone who reviews a bunch of numbers knows how impersonal that feels. What do numbers really reveal about a person's beliefs, motives, and thoughts? While it's critical to collect statistical information to identify business trends and inefficiencies, stats don't always tell the full story. Why does the customer like this product more than the other one? What motivates them to post this particular hashtag on social media? How do employees actually feel about the new supply chain process? To answer more personal questions that delve into the human experience, businesses often employ a qualitative research process.

10 Key Strengths and Limitations of Qualitative Research

Qualitative research helps entrepreneurs and established companies understand the many factors that drive consumer behavior. Because most organizations collect and analyze mainly quantitative data, they don't always know exactly how a target market feels and what it wants. It helps when researchers can observe a small sample of consumers in a comfortable environment, ask questions, and let them speak. Research methodology varies with the industry and the type of business need, and many companies employ mixed methods to extract the insights they require to improve decision-making. While both quantitative and qualitative methods are effective, each has limitations. Quantitative research is expensive, time-consuming, and offers a limited understanding of consumer needs; qualitative methods, in turn, generate less verifiable information because qualitative data are based on experience. Businesses should use a combination of both methods to overcome these limitations.

Strengths of Qualitative Research


  • Captures New Beliefs - Qualitative research methods capture evolving beliefs within a market, such as who buys a product or service, or how employees feel about their employers.
  • Fewer Limitations - Qualitative studies are less stringent than quantitative ones. Out-of-the-box answers to questions, opinions, and beliefs are included in data collection and analysis.
  • More Versatile - Qualitative research is often easier for researchers to adapt. They can adjust questions, respond to changing circumstances, or change the environment to optimize results.
  • Greater Speculation - Researchers can speculate more about which answers to drill down into and how to approach them. They can use instinct and subjective experience to identify and extract good data.
  • More Targeted - This research process can target any area of the business or any concern it may have. Researchers can concentrate on specific target markets to collect valuable information. This takes less time and requires fewer resources than quantitative studies.

Limitations of Qualitative Research


  • Sample Sizes - Businesses need to find a big enough group of participants to ensure results are accurate. A sample size of 15 people is not enough to show a reliable picture of how consumers view a product. If it is not possible to find a large enough sample size, the data collected may be insufficient.
  • Bias - For internal qualitative studies, employees may be biased. For example, workers may give a popular answer that colleagues agree with rather than a true opinion. This can negatively influence the outcome of the study.
  • Self-Selection Bias - Businesses that call on volunteers to answer questions worry that the people who respond are not reflective of the greater group. It is better if the company selects individuals at random for research studies, particularly if they are employees. However, this changes the process from qualitative to quantitative methods.
  • Artificial - It isn't typical to observe consumers in stores, gather a focus group together, or ask employees about their experiences at work. This artificiality may impact the findings, as it is outside the norm of regular behavior and interactions.
  • Question Quality - It is hard to know whether researchers' questions are of high quality because they are inherently subjective. Researchers need to ask how and why individuals feel the way they do in order to receive the most accurate answers.

Key Takeaways on Strengths and Limitations of Qualitative Research

  • Qualitative research helps entrepreneurs and small businesses understand what drives human behavior. It is also used to see how employees feel about workflows and tasks.
  • Companies can extract insights from qualitative research to optimize decision-making and improve products or services.
  • Qualitative research captures new beliefs, has fewer limitations, is more versatile, and is more targeted. It also allows researchers to speculate and insert themselves more into the research study.
  • Qualitative research has many limitations which include possible small sample sizes, potential bias in answers, self-selection bias, and potentially poor questions from researchers. It also can be artificial because it isn't typical to observe participants in focus groups, ask them questions at work, or invite them to partake in this type of research method.


How to Write Limitations of the Study (with examples)

This blog emphasizes the importance of recognizing and effectively writing about limitations in research. It discusses the types of limitations, their significance, and provides guidelines for writing about them, highlighting their role in advancing scholarly research.

Updated on August 24, 2023


No matter how well thought out, every research endeavor encounters challenges. There is simply no way to predict all possible variances throughout the process.

These uncharted boundaries and abrupt constraints are known as limitations in research . Identifying and acknowledging limitations is crucial for conducting rigorous studies. Limitations provide context and shed light on gaps in the prevailing inquiry and literature.

This article explores the importance of recognizing limitations and discusses how to write them effectively. By interpreting limitations in research and considering prevalent examples, we aim to reframe the perception from shameful mistakes to respectable revelations.

What are limitations in research?

In the clearest terms, research limitations are the practical or theoretical shortcomings of a study that are often outside of the researcher’s control . While these weaknesses limit the generalizability of a study’s conclusions, they also present a foundation for future research.

Sometimes limitations arise from tangible circumstances like time and funding constraints, or equipment and participant availability. Other times the rationale is more obscure and buried within the research design. Common types of limitations and their ramifications include:

  • Theoretical: limits the scope, depth, or applicability of a study.
  • Methodological: limits the quality, quantity, or diversity of the data.
  • Empirical: limits the representativeness, validity, or reliability of the data.
  • Analytical: limits the accuracy, completeness, or significance of the findings.
  • Ethical: limits the access, consent, or confidentiality of the data.

Regardless of how, when, or why they arise, limitations are a natural part of the research process and should never be ignored . Like all other aspects of a study, they are vital in their own right.

Why is identifying limitations important?

Whether to seek acceptance or avoid struggle, humans often instinctively hide flaws and mistakes. Merging this thought process into research by attempting to hide limitations, however, is a bad idea. It has the potential to negate the validity of outcomes and damage the reputation of scholars.

By identifying and addressing limitations throughout a project, researchers strengthen their arguments and curtail the chance of peer censure based on overlooked mistakes. Pointing out these flaws shows an understanding of variable limits and a scrupulous research process.

Showing awareness of and taking responsibility for a project’s boundaries and challenges validates the integrity and transparency of a researcher. It further demonstrates that the researchers understand the applicable literature and have thoroughly evaluated their chosen research methods.

Presenting limitations also benefits the readers by providing context for research findings. It guides them to interpret the project’s conclusions only within the scope of very specific conditions. By allowing for an appropriate generalization of the findings that is accurately confined by research boundaries and is not too broad, limitations boost a study’s credibility .

Limitations are true assets to the research process. They highlight opportunities for future research. When researchers identify the limitations of their particular approach to a study question, they enable precise transferability and improve chances for reproducibility. 

Simply stating a project’s limitations is not adequate for spurring further research, though. To spark the interest of other researchers, these acknowledgements must come with thorough explanations regarding how the limitations affected the current study and how they can potentially be overcome with amended methods.

How to write limitations

Typically, the information about a study’s limitations is situated either at the beginning of the discussion section to provide context for readers or at the conclusion of the discussion section to acknowledge the need for further research. However, it varies depending upon the target journal or publication guidelines. 

Don’t hide your limitations

It is also important to not bury a limitation in the body of the paper unless it has a unique connection to a topic in that section. If so, it needs to be reiterated with the other limitations or at the conclusion of the discussion section. Wherever it is included in the manuscript, ensure that the limitations section is prominently positioned and clearly introduced.

While maintaining transparency by disclosing limitations means taking a comprehensive approach, it is not necessary to discuss everything that could have potentially gone wrong during the research study. If the introduction made no commitment to investigating an issue, its absence need not be considered a limitation of the research. Consider the term ‘limitations’ as a whole and ask, “Did it significantly change or limit the possible outcomes?” Then, qualify the occurrence either as a limitation to include in the current manuscript or as an idea to note for other projects.

Writing limitations

Once the limitations are concretely identified and it is decided where they will be included in the paper, researchers are ready for the writing task. Including only what is pertinent, keeping explanations detailed but concise, and employing the following guidelines is key for crafting valuable limitations:

1) Identify and describe the limitations : Clearly introduce the limitation by classifying its form and specifying its origin. For example:

  • An unintentional bias encountered during data collection
  • An intentional use of unplanned post-hoc data analysis

2) Explain the implications : Describe how the limitation potentially influences the study’s findings and how the validity and generalizability are subsequently impacted. Provide examples and evidence to support claims of the limitations’ effects without making excuses or exaggerating their impact. Overall, be transparent and objective in presenting the limitations, without undermining the significance of the research. 

3) Provide alternative approaches for future studies : Offer specific suggestions for potential improvements or avenues for further investigation. Demonstrate a proactive approach by encouraging future research that addresses the identified gaps and, therefore, expands the knowledge base.

Whether presenting limitations as an individual section within the manuscript or as a subtopic in the discussion area, authors should use clear headings and straightforward language to facilitate readability. There is no need to complicate limitations with jargon, computations, or complex datasets.

Examples of common limitations

Limitations are generally grouped into two categories , methodology and research process .

Methodology limitations

Methodology may include limitations due to:

  • Sample size
  • Lack of available or reliable data
  • Lack of prior research studies on the topic
  • Measure used to collect the data
  • Self-reported data

Example of a methodology limitation: the researcher addresses how the large sample size requires a reassessment of the measures used to collect and analyze the data.

Research process limitations

Limitations during the research process may arise from:

  • Access to information
  • Longitudinal effects
  • Cultural and other biases
  • Language fluency
  • Time constraints

Example of a research process limitation: the author points out that the model’s estimates are based on potentially biased observational studies.

Final thoughts

Successfully proving theories and touting great achievements are only two very narrow goals of scholarly research. The true passion and greatest efforts of researchers come more in the form of confronting assumptions and exploring the obscure.

In many ways, recognizing and sharing the limitations of a research study both allows for and encourages this type of discovery that continuously pushes research forward. By using limitations to provide a transparent account of the project's boundaries and to contextualize the findings, researchers pave the way for even more robust and impactful research in the future.

Charla Viera, MS



  • Open access
  • Published: 23 April 2024

Non-dispensing pharmacists integrated into general practices as a new interprofessional model: a qualitative evaluation of general practitioners’ experiences and views

  • A.C.M. Hazen   ORCID: orcid.org/0000-0001-8769-1278 1   na1 ,
  • V.M. Sloeserwij   ORCID: orcid.org/0000-0003-3521-8109 1   na1 ,
  • E. de Groot   ORCID: orcid.org/0000-0003-0388-385X 1 ,
  • J.J. de Gier   ORCID: orcid.org/0000-0001-5189-2705 2 ,
  • N.J. de Wit   ORCID: orcid.org/0000-0002-0273-8290 1 ,
  • A.A. de Bont   ORCID: orcid.org/0000-0002-0745-4537 3 &
  • D.L.M. Zwart   ORCID: orcid.org/0000-0003-0098-4882 1  

BMC Health Services Research volume  24 , Article number:  502 ( 2024 ) Cite this article

49 Accesses

Metrics details

A new interprofessional model incorporating non-dispensing pharmacists in general practice teams can improve the quality of pharmaceutical care. However, results of the model are dependent on the context. Understanding when, why and how the model works may increase chances of successful broader implementation in other general practices. Earlier theories suggested that the results of the model are achieved by bringing pharmacotherapeutic knowledge into general practices. This mechanism may not be enough for successful implementation of the model. We wanted to understand better how establishing new interprofessional models in existing healthcare organisations takes place.

An interview study with a realist-informed evaluation was conducted. This qualitative study was part of the Pharmacotherapy Optimisation through Integration of a Non-dispensing pharmacist in primary care Teams (POINT) project. We invited for an interview the general practitioners of the nine general practices who worked, or had worked, closely with a non-dispensing pharmacist. Interview data were analysed through discussions about the coding within the research team, in which themes were developed over time.

We interviewed 2 general practitioners in each general practice (18 interviews in total). In a context where general practitioners acknowledge the need for improvement and are willing to work with a non-dispensing pharmacist as a new team member, the following mechanisms are triggered. Non-dispensing pharmacists add new knowledge to current general practice. Through everyday talk (discursive actions) both general practitioners and non-dispensing pharmacists evolve in what they consider appropriate, legitimate and imaginable in their work situations. They align their professional identities.

Conclusions

Not only is the addition of the non-dispensing pharmacist’s new knowledge to the general practice team crucial for the success of this interprofessional healthcare model, but so is the alignment of the general practitioners’ and non-dispensing pharmacists’ professional identities. This is essentially different from traditional pharmaceutical care models, in which pharmacists and GPs work in separate organisations. To induce the process of identity alignment, general practitioners need to acknowledge the need to improve the quality of pharmaceutical care interprofessionally. By acknowledging the aspect of interprofessionality, both general practitioners and non-dispensing pharmacists will explore and reflect on what they consider appropriate, legitimate and imaginable in carrying out their professional roles.

Trial registration

The POINT project was pre-registered in The Netherlands National Trial Register, with Trial registration number NTR-4389.

Peer Review reports

New models are emerging worldwide to organise and deliver pharmaceutical care. In Canada, Australia, the United Kingdom, Ireland and the Netherlands non-dispensing clinical pharmacists (NDPs) have been integrated in general practice teams, providing pharmaceutical care in close collaboration with the general practitioner (GP) [ 1 , 2 , 3 , 4 , 5 ]. This new interprofessional model appears to improve quality and safety of pharmaceutical care: in practices with fully integrated NDPs, drug therapy problems are adequately addressed and less medication-related hospitalisations occur [ 6 , 7 , 8 ].

It has been recognised that implementing promising interventions in a new context does not automatically improve the quality and safety of pharmaceutical care in the same way, as their success can be highly dependent on the context in which the intervention is introduced [ 9 ]. Understanding the breadth of contextual influences is vital in conducting complex interventions [ 10 ] that require social interaction between professionals: such interventions could work well in one context, but not at all in another [ 11 ]. In addition to answering the question of whether the interprofessional model improves quality of care with quantitative studies, we need to understand how and why this improvement works (so-called working mechanisms) and when (so-called context elements). Understanding how, why and when are concepts of realist evaluation that came about due to challenges in implementing interventions in other contexts [ 9 ]. These understandings could help to better interpret the results found so far and could increase chances of success with broader implementation of the model in other practice settings.

Earlier theories on how and why new interprofessional models in healthcare could improve quality of care show the importance of the addition of new knowledge to existing organisations [ 12 , 13 , 14 ]. This has also been recognised for the introduction of clinical pharmacists in general practice teams [ 15 ]. However, establishing new interprofessional models in existing healthcare organisations is challenging and interprofessional collaboration is not self-evident [ 16 ]. Hence, the addition of new professional knowledge alone may not be enough for successful implementation.

When we introduced the interprofessional model in the Netherlands in the Pharmacotherapy Optimisation through Integration of a Non-dispensing pharmacist in primary care Teams (POINT) project [ 5 ], our initial programme theory was that the addition of new knowledge was key. These general practice pharmacists add specific knowledge about the pharmacotherapeutic treatment of elderly patients with polypharmacy and multimorbidity, who often have complex pharmaceutical care needs [ 17 ]. To make optimum use of this additional knowledge brought into the practices by NDPs, it was considered essential that the NDP had a patient-centred approach in applying pharmaceutical knowledge. NDPs were additionally trained in communication, consultation and clinical reasoning skills [ 17 , 18 ]. In the present study, we challenge and refine our theory on when, why and how the interprofessional model of integrating pharmacists into general practice works, using a realist informed evaluation.

In the Netherlands, general practice is provided by a team, consisting of GPs, practice assistants and practice nurse(s). These general practice teams are increasingly located in multidisciplinary health centres, with other disciplines such as physiotherapists, dietitians, dentists and social services. A community pharmacy is often available on-site. Community pharmacists and GPs work together to ensure the safe use and timely dispensing of medication. They have structural pharmacotherapeutic consultation meetings. The level of collaboration between community pharmacists and GPs varies throughout the country.

Although already implemented in other countries, the interprofessional model with an NDP integrated in general practice teams is a novel approach in the Netherlands (box 1). In the POINT project, of which this study was part, outcomes of the model were measured in ten general practices [ 19 ]. The practices could take part in the POINT study when they were explicitly willing to host an NDP and to cooperate in the development and evaluation of the new role of the NDP. The practices needed to have a consultation room available for the NDP and to provide the NDP access to the GPs’ electronic medical records.

After an induction period of three months, the NDPs worked full time in the practices from June 2014 until May 2015, while concurrently being trained in a 15-month Clinical Pharmacy Training Program based on interprofessional workplace learning, to develop skills in communication and clinical reasoning [ 18 ]. One NDP was unable to finish the training program. Of the remaining nine NDPs, five continued working as an NDP in the general practice after the intervention period.

The training and professional identity development of the NDPs were described earlier [ 18 , 20 ]. Quantitative evaluations demonstrated that implementation of the NDPs in general practice teams resulted in improved quality and safety of pharmaceutical care: we found a lower risk of medication-related hospitalisations amongst elderly patients with polypharmacy, compared to usual care [ 8 ] and that NDPs identified and adequately addressed drug therapy problems [ 21 ].

Methodological approach

To evaluate the interprofessional model with an NDP integrated in general practice, which can be considered a complex intervention, we chose a realist informed evaluation, to explain how and why an intervention works, for whom and under what circumstances [ 22 ]. Thoroughly focussing on the context contributes to a better understanding of the (social) intervention effects. The combination of the intervention and its specific context is then thought to trigger mechanisms, which in turn produce both intended and unintended outcomes.

Elements on context, mechanisms and outcomes were inferred from the interviews. Context-elements were defined as “actors or factors that are external to the intervention, present or occurring even if the intervention does not lead to an outcome, and which may have influence on the outcome” [ 23 ] and mechanism-elements as underlying processes or structures, usually hidden, which operate in particular contexts and generate outcomes [ 24 ]. Combining these elements, we formulated theories on when, why and how the interprofessional model of an NDP integrated in general practice improves pharmaceutical patient care– an outcome based on findings of earlier quantitative analyses in the POINT project [ 8 , 21 ].

Recruitment and data collection

We interviewed GPs from the practices where an NDP had worked during the POINT project. To guarantee information-rich interviews, we used ‘intensity sampling’ (selecting extreme cases to uncover unique understandings) and ‘snowball sampling’ (selecting participants through referrals from existing participants) methods: we invited the nine GPs who supervised the NDPs at the workplace (intensity sampling) and asked them which colleague GP (still) had a close working-relationship with the NDP in daily practice, and/or a distinct opinion on the NDP (snowball sampling) [ 25 ].

Semi-structured interviews were conducted between March and June 2018 by one researcher who had experience with doing interviews (VMS, a PhD student and GP trainee, who alternates between periods of doing research and following general practice training), accompanied by a 6th year medical student (AnH, FW). The researchers had the skills to build rapport and the interviewees were familiar with the POINT project as a whole. Each medical student joined 9 interviews. Before the interviews, senior researchers (EdG, DZ) provided advice to the medical students about carrying out interviews and provided feedback after the first interviews were carried out. Interviews lasted between 30 and 45 min and took place in a private room at the GPs’ practices. All interviews were audio-recorded with a Sony audio recording device that was available through our organisation and used by researchers within our department to carry out interview studies. Some interviews were transcribed verbatim by the medical students and some by an agency with whom we have made arrangements on privacy. Interviews were anonymised by removing personal identifiers and were saved using an encryption key, which was kept secure by one of the lead investigators.

The topic guide used for the interviews consisted of open questions, and was inspired by the realist evaluation framework [ 26 , 27 ]. Within realist evaluation, the aim is to gather insight into how people react to the intervention (understanding how, why and when outcomes come about), rather than evaluating opinions about the intervention. Participants were asked to describe their experiences of the intervention, rather than asked for their opinions about the NDP. The interviewees were participating in the overarching POINT project voluntarily and were well aware that their opinion, negative or positive, was essential for us as researchers. We assumed that power dynamics were not at play in this study as the interviewees were GPs and most of the researchers involved in this study were GPs as well. The topic guide was adjusted after a pilot interview with a GP, to ensure that each topic was properly highlighted (see Online Supplement S1 for the topic guide).

Data analysis

We analysed the interviews by focussing on the context, mechanisms and outcomes. We used a combined deductive and inductive approach: coding was guided by the initial ideas on how the context and the intervention contributed to the outcomes (deductive), followed by a search for new, additional mechanisms within our data set (inductive).

All interview transcripts were coded independently by two researchers (VMS, and a 6th year medical student, AnH or FW), using NVivo version 11.13. In the inductive coding, first, the exact words or phrases used by the interviewees were used as codes. Then, a more interpretative approach was used to capture the essence of the codes. Codes were regularly discussed within the research team (VMS, EdG, DZ, AdB), resulting in refined coding and suggestions for the identification of additional codes and themes. Discrepancies and ambiguities in coding and interpretation were resolved in discussion. In these regular discussion meetings, the senior researcher in qualitative research (EdG) gave feedback. She was not familiar with the interviewees and could take a more distant position during coding and interpretation. With this iterative, cyclical analysis we identified elements on context and mechanisms that, combined with outcomes previously found in our quantitative analyses [ 8 , 21 ], resulted in a refined and deepened programme theory. The SQUIRE checklist was followed while writing the manuscript [ 28 ].
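As a purely illustrative sketch (the POINT analysis itself was carried out in NVivo, not in code), the snippet below shows one way that excerpts coded independently by two researchers could be compared so that discrepancies are flagged for team discussion; the excerpt identifiers and code labels are hypothetical and not drawn from the study.

```python
# Illustrative sketch only: flagging coding discrepancies between two independent
# coders so they can be discussed and resolved. Excerpt IDs and codes are hypothetical.

coder_a = {"excerpt_01": {"new knowledge"}, "excerpt_02": {"trust", "proximity"}}
coder_b = {"excerpt_01": {"new knowledge", "shared care"}, "excerpt_02": {"trust"}}

for excerpt in sorted(set(coder_a) | set(coder_b)):
    codes_a = coder_a.get(excerpt, set())
    codes_b = coder_b.get(excerpt, set())
    if codes_a != codes_b:
        print(f"{excerpt}: discuss -> only coder A: {sorted(codes_a - codes_b)}, "
              f"only coder B: {sorted(codes_b - codes_a)}")

# Disagreements are resolved through discussion within the research team, mirroring
# the iterative refinement of codes and themes described above.
```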

In total, 18 GPs were interviewed, with a mean age of 50 and on average 19 years of work experience (Table  1 ).

We conceptualised our interpretations of the interviews into context, mechanisms and outcome (Table  2 ) which we will discuss in more detail below.

GPs acknowledge the need to improve and are willing to engage

Two context elements were essential for the new interprofessional model. First, whether GPs acknowledged the need to improve the quality of pharmaceutical care both in their practices, and on a personal level. One GP described it as follows:

‘’I felt unsafe and I feared the possibility of being brought into court because of prescribing errors. (…) And now, I feel better, I am much closer to delivering good pharmaceutical care instead of wondering: ‘what don’t I know, what do I know, lots of things are happening [with this patient] and I don’t know what is going on’. Pharmaceutical care is such a complex domain. ’’(Participant 8).

Second, whether GPs were willing to engage in this improvement, whilst general practice is becoming more complex. As one GP said:

“[as a GP] you need to know of a lot. Especially as secondary care is increasingly transferred to primary care, it is all just becoming very specialised, so yes we do need extra knowledge.” (Participant 2).

The need for improvement was further endorsed by the decline in quality of pharmaceutical care that was experienced by the GP after the NDP left their practice. Returning to the traditional model with only the community pharmacist, often instigated by a lack of financial reimbursement for the interprofessional model with the NDP by healthcare insurers, felt like “ it all collapsed in ruins ” (Participant 1).

Aligning professional identities through discursive actions

Conforming with the initial programme theory, the GPs recognised that NDPs brought new knowledge into their clinical practice:

“Through their background, [the NDP] provides depth, they can explain in detail on what the medicine does with the body, or interactions with other medicines, and I think… as a GP your pharmaceutical knowledge is more shallow.” (Participant 3).
“They [the NDP] linked that to their knowledge on medications, to taper medications or to find an alternative, so they of course have the knowledge that I as a GP have not.” (Participant 1).

In addition to this anticipated mechanism (adding new knowledge to the general practice team) we found that over time GPs and NDPs changed their professional identities.

By working as an interprofessional team, the GPs and the NDPs better understood what the other found appropriate (fitting behaviour in a certain context), legitimate (conforming to principles they recognise to be valid) and imaginable (collectively considering a broad range of possibilities and solutions to address issues). An example was that GPs increasingly valued differences in work approaches: NDPs work pro-actively while GPs work mainly reactively. In daily practice, an NDP pro-actively invites patients to their clinic with potential health concerns due to the combination of medication that they are using and their comorbidities. The NDP works together with the patient to prevent medication harm. A GP, on the other hand, reacts to the current presenting complaint of the patient. Another example of the GP and NDP better understanding what the other found appropriate and legitimate was observed by a GP who noticed that the NDP, over time, increasingly pursued patient-centred approaches:

“I think that a pharmacist really has to get used to being located in general practice. General practitioners are kind of strange people, doctors think differently. (…) You know, in the beginning you have to get used to that and then (…) what a real difference is, is whether you see a list with medications or whether you see the patient using them. (…) That is the translation from practice to the medications, and that is the translation a pharmacist needs to complete, mainly in the beginning.” (Participant 7).

The person-centred approach was also reflected in the communication between the GP and NDP:

I notice that we come to the point much faster and quickly identify what the important issues are (about the care for the patient). They (NDP) often preselects what they need to discuss (…). It is also the experience they have gained over time . (Participant 5)

Alignment of identities took place through discursive actions: as the GPs and NDPs talked in the corridors or during short daily meetings about specific patients, their context and pharmacotherapeutic considerations. Alignment took time and was highly supported by the communication and clinical reasoning skills acquired by the NDPs during the training program, as NDPs learned to transition from drug-centred to patient-centred care, facilitating the NDPs’ identity changes. One GP described:

They (NDP) were quite good at communicating as a community pharmacist, but they also learned that in general practice communication has a slightly different allure. (…) That you connect with what moves people and not just fire your questions at someone. An ongoing conversation instead of a (…) barrage of questions. (Participant 6)

Alignment of identities between GPs and NDPs occurred through everyday talk which does not occur with community pharmacists. Discursively aligning what is appropriate, legitimate and imaginable did not occur with pharmacists who work in another organisation. One GP illustrated how he experienced the discourse with the NDP and with the community pharmacist differently:

“ To collaborate with someone with both medical knowledge as well as pharmaceutical background, that results in a nice cooperation. I sometimes visit the community pharmacist but that is different. You still have… then it is often all about logistics, while with [NDP] you notice that they just do much more with patients and, well, they have a much more medical background. ” (Participant 8).
Another GP explained: “Because [name of NDP] is situated in the general practice, I can very easily walk over to their desk or put a memo in their work agenda and vice versa. It’s very easy to make contact with each other.” (Participant 7).

Providing shared care for specific and complex patient problems enabled different opportunities for aligning identities, as both enable GPs and NDPs to recognise, acknowledge and utilise the other’s expertise. For specific care, such as patients with a single drug therapy problem, GPs entrusted patients to the NDP or consulted the NDP for advice. In these situations, the GP easily agreed with the NDP’s recommendations, without much discussion. For example, in the case of a patient with persistent pain who was referred to the NDP by the GP, the GP stated:

“ you know, when [NDP] says that starting pregabalin is the best option now, then I think Oh, good idea, well, let’s do that; as I do trust them, yes. ” (Participant 9).

For complex care– patients with multimorbidity, polypharmacy and multiple pharmaceutical care problems– the GPs recognised the importance of combining both their own and the NDPs expertise. The GPs experienced the need for face-to-face collaborative care meetings with the NDP to optimise the pharmaceutical care of these patients, for example when jointly carrying out clinical medication reviews in elderly patients. One GP described the importance of interprofessional collaboration in discussing such complex clinical medication review outcomes:

“ Sometimes, you [as a GP] think, what else is going on there? And then I wonder… they [the NDP] are trained as a pharmacist, and they looks through certain glasses. And I look through slightly different glasses. I definitely think these glasses are complementary, but I feel that sometimes there is a need for my broader scope, […] to see the wider picture, not focusing solely on the medication. Sometimes it is priority to make sure someone can stay home, or has a good quality of life, rather than to control the blood pressure; there is more to life than a controlled blood pressure.” (Participant 5).

As a consequence of frequent and successful joint care meetings between GPs and NDPs, the following mechanism occurred: GPs started to think and feel differently about sharing (part of) their responsibility with the NDPs, amid the complexity of patients’ care needs. The GPs gradually entrusted parts of the provided care to NDPs, but meanwhile remained convinced that they should be able to provide the NDP-led care themselves– even though the GPs recognised they actually would not be able to, especially given the complexity of care:

“They [NDP] are better at it [providing pharmaceutical care] than I am. But I, as a GP, should be able to do it, too. […] That is how it always has been. Whether it stays like that, I don’t know.” (Participant 9).
“No, I think I cannot do all that what they [NDPs] can. […] I think that I should be able to do clinical medication reviews, but I am not sure whether I would be able to do it that good, no.” (Participant 6).
“So, I wouldn’t be able to do the same [as the NDP], even if I could take the same amount of time for the patient, because I do not have the knowledge.” (Participant 1).

Improved quality of care

The main outcome of the intervention was improved quality of care. A new, interprofessional model of pharmaceutical care, in which GPs and NDPs work closely together, resulted through different mechanisms in both perceived improvements in pharmaceutical care and, as we have shown in our other studies, objectively demonstrated improvements in outcomes [ 8 , 21 ]. The newly formed model is not just another model of providing pharmaceutical care, but one in which interprofessional collaboration between GP and NDP becomes well established, as recognised by this GP:

“ Together, they make a very strong team.” (Participant 1).

Our findings indicate that in a context where GPs acknowledge the need for improvement and are willing to engage in this improvement, working mechanisms are triggered. NDPs add new knowledge to current general practice. Through discursive actions both GPs and NDPs change in what they consider appropriate, legitimate and imaginable in their work situations. They align their professional identities. This contributes to the formation of an interprofessional healthcare model, in which shared care is provided by GPs and NDPs, resulting in improved quality of care.

Especially as the number of elderly patients with chronic conditions rises, GPs generally recognise that general practice requires a more diverse skill mix, and pharmacists’ expertise is suggested to be of additional value here [ 29 , 30 ]. Although all GPs acknowledged there was a need for improvement of the pharmaceutical care provided in their practices, some GPs considered lack of time to provide pharmaceutical care as the main problem. Other GPs acknowledged that a gap in their own knowledge may hinder the provision of better pharmaceutical care. In our model, this element of the context clarifies whether the mechanism “willing to share their responsibility with NDPs” occurred. GPs who acknowledge a personal knowledge gap seem more willing to share their responsibility with NDPs than GPs who only acknowledge a need for improvement at the practice level (related to lack of time). Recognising this difference between the contexts might be key for the implemented intervention to result in success [ 31 ].

When designing the interprofessional healthcare model, we assumed the addition of new knowledge by the NDPs to general practice to be a key working mechanism [ 17 ]– in line with previous theories [ 12 , 13 , 14 ]. A previous UK study, investigating stakeholder experiences of this interprofessional model with NDPs integrated into general practices, also found GPs appreciating the additional knowledge brought by NDPs [ 15 ]. However, additional knowledge is not always optimally utilised, because of existing boundaries between and within health professions that can lead to interprofessional conflicts [ 16 , 32 ]. Ryan and colleagues describe challenges in realising effective interprofessional collaboration [ 15 ]. In their study, a GP compared the “perceived threat to professional boundaries and identity to that observed during the introduction of nurse practitioners, although suggested that this sentiment might be stronger since everything a nurse can do a GP can probably do, whereas anything a pharmacist can do the GP probably can’t ” [ 15 , p. 8]. Perhaps this limited degree of overlap between the two professions (GP and NDP) explains that professional boundaries between GPs and NDPs needed to be redrawn, and (interprofessional) identity work was needed.

A professional identity can be defined in terms of ‘spaces of action’, which are “what professional actors find appropriate, legitimate and imaginable in their work situations, given the existing cultural conditions” [ 33 ]. Spaces of action are not fixed. New spaces of action can be co-constructed and boundaries between interprofessional spaces of action can be redrawn, as spaces of action are the result of everyday work interactions, so-called discursive actions. In these discursive actions, the professional identities of both GPs and NDPs started to change. This process of aligning professional identities took place both explicitly and implicitly: GPs and NDPs became aware of what the other considers appropriate, legitimate and imaginable and started sharing ideas about these considerations, thereby re-drawing and aligning their professional identities.

We found that discursive actions were the means for the identity aligning process to take place; for example: knocking on each other’s door for ad hoc consultations during the day, coffee break meetings, asking the other to shortly pop over during a patient consultation to assess the patients’ pharmacotherapy directly together, or quick questions via digital notes in the patient records system. During these interactions, GPs and NDPs discussed specific patients and their context or pharmacotherapeutic considerations, thereby questioning each other’s routine, asking questions like “why do you do what you do?”. These discursive actions made GPs and NDPs both explicitly and implicitly reconsider what they thought appropriate, legitimate and imaginable in their work situations: the identity aligning process could take place, allowing for effective interprofessional collaboration.

Earlier studies already recognised ‘proximity’ between GPs and pharmacists, and them both working ‘on-site’, as important elements to enable interprofessional collaboration [ 15 , 34 ]. A Canadian study on GPs’ experiences of prescribing pharmacists (both community pharmacists and NDPs, the latter being described as ‘team pharmacists’) reported that “the proximity of team pharmacists allowed physicians to develop trust and mutual respect with pharmacists; however, proximity alone did not facilitate collaboration.…All participants were hesitant to trust pharmacists with whom they were unfamiliar, especially in community settings.” [ 34 , p.92] Besides the need for proximity, our study stressed that GPs and pharmacists need to be familiar with each other, i.e. work collaboratively to learn to speak the same language. Another study, in the United Kingdom, reported that “a strong preference was expressed [by GPs] for the pharmacy team to be located in house all day. In practices where the pharmacy team was located on-site, participants reported easy personal access and the ability to ask informal questions.” [ 15 ] That same study reported that “where the pharmacy team was located off-site, however, they were viewed as a separate entity and aspects of communication were lost.” [ 15 ] We agree with those studies that proximity and working on-site are important, but we think that in this proximity GPs and NDPs not only need to become familiar with each other but need to align their professional identities, which, in our view, incorporates deeper underlying mechanisms than simply getting to know each other: it requires both parties to change. We believe that proximity and working on-site describe the essential conditions that are needed for this alignment process to take place: they allow for discursive actions to occur. GPs in our study reported a difference between collaboration with NDPs and collaboration with community pharmacists, so we hypothesise that despite (frequent) proximity between GPs and community pharmacists and (frequent) mutual relationships, community pharmacists and GPs may not have aligned their professional identities, whereas NDPs and GPs may have.

Over time, in the process of aligning identities, GPs may start to reconsider responsibilities. GPs feel responsible for the integral care provided to patients, including pharmaceutical care. This seems to be a core aspect of the GPs’ professional identity. For GPs, reconsidering responsibilities with other professionals is a delicate balance. On the one hand, the GP wants to be responsible for and in control of patient care, as becomes clear in the following quote of a Canadian GP: “ If they [pharmacists] are going to make clinical decisions about a patient, and they [pharmacists] don’t call me [to get my consent], that’s inappropriate”. [ 34 , p.91] On the other hand, the same GP had fewer problems with an NDP making clinical decisions, as confirmed by the following quote: “[The NDP] did not need to seek approval prior to prescribing whereas community pharmacists should.” [ 34 , p.91] It is possible to adjust the GP’s identity by aligning it with the NDP’s professional identity. Our results indicate that mutual interactions between GPs and NDPs are an essential first step.

Strengths and limitations

While many evaluations of interventions ignore the context in which the intervention is implemented, we aimed to better understand this context by using a realist-informed approach. Taking the context into account provided additional insight into what could help to successfully implement the interprofessional model with an NDP in other general practices.

This study had several limitations. Firstly, for logistical reasons there was a considerable time gap between the end of the intervention period (May 2015) and the interviews (March to June 2018). During this time, in five of the nine practices the NDPs continued working after the intervention period ended, while in the remaining practices the NDPs stopped working, bringing different experiences to the fore. Secondly, there was little opposition to NDPs amongst the interviewees. This might be related to the selection of GPs and practices willing to engage to begin with, which limits the range of contexts reflected in our findings. Lastly, for feasibility reasons we chose to include GPs only. To obtain a broader overview of the context in which the NDPs were integrated, insight into the perspectives of the full general practice team, including practice assistants and nurses, would have added value.

Implications for future pharmaceutical care and future research

The process of professional identity alignment is essential to make the interprofessional healthcare model work. It is important to highlight that this process is difficult, for both GPs and NDPs, and that it takes time. Understanding how the process takes place could help to optimise broader implementation of our interprofessional healthcare model.

Future follow-up research should investigate the sustainability of this interprofessional model and its effect on the quality of pharmacotherapy in general practice. With the introduction of pharmacist prescribers in other countries such as the United Kingdom, this may also be important with regard to the roles and responsibilities of an NDP in general practice. Future research is needed to evaluate how and when this advanced role fits within the interprofessional model with an NDP. Also, the perspectives on the model of other stakeholders, such as policy makers, governmental bodies, professional organisations and healthcare insurers, need further investigation. We suggest that the relationship between financial sustainability and the importance of social factors (e.g. social interaction between professionals) should also be a focus of future research.

The need for additional training of pharmacists to work in general practice was recognised before [ 15 , 35 ], yet we would like to specifically stress the importance of additional interprofessional training to further facilitate the process of identity alignment. Interprofessional training is a collaborative educational approach that involves both GPs and NDPs working together to learn with, from and about each other. This training aims to improve teamwork, communication and the quality of patient care by fostering a deeper understanding of each other’s professional roles and responsibilities. It assists GPs and NDPs in developing the skills needed to work together effectively in a person-centred healthcare environment. We used a workplace-based learning approach to develop these skills through practical experience and on-the-job activities [ 18 ]. In the literature on interprofessional teams, it has been recognised that professional identity formation is a social activity and that professional identities are explored in relation to others [ 36 ]. Interprofessional training, like workplace learning, offers ample opportunities for informal conversations and reflections that will accelerate the process of identity alignment [ 37 , 38 ].

The new interprofessional healthcare model with the NDP integrated into general practice teams works not only through the addition of new knowledge to general practice, but also through NDPs and GPs aligning their professional identities. This is essentially different from traditional pharmaceutical care models, in which pharmacists and GPs work in separate organisations. When broader implementation of the interprofessional model with NDPs in general practice is sought, GPs need to acknowledge that improving the quality of pharmaceutical care requires a focus on interprofessional teamwork. Then, both GPs and NDPs will explore and reflect on what they consider appropriate, legitimate and imaginable in carrying out their professional roles for collaboratively providing the best pharmacotherapy to their patients.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

  • NDP: Non-dispensing pharmacist
  • GP: General practitioner
  • POINT: Pharmacotherapy Optimisation through Integration of a Non-dispensing pharmacist in a primary care Team
  • Realist evaluation

Dolovich L, Pottie K, Kaczorowski J, Farrell B, Austin Z, Rodriguez C, et al. Integrating family medicine and pharmacy to advance primary care therapeutics. Clin Pharmacol Ther. 2008;83(6):913–7.

Tan E, Stewart K, Elliott RA, George J. An exploration of the role of pharmacists within general practice clinics: the protocol for the pharmacists in practice study (PIPS). BMC Health Serv Res. 2012;12:246.

Jameson JP, VanNoord GR. Pharmacotherapy consultation on polypharmacy patients in ambulatory care. Ann Pharmacother. 2001;35:835–40.

Cardwell K, Clyne B, Moriarty F, Wallace E, Fahey T, Boland F, et al. Supporting prescribing in Irish primary care: protocol for a non-randomised pilot study of a general practice pharmacist (GPP) intervention to optimise prescribing in primary care. Pilot Feasibility Stud. 2018;4:122.

Hazen ACM, Sloeserwij VM, Zwart DLM, de Bont AA, Bouvy ML, de Gier JJ, et al. Design of the POINT study: Pharmacotherapy Optimisation through Integration of a non-dispensing pharmacist in a primary care team (POINT). BMC Fam Pract. 2015;16:76.

Tan ECK, Stewart K, Elliott RA, George J. Pharmacist services provided in general practice clinics: a systematic review and a meta-analysis. Res Social Adm Pharm. 2014;10:608–22.

Hazen AC, de Bont AA, Boelman L, Zwart DL, de Gier JJ, de Wit NJ, et al. The degree of integration of non-dispensing pharmacists in primary care practice and the impact on health outcomes: a systematic review. Res Social Adm Pharm. 2018;14:228–40.

Sloeserwij VM, Hazen ACM, Zwart DLM, Leendertse AJ, Poldervaart JM, de Bont AA et al. Effects of non-dispensing pharmacists integrated in general practice on medication-related hospitalisations. Br J Clin Pharmacol. 2019;1–11.

Pawson R, Tilley N. Realistic evaluation. London: SAGE; 1997. p. 235.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.

Wong G, Westhorp G, Manzano A, Greenhalgh J, Jagosh J, Greenhalgh T. RAMESES II reporting standards for realist evaluations. BMC Med. 2016;14(96):1–18.

Williams KY, O’Reilly CA. Demography and diversity in organizations: a review of 40 years of research. Res Organizational Behav. 1998;20:77–140.

Ancona DG, Caldwell DF. Demography and design: predictors of New Product Team performance. Organ Sci. 1992;3(3):321–41.

DeDreu C, West M. Minority dissent and team innovation: the importance of participation in decision-making. J Appl Psychol. 2001;86(6):1191–201.

Ryan K, Patel N, Lau WM, Abu-Elmagd H, Stretch G, Pinney H. Pharmacists in general practice: a qualitative interview case study of stakeholders’ experiences in a West London GP federation. BMC Health Serv Res. 2018;18(1):1–13.

Mitchell RJ, Parker V, Giles M. When do interprofessional teams succeed? Investigating the moderating roles of team and professional identity in interprofessional effectiveness. Hum Relat. 2011;64(10):1321–43.

Hazen ACM, de Bont AA, Leendertse AJ, Zwart DLM, de Wit NJ, de Gier JJ, et al. How clinical integration of pharmacists in General Practice has impact on Medication Therapy Management: a theory-oriented evaluation. Int J Integr Care. 2019;19(1):1–8.

Hazen A, de Groot E, de Gier H, Damoiseaux R, Zwart D, Leendertse A. Design of a 15-month interprofessional workplace learning program to expand the added value of clinical pharmacists in primary care. Currents Pharm Teach Learn. 2018;10:618–26.

Hazen A, Sloeserwij V, Pouls B, Leendertse A, de Gier H, Bouvy M, et al. Clinical pharmacists in Dutch general practice: an integrated care model to provide optimal pharmaceutical care. Int J Clin Pharm. 2021;43(5):1155–62.

Hazen ACM, de Groot E, de Bont AA, de Vocht S, de Gier JJ, Bouvy ML, et al. Learning through Boundary Crossing: professional identity formation of pharmacists transitioning to General Practice in the Netherlands. Acad Med. 2018;93(10):1531–8.

Hazen ACM, Zwart DLM, Poldervaart JM, de Gier JJ, de Wit NJ, de Bont AA et al. Non-dispensing pharmacists’ actions and solutions of drug therapy problems among elderly polypharmacy patients in primary care. Fam Pract. 2019;1–8.

Salter KL, Kothari A. Using realist evaluation to open the black box of knowledge translation: a state-of-the-art review. Implement Sci. 2014;9(115):1–14.

Marchal B, van Belle S, van Olmen J, Hoerée T, Kegels G. Is realist evaluation keeping its promise? A review of published empirical studies in the field of health systems research. Evaluation. 2012;18(2):192–212.

Astbury B, Leeuw FL. Unpacking Black boxes: mechanisms and theory building in evaluation. Am J Evaluation. 2010;31(3):363–81.

Patton M. Purposeful sampling. Qualitative evaluation and research methods. Beverly Hills: SAGE; 1990. pp. 169–86.

Pawson R. The science of evaluation: a realist manifesto. 1st ed. London: SAGE; 2013.

Hewitt G, Sims S, Harris R. Using realist synthesis to understand the mechanisms of interprofessional teamwork in health and social care. J Interprof Care. 2014;28(6):501–6.

Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986–92.

Anderson C, Zhan K, Boyd M, Mann C. The role of pharmacists in general practice: a realist review. Res Social Adm Pharm. 2019;15:338–45.

Löffler C, Koudmani C, Böhmer F, Paschka SD, Höck J, Drewelow E, et al. Perceptions of interprofessional collaboration of general practitioners and community pharmacists - a qualitative study. BMC Health Serv Res. 2017;17:224.

Norman AC, Elg M, Nordin A, Gäre BA, Algurén B. The role of professional logics in quality register use: a realist evaluation. BMC Health Serv Res. 2020;20(1):1–11.

McNeil KA, Mitchell RJ, Parker V. Interprofessional practice and professional identity threat. Health Sociol Rev. 2013;22(3):291–307.

Lokatt E, Holgersson C, Lindgren M, Packendorff J, Hagander L. An interprofessional perspective on healthcare work: Physicians and nurses co-constructing identities and spaces of action. J Manage Organ. 2019;2019:1–17.

Faruquee CF, Khera AS, Guirguis LM. Family physicians’ perceptions of pharmacists prescribing in Alberta. J Interprof Care. 2019;1:10.

Freeman C, Cottrell WN, Kyle G, Williams I, Nissen L. Integrating a pharmacist into the general practice environment: opinions of pharmacist’s, general practitioner’s, health care consumer’s, and practice manager’s. BMC Health Serv Res. 2012;12:229.

Best S, Williams S. Professional identity in interprofessional teams: findings from a scoping review. J Interprof Care. 2019;33(2):170–81.

Pecukonis E. Interprofessional education: a theoretical orientation incorporating profession-centrism and social identity theory. J Law Med Ethics. 2014;42(s2):60–4.

Thomson K. When I say… informal conversations. Med Educ. 2020;54(4):287–8.

Acknowledgements

We thank all 18 GPs participating in this study, and Annemiek Heijne and Fokeline Weerheim for their contributions. We also thank Matthew Grant for proofreading the manuscript.

For the POINT study, a research grant was obtained from the Netherlands Organisation for Health Research and Development (grant agreement number 80-833600-98-10206). Implementation of NDPs during the study was financed by an unconditional grant of the Foundation Achmea Healthcare, a Dutch health insurance company (project number Z456). Both study sponsors had no role in the design of the study, nor in the data collection, analyses, interpretation of the data, in the writing of the report or in the decision to submit the manuscript for publication.

Author information

A.C.M. Hazen and V.M. Sloeserwij contributed equally.

Authors and Affiliations

Department of General Practice, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht (UMCU), Utrecht University, Universiteitsweg 100 3584 CG Utrecht. Postal address STR 6.131, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands

A.C.M. Hazen, V.M. Sloeserwij, E. de Groot, N.J. de Wit & D.L.M. Zwart

Department of Pharmacotherapy, - Epidemiology and – Economics, University of Groningen, Antonius Deusinglaan 1, Building 3214, 9713 AV, Groningen, The Netherlands

J.J. de Gier

Tilburg School of Social and Behavioral Sciences, Warandelaan 2, 5037 AB, Tilburg, The Netherlands

A.A. de Bont

Contributions

All authors (AH, VMS, EdG, AdB, JdG, NdW and DZ) participated in the design of the study. VMS was engaged in the data collection. VMS, AH, EdG, AdB and DLMZ were involved in the data analyses. VMS and AH drafted the paper. All authors (AH, VMS, EdG, AdB, JdG, NdW and DZ) contributed to the interpretation of the findings and critical review of the paper.

Corresponding author

Correspondence to A.C.M. Hazen .

Ethics declarations

Ethics approval and consent to participate

All the procedures were followed in accordance with the Declaration of Helsinki. All interviewees provided written informed consent for participation in the present sub-study of the POINT project. The POINT project was exempted from formal medical-ethical approval by the Medical Ethical Committee of University Medical Centre Utrecht (METC protocol number 13-432 C).

Consent for publication

Not applicable, since no identifying information/images of the participants are present in the manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Hazen, A., Sloeserwij, V., de Groot, E. et al. Non-dispensing pharmacists integrated into general practices as a new interprofessional model: a qualitative evaluation of general practitioners’ experiences and views. BMC Health Serv Res 24 , 502 (2024). https://doi.org/10.1186/s12913-024-10703-y

Received : 31 December 2023

Accepted : 09 February 2024

Published : 23 April 2024

DOI : https://doi.org/10.1186/s12913-024-10703-y

  • Interprofessional model
  • Primary care
  • General practice
  • Interview study
  • Quality improvement

Qualitative Methods in Health Care Research

Vishnu Renjith

School of Nursing and Midwifery, Royal College of Surgeons Ireland - Bahrain (RCSI Bahrain), Al Sayh Muharraq Governorate, Bahrain

Renjulal Yesodharan

1 Department of Mental Health Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India

Judith A. Noronha

2 Department of OBG Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India

Elissa Ladd

3 School of Nursing, MGH Institute of Health Professions, Boston, USA

Anice George

4 Department of Child Health Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India

Healthcare research is a systematic inquiry intended to generate robust evidence about important issues in the fields of medicine and healthcare. Qualitative research has ample possibilities within the arena of healthcare research. This article aims to inform healthcare professionals regarding qualitative research, its significance, and applicability in the field of healthcare. A wide variety of phenomena that cannot be explained using the quantitative approach can be explored and conveyed using a qualitative method. The major types of qualitative research designs are narrative research, phenomenological research, grounded theory research, ethnographic research, historical research, and case study research. The greatest strength of the qualitative research approach lies in the richness and depth of the healthcare exploration and description it makes. In health research, these methods are considered as the most humanistic and person-centered way of discovering and uncovering thoughts and actions of human beings.

Introduction

Healthcare research is a systematic inquiry intended to generate trustworthy evidence about issues in the field of medicine and healthcare. The three principal approaches to health research are the quantitative, the qualitative, and the mixed methods approach. The quantitative research method uses data, which are measures of values and counts and are often described using statistical methods which in turn aids the researcher to draw inferences. Qualitative research incorporates the recording, interpreting, and analyzing of non-numeric data with an attempt to uncover the deeper meanings of human experiences and behaviors. Mixed methods research, the third methodological approach, involves collection and analysis of both qualitative and quantitative information with an objective to solve different but related questions, or at times the same questions.[ 1 , 2 ]

In healthcare, qualitative research is widely used to understand patterns of health behaviors, describe lived experiences, develop behavioral theories, explore healthcare needs, and design interventions.[ 1 , 2 , 3 ] Because of its ample applications in healthcare, there has been a tremendous increase in the number of health research studies undertaken using qualitative methodology.[ 4 , 5 ] This article discusses qualitative research methods, their significance, and applicability in the arena of healthcare.

Qualitative Research

Diverse academic and non-academic disciplines utilize qualitative research as a method of inquiry to understand human behavior and experiences.[ 6 , 7 ] According to Munhall, “Qualitative research involves broadly stated questions about human experiences and realities, studied through sustained contact with the individual in their natural environments and producing rich, descriptive data that will help us to understand those individual's experiences.”[ 8 ]

Significance of Qualitative Research

The qualitative method of inquiry examines the 'how' and 'why' of decision making, rather than the 'when,' 'what,' and 'where.'[ 7 ] Unlike quantitative methods, the objective of qualitative inquiry is to explore, narrate, and explain the phenomena and make sense of the complex reality. Health interventions, explanatory health models, and medical-social theories could be developed as an outcome of qualitative research.[ 9 ] Understanding the richness and complexity of human behavior is the crux of qualitative research.

Differences between Quantitative and Qualitative Research

The quantitative and qualitative forms of inquiry vary based on their underlying objectives. They are in no way opposed to each other; instead, these two methods are like two sides of a coin. The critical differences between quantitative and qualitative research are summarized in Table 1 .[ 1 , 10 , 11 ]

Differences between quantitative and qualitative research

Qualitative Research Questions and Purpose Statements

Qualitative questions are exploratory and are open-ended. A well-formulated study question forms the basis for developing a protocol, guides the selection of design, and data collection methods. Qualitative research questions generally involve two parts, a central question and related subquestions. The central question is directed towards the primary phenomenon under study, whereas the subquestions explore the subareas of focus. It is advised not to have more than five to seven subquestions. A commonly used framework for designing a qualitative research question is the 'PCO framework' wherein, P stands for the population under study, C stands for the context of exploration, and O stands for the outcome/s of interest.[ 12 ] The PCO framework guides researchers in crafting a focused study question.

Example: In the question, “What are the experiences of mothers on parenting children with Thalassemia?”, the population is “mothers of children with Thalassemia,” the context is “parenting children with Thalassemia,” and the outcome of interest is “experiences.”

The purpose statement specifies the broad focus of the study, identifies the approach, and provides direction for the overall goal of the study. The major components of a purpose statement include the central phenomenon under investigation, the study design and the population of interest. Qualitative research does not require a-priori hypothesis.[ 13 , 14 , 15 ]

Example: Borimnejad et al . undertook a qualitative research on the lived experiences of women suffering from vitiligo. The purpose of this study was, “to explore lived experiences of women suffering from vitiligo using a hermeneutic phenomenological approach.” [ 16 ]

Review of the Literature

In quantitative research, the researchers do an extensive review of scientific literature prior to the commencement of the study. However, in qualitative research, only a minimal literature search is conducted at the beginning of the study. This is to ensure that the researcher is not influenced by the existing understanding of the phenomenon under the study. The minimal literature review will help the researchers to avoid the conceptual pollution of the phenomenon being studied. Nonetheless, an extensive review of the literature is conducted after data collection and analysis.[ 15 ]

Reflexivity

Reflexivity refers to critical self-appraisal about one's own biases, values, preferences, and preconceptions about the phenomenon under investigation. Maintaining a reflexive diary/journal is a widely recognized way to foster reflexivity. According to Creswell, “Reflexivity increases the credibility of the study by enhancing more neutral interpretations.”[ 7 ]

Types of Qualitative Research Designs

The qualitative research approach encompasses a wide array of research designs. The words such as types, traditions, designs, strategies of inquiry, varieties, and methods are used interchangeably. The major types of qualitative research designs are narrative research, phenomenological research, grounded theory research, ethnographic research, historical research, and case study research.[ 1 , 7 , 10 ]

Narrative research

Narrative research focuses on exploring the life of an individual and is ideally suited to tell the stories of individual experiences.[ 17 ] The purpose of narrative research is to utilize 'story telling' as a method in communicating an individual's experience to a larger audience.[ 18 ] The roots of narrative inquiry extend to humanities including anthropology, literature, psychology, education, history, and sociology. Narrative research encompasses the study of individual experiences and learning the significance of those experiences. The data collection procedures include mainly interviews, field notes, letters, photographs, diaries, and documents collected from one or more individuals. Data analysis involves the analysis of the stories or experiences through “re-storying of stories” and developing themes usually in chronological order of events. Rolls and Payne argued that narrative research is a valuable approach in health care research, to gain deeper insight into patient's experiences.[ 19 ]

Example: Karlsson et al . undertook a narrative inquiry to “explore how people with Alzheimer's disease present their life story.” Data were collected from nine participants. They were asked to describe about their life experiences from childhood to adulthood, then to current life and their views about the future life. [ 20 ]

Phenomenological research

Phenomenology is a philosophical tradition developed by the German philosopher Edmund Husserl. His student Martin Heidegger further developed this methodology. Phenomenology examines the 'essence' of individuals' experiences regarding a certain phenomenon.[ 1 ] The methodology has its origin in philosophy, psychology, and education. The purpose of phenomenological research is to understand people's everyday life experiences and reduce them to the central meaning or the 'essence of the experience'.[ 21 , 22 ] The unit of analysis of phenomenology is the individuals who have had similar experiences of the phenomenon. Interviews with individuals are mainly considered for the data collection, though documents and observations are also useful. Data analysis includes identification of significant meaning elements, textural description (what was experienced), structural description (how it was experienced), and description of the 'essence' of the experience.[ 1 , 7 , 21 ] The phenomenological approach is further divided into descriptive and interpretive phenomenology. Descriptive phenomenology focuses on understanding the essence of experiences and is best suited to situations that need to describe the lived phenomenon. Hermeneutic or interpretive phenomenology moves beyond description to uncover meanings that are not explicitly evident. The researcher tries to interpret the phenomenon, based on their judgment rather than just describing it.[ 7 , 21 , 22 , 23 , 24 ]

Example: A phenomenological study conducted by Cornelio et al . aimed at describing the lived experiences of mothers in parenting children with leukemia. Data from ten mothers were collected using in-depth semi-structured interviews and were analyzed using Husserl's method of phenomenology. Themes such as “pivotal moment in life”, “the experience of being with a seriously ill child”, “having to keep distance with the relatives”, “overcoming the financial and social commitments”, “responding to challenges”, “experience of faith as being key to survival”, “health concerns of the present and future”, and “optimism” were derived. The researchers reported the essence of the study as “chronic illness such as leukemia in children results in a negative impact on the child and on the mother.” [ 25 ]

Grounded theory research

Grounded theory has its base in sociology and was propagated by two sociologists, Barney Glaser and Anselm Strauss.[ 26 ] The primary purpose of grounded theory is to discover or generate theory in the context of the social process being studied. The major difference between grounded theory and other approaches lies in its emphasis on theory generation and development. The name grounded theory comes from its ability to induce a theory grounded in the reality of study participants.[ 7 , 27 ] Data collection in grounded theory research involves recording interviews from many individuals until data saturation. Constant comparative analysis, theoretical sampling, theoretical coding, and theoretical saturation are unique features of grounded theory research.[ 26 , 27 , 28 ] Data analysis includes analyzing data through 'open coding,' 'axial coding,' and 'selective coding.'[ 1 , 7 ] Open coding is the first level of abstraction, and it refers to the creation of a broad initial range of categories; axial coding is the procedure of understanding connections between the open codes, whereas selective coding relates to the process of connecting the axial codes to formulate a theory.[ 1 , 7 ] Results of the grounded theory analysis are supplemented with a visual representation of major constructs, usually in the form of flow charts or framework diagrams. Quotations from the participants are used in a supportive capacity to substantiate the findings. Strauss and Corbin highlight that "the value of the grounded theory lies not only in its ability to generate a theory but also to ground that theory in the data."[ 27 ]

Example: Williams et al . conducted a grounded theory research to explore the nature of the relationship between the sense of self and eating disorders. Data were collected from 11 women with a lifetime history of Anorexia Nervosa and were analyzed using the grounded theory methodology. Analysis led to the development of a theoretical framework on the nature of the relationship between the self and Anorexia Nervosa. [ 29 ]

Ethnographic research

Ethnography has its base in anthropology, where anthropologists used it for understanding culture-specific knowledge and behaviors. In health sciences research, ethnography focuses on narrating and interpreting the health behaviors of a culture-sharing group. 'Culture-sharing group' in an ethnography represents any 'group of people who share common meanings, customs or experiences.' In health research, it could be a group of physicians working in rural care, a group of medical students, or a group of patients who receive home-based rehabilitation. To understand the cultural patterns, researchers primarily observe the individuals or groups of individuals for a prolonged period of time.[ 1 , 7 , 30 ] The scope of ethnography can be broad or narrow depending on the aim. The study of more general cultural groups is termed macro-ethnography, whereas micro-ethnography focuses on more narrowly defined cultures. Ethnography is usually conducted in a single setting. Ethnographers collect data using a variety of methods such as observation, interviews, audio-video records, and document reviews. A written report includes a detailed description of the culture-sharing group with emic and etic perspectives. When the researcher reports the views of the participants, it is called an emic perspective; when the researcher reports his or her own views about the culture, it is called an etic perspective.[ 7 ]

Example: The aim of the ethnographic study by LeBaron et al . was to explore the barriers to opioid availability and cancer pain management in India. The researchers collected data from fifty-nine participants using in-depth semi-structured interviews, participant observation, and document review. The researchers identified significant barriers by open coding and thematic analysis of the formal interview. [ 31 ]

Historical research

Historical research is the "systematic collection, critical evaluation, and interpretation of historical evidence".[ 1 ] The purpose of historical research is to gain insights from the past and involves interpreting past events in the light of the present. The data for historical research are usually collected from primary and secondary sources. Primary sources mainly include diaries, first-hand information, and original writings. Secondary sources are textbooks, newspapers, second- or third-hand accounts of historical events, and medical/legal documents. The data gathered from these various sources are synthesized and reported as biographical narratives or developmental perspectives in chronological order. The ideas are interpreted in terms of the historical context and significance. The written report describes 'what happened', 'how it happened', 'why it happened', and its significance and implications for current clinical practice.[ 1 , 10 ]

Example: Lubold (2019) analyzed the breastfeeding trends in three countries (Sweden, Ireland, and the United States) using a historical qualitative method. Through analysis of historical data, the researcher found that strong family policies, adherence to international recommendations and adoption of baby-friendly hospital initiative could greatly enhance the breastfeeding rates. [ 32 ]

Case study research

Case study research focuses on the description and in-depth analysis of the case(s) or issues illustrated by the case(s). The design has its origin from psychology, law, and medicine. Case studies are best suited for the understanding of case(s), thus reducing the unit of analysis into studying an event, a program, an activity or an illness. Observations, one to one interviews, artifacts, and documents are used for collecting the data, and the analysis is done through the description of the case. From this, themes and cross-case themes are derived. A written case study report includes a detailed description of one or more cases.[ 7 , 10 ]

Example: Perceptions of poststroke sexuality in a woman of childbearing age were explored using a qualitative case study approach by Beal and Millenbrunch. A semi-structured interview was conducted with a 36-year-old mother of two children with a history of acute ischemic stroke. The data were analyzed using an inductive approach. The authors concluded that "stroke during childbearing years may affect a woman's perception of herself as a sexual being and her ability to carry out gender roles". [ 33 ]

Sampling in Qualitative Research

Qualitative researchers widely use non-probability sampling techniques such as purposive sampling, convenience sampling, quota sampling, snowball sampling, homogeneous sampling, maximum variation sampling, extreme (deviant) case sampling, typical case sampling, and intensity sampling. The selection of a sampling technique depends on the nature and needs of the study.[ 34 , 35 , 36 , 37 , 38 , 39 , 40 ] The four widely used sampling techniques are convenience sampling, purposive sampling, snowball sampling, and intensity sampling.

Convenience sampling

It is otherwise called accidental sampling, where the researchers collect data from subjects who are selected based on accessibility, geographical proximity, ease, speed, and/or low cost.[ 34 ] Convenience sampling offers the significant benefit of convenience but is often accompanied by issues of sample representativeness.

Purposive sampling

Purposive or purposeful sampling is a widely used sampling technique.[ 35 ] It involves identifying a population based on already established sampling criteria and then selecting subjects who fulfill those criteria, in order to increase credibility. However, choosing information-rich cases is the key to determining the power and logic of purposive sampling in a qualitative study.[ 1 ]

Snowball sampling

The method is also known as 'chain referral sampling' or 'network sampling.' The sampling starts with a few initial participants, and the researcher relies on these early participants to identify additional study participants. It is best adopted when the researcher wishes to study a stigmatized group, or in cases where finding participants by ordinary means is likely to be difficult. Respondent-driven sampling is an improvised version of snowball sampling used to recruit participants from a hard-to-find or hard-to-study population.[ 37 , 38 ]
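Purely as an illustration (this example is not part of the original article), the chain-referral logic behind snowball sampling can be sketched in a few lines of Python; the participant IDs, referral network, and target sample size below are hypothetical.

```python
from collections import deque

def snowball_sample(seeds, referrals, target_size):
    """Recruit participants by chain referral until the target size is reached.

    seeds       -- initial participants known to the researcher (hypothetical IDs)
    referrals   -- dict mapping each participant to the peers they can refer
    target_size -- number of participants sought
    """
    recruited, queue, seen = [], deque(seeds), set(seeds)
    while queue and len(recruited) < target_size:
        person = queue.popleft()
        recruited.append(person)
        for peer in referrals.get(person, []):  # follow each referral chain
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return recruited

# Hypothetical referral network
referrals = {"P01": ["P02", "P03"], "P02": ["P04"], "P03": ["P05", "P06"]}
print(snowball_sample(["P01"], referrals, target_size=5))
```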

Intensity sampling

The process of identifying information-rich cases that manifest the phenomenon of interest is referred to as intensity sampling. It requires prior information, and considerable judgment about the phenomenon of interest and the researcher should do some preliminary investigations to determine the nature of the variation. Intensity sampling will be done once the researcher identifies the variation across the cases (extreme, average and intense) and picks the intense cases from them.[ 40 ]

Deciding the Sample Size

A-priori sample size calculation is not undertaken in the case of qualitative research. Researchers collect the data from as many participants as possible until they reach the point of data saturation. Data saturation, or the point of redundancy, is the stage where the researcher no longer sees or hears any new information. Data saturation gives the idea that the researcher has captured all possible information about the phenomenon of interest. Since no further information is being uncovered once redundancy is achieved, at this point the data collection can be stopped. The objective here is to obtain a comprehensive picture of the phenomenon under study rather than to generalize.[ 1 , 7 , 41 ]
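One rough, optional aid when judging saturation is to track how many previously unseen codes each new interview contributes and to stop once successive interviews add none; the following sketch (with invented interview data, not taken from any study) illustrates the idea.

```python
# Codes identified in each successive interview (hypothetical data)
interviews = [
    {"stigma", "cost", "family support"},
    {"cost", "travel time", "stigma"},
    {"family support", "fear of relapse"},
    {"cost", "stigma"},
    {"travel time", "family support"},
]

seen = set()
for i, codes in enumerate(interviews, start=1):
    new_codes = codes - seen          # codes not heard in earlier interviews
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s)")
    if not new_codes:
        print("No new information - saturation may have been reached.")
```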

Data Collection in Qualitative Research

The various strategies used for data collection in qualitative research includes in-depth interviews (individual or group), focus group discussions (FGDs), participant observation, narrative life history, document analysis, audio materials, videos or video footage, text analysis, and simple observation. Among all these, the three popular methods are the FGDs, one to one in-depth interviews and the participant observation.

FGDs are useful in eliciting data from a group of individuals. They are normally built around a specific topic and are considered the best approach to gather data on the entire range of responses to a topic.[ 42 ] Group size in an FGD ranges from 6 to 12. Depending upon the nature of participants, FGDs could be homogeneous or heterogeneous.[ 1 , 14 ] One-to-one in-depth interviews are best suited to obtain individuals' life histories, lived experiences, perceptions, and views, particularly while exploring topics of a sensitive nature. In-depth interviews can be structured, unstructured, or semi-structured. However, semi-structured interviews are widely used in qualitative research. Participant observations are suitable for gathering data regarding naturally occurring behaviors.[ 1 ]

Data Analysis in Qualitative Research

Various strategies are employed by researchers to analyze data in qualitative research. Data analytic strategies differ according to the type of inquiry. A general content analysis approach is described herewith. Data analysis begins by transcription of the interview data. The researcher carefully reads data and gets a sense of the whole. Once the researcher is familiarized with the data, the researcher strives to identify small meaning units called the 'codes.' The codes are then grouped based on their shared concepts to form the primary categories. Based on the relationship between the primary categories, they are then clustered into secondary categories. The next step involves the identification of themes and interpretation to make meaning out of data. In the results section of the manuscript, the researcher describes the key findings/themes that emerged. The themes can be supported by participants' quotes. The analytical framework used should be explained in sufficient detail, and the analytic framework must be well referenced. The study findings are usually represented in a schematic form for better conceptualization.[ 1 , 7 ] Even though the overall analytical process remains the same across different qualitative designs, each design such as phenomenology, ethnography, and grounded theory has design specific analytical procedures, the details of which are out of the scope of this article.
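To make the codes-to-categories-to-themes hierarchy concrete, a minimal and purely illustrative representation is shown below (the codes, categories, and themes are invented); in practice this grouping reflects the researcher's interpretation rather than any automated procedure.

```python
# Hypothetical coding frame: small meaning units ("codes") grouped into
# primary categories, which are in turn clustered under a theme.
coding_frame = {
    "Living with uncertainty": {                                     # theme
        "Emotional burden": ["fear of relapse", "constant worry"],   # category: codes
        "Coping strategies": ["faith", "family support"],
    },
    "Navigating the health system": {
        "Access barriers": ["cost of care", "travel time"],
        "Relationship with staff": ["trust in nurses", "feeling dismissed"],
    },
}

# Print the frame as it might appear in a codebook appendix
for theme, categories in coding_frame.items():
    print(theme)
    for category, codes in categories.items():
        print(f"  {category}: {', '.join(codes)}")
```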

Computer-Assisted Qualitative Data Analysis Software (CAQDAS)

Until recently, qualitative analysis was done either manually or with the help of a spreadsheet application. Currently, various software programs are available which aid researchers in managing qualitative data. CAQDAS packages are essentially data management tools and cannot analyze qualitative data by themselves, as they lack the ability to think, reflect, and conceptualize. Nonetheless, CAQDAS helps researchers to manage, shape, and make sense of unstructured information. Open Code, MAXQDA, NVivo, Atlas.ti, and HyperRESEARCH are some of the widely used qualitative data analysis software packages.[ 14 , 43 ]

Reporting Guidelines

Consolidated Criteria for Reporting Qualitative Research (COREQ) is the widely used reporting guideline for qualitative research. This 32-item checklist assists researchers in reporting all the major aspects related to the study. The three major domains of COREQ are the 'research team and reflexivity', 'study design', and 'analysis and findings'.[ 44 , 45 ]

Critical Appraisal of Qualitative Research

Various scales are available for the critical appraisal of qualitative research. A widely used one is the Critical Appraisal Skills Programme (CASP) Qualitative Checklist developed by the CASP network, UK. This 10-item checklist evaluates the quality of the study in areas such as aims, methodology, research design, ethical considerations, data collection, data analysis, and findings.[ 46 ]

Ethical Issues in Qualitative Research

A qualitative study must be undertaken by grounding it in the principles of bioethics such as beneficence, non-maleficence, autonomy, and justice. Protecting the participants is of utmost importance, and the greatest care has to be taken while collecting data from a vulnerable research population. The researcher must respect individuals, families, and communities and must make sure that the participants are not identifiable by their quotations that the researchers include when publishing the data. Consent for audio/video recordings must be obtained. Approval to be in FGDs must be obtained from the participants. Researchers must ensure the confidentiality and anonymity of the transcripts/audio-video records/photographs/other data collected as a part of the study. The researchers must confirm their role as advocates and proceed in the best interest of all participants.[ 42 , 47 , 48 ]

Rigor in Qualitative Research

The demonstration of rigor or quality in the conduct of the study is essential for every research method. However, the criteria used to evaluate the rigor of quantitative studies may not be appropriate for qualitative methods. Lincoln and Guba (1985) first outlined the criteria for evaluating qualitative research, often referred to as the "standards of trustworthiness of qualitative research".[ 49 ] The four components of these criteria are credibility, transferability, dependability, and confirmability.

Credibility refers to confidence in the 'truth value' of the data and its interpretation. It is used to establish that the findings are true, credible and believable. Credibility is similar to internal validity in quantitative research.[ 1 , 50 , 51 ] The second criterion to establish the trustworthiness of qualitative research is transferability. Transferability refers to the degree to which the qualitative results are applicable to other settings, populations or contexts. This is analogous to external validity in quantitative research.[ 1 , 50 , 51 ] Lincoln and Guba recommend that authors provide enough detail so that users will be able to evaluate the applicability of the data to other contexts.[ 49 ] The criterion of dependability refers to the assumption of repeatability or replicability of the study findings and is similar to reliability in quantitative research. The dependability question is: 'Would the study findings be repeated if the study were replicated with the same (or similar) cohort of participants, data coders, and context?'[ 1 , 50 , 51 ] Confirmability, the fourth criterion, is analogous to the objectivity of the study and refers to the degree to which the study findings could be confirmed or corroborated by others. To ensure confirmability, the data should directly reflect the participants' experiences and not the biases, motivations, or imaginations of the inquirer.[ 1 , 50 , 51 ] Qualitative researchers should ensure that the study is conducted with enough rigor and should report the measures undertaken to enhance the trustworthiness of the study.

Conclusions

Qualitative research studies are being widely acknowledged and recognized in health care practice. This overview illustrates various qualitative methods and shows how these methods can be used to generate evidence that informs clinical practice. Qualitative research helps to understand the patterns of health behaviors, describe illness experiences, design health interventions, and develop healthcare theories. The ultimate strength of the qualitative research approach lies in the richness of the data and the descriptions and depth of exploration it makes. Hence, qualitative methods are considered as the most humanistic and person-centered way of discovering and uncovering thoughts and actions of human beings.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

  • Open access
  • Published: 22 April 2024

The design and evaluation of gamified online role-play as a telehealth training strategy in dental education: an explanatory sequential mixed-methods study

  • Chayanid Teerawongpairoj 1 ,
  • Chanita Tantipoj 1 &
  • Kawin Sipiyaruk 2  

Scientific Reports volume  14 , Article number:  9216 ( 2024 ) Cite this article

153 Accesses

Metrics details

  • Health care
  • Health services
  • Public health

To evaluate user perceptions and the educational impact of gamified online role-play in teledentistry, as well as to construct a conceptual framework highlighting how to design this interactive learning strategy, this research employed an explanatory sequential mixed-methods design. Participants were requested to complete self-perceived assessments of confidence and awareness in teledentistry before and after participating in a gamified online role-play. They were also asked to complete a satisfaction questionnaire and participate in an in-depth interview to investigate their learning experience. The data were analyzed using descriptive statistics, paired sample t-test, one-way analysis of variance, and framework analysis. There were 18 participants who completed the self-perceived assessments and satisfaction questionnaire, 12 of whom participated in a semi-structured interview. There were statistically significant increases in self-perceived confidence and awareness after participating in the gamified online role-play ( P  < 0.001). In addition, the participants were likely to be satisfied with this learning strategy, where usefulness was perceived as the most positive aspect with a score of 4.44 out of 5, followed by ease of use (4.40) and enjoyment (4.03). The conceptual framework constructed from the qualitative findings revealed five key elements in designing a gamified online role-play: learner profile, learning settings, pedagogical components, interactive functions, and educational impact. The gamified online role-play has demonstrated its potential to improve self-perceived confidence and awareness in teledentistry. The conceptual framework developed in this research could be considered when designing and implementing a gamified online role-play in dental education. This research provides valuable evidence on the educational impact of gamified online role-play in teledentistry and how it could be designed and implemented in dental education. This information would be supportive for dental instructors or educators who are considering implementing teledentistry training in their practice.

Similar content being viewed by others

Games in dental education: playing to learn or learning to play?

A bibliometric analysis of the use of the Gamification Octalysis Framework in training: evidence from Web of Science

Helping general dental practitioners use the Index of Orthodontic Treatment Need: an assessment of available educational apps

Introduction

Telehealth has gained significant attention from various organization due to its potential to improve healthcare quality and accessibility 1 . It can be supportive in several aspects in healthcare, including medical and nursing services, to enhance continuous monitoring and follow-up 2 . Its adoption has increased substantially during the COVID-19 pandemic, aiming to provide convenient healthcare services 3 . Even though the COVID-19 outbreak has passed, many patients still perceive telehealth as an effective tool in reducing a number of visits and enhancing access to health care services 4 , 5 . This supports the use of telehealth in the post-COVID-19 era.

Teledentistry, a form of telehealth specific to dentistry, has been employed to improve access to dental services 6 . This system offers benefits ranging from online history taking, oral diagnosis and treatment monitoring to interdisciplinary communication among dental professionals, enabling comprehensive and holistic treatment planning for patients 7 . Teledentistry can also reduce travel time and costs associated with dental appointments 8 , 9 , 10 . There is evidence that teledentistry serves as a valuable tool to enhance access to dental care for patients 11 . Additionally, in the context of long-term management of patients, telehealth has contributed to patient-centered care by enhancing their surrounding environments 12 . Therefore, teledentistry should be emphasized as one form of digital dentistry that can enhance treatment quality.

Despite the benefits of teledentistry, available evidence demonstrates challenges and concerns in the implementation of telehealth. A lack of awareness and knowledge in the use of telehealth can hinder its adoption 13 . Legal issues and privacy concerns also emerge as significant challenges in telehealth use 14 . Moreover, online communication skills and technology literacy, including competency in using technological tools and applications, have been frequently reported as challenges in teledentistry 15 , 16 . Concerns regarding limitations stemming from the lack of physical examination are also significant 17 . These challenges and complexities may impact the accuracy of diagnosis and the security and confidentiality of patient information. Therefore, telehealth training for dental professionals emerges as an essential prerequisite for effectively navigating the use of teledentistry, fostering confidence and competence in remote oral healthcare delivery.

The feasibility and practicality of telehealth in dental education present ongoing challenges and concerns. Given the limitations of teledentistry compared to face-to-face appointments, areas of training should encompass the telehealth system, online communication, technical issues, confidentiality concerns, and legal compliance 18 . However, there is currently no educational strategy that effectively demonstrates the importance and application of teledentistry 19 . Role-play can be considered a teaching strategy in which learners play a role that closely resembles real-life scenarios. Well-organized storytelling allows learners to manage problematic situations, leading to the development of problem-solving skills 20 , 21 . When compared to traditional lecture-based learning, learners can also enhance their communication skills through conversations with simulated patients 22 , 23 . In addition, they can express their thoughts and emotions during a role-play through experiential learning 20 , 24 , 25 . Role-play through video teleconference can therefore be considered a distance learning tool for training dental professionals to effectively use teledentistry.

While there have been studies supporting online role-play as an effective learning tool owing to its flexibility, engagement, and anonymity 26 , 27 , no evidence has yet been reported on whether this learning strategy has potential for teledentistry training. Given the complicated issues in telehealth, role-play for training teledentistry should incorporate different learning aspects compared to face-to-face communication with patients. In addition, game components have proved to be supportive in dental education 28 , 29 . Consequently, this research aimed to evaluate user perceptions and the educational impact of gamified online role-play to enhance learner competence and awareness in using teledentistry, as well as to construct a conceptual framework highlighting how to design and implement this interactive learning strategy. This research would introduce and promote the design and implementation of gamified online role-play as a learning tool for training teledentistry. To achieve the aim, specific objectives were established as follows:

1. To design a gamified online role-play for teledentistry training.

2. To investigate learner perceptions regarding their confidence and awareness in the use of teledentistry after completing the gamified online role-play.

3. To explore user satisfaction with the use of the gamified online role-play.

4. To develop a conceptual framework for designing and implementing a gamified online role-play for teledentistry training.

Materials and methods

Research design

This research employed an explanatory sequential mixed-methods design, in which a quantitative phase was performed first, followed by a qualitative phase 30 , 31 . The quantitative phase was conducted as pre-experimental research using a one-group pretest–posttest design. Participants were requested to complete self-perceived assessments of confidence and awareness in the use of teledentistry before and after participating in a gamified online role-play. They were also asked to complete a satisfaction questionnaire on the use of the gamified online role-play for teledentistry training. The qualitative phase was conducted afterwards to explore in-depth information through semi-structured interviews, in order to enhance understanding of the quantitative findings and to develop a conceptual framework for designing and implementing an online role-play for teledentistry training.

A gamified online role-play for training teledentistry

A gamified online role-play was designed and developed by the author team. To ensure its educational relevance, the expected learning outcomes were formulated from a survey of experienced instructors from the Department of Advanced General Dentistry, Faculty of Dentistry, Mahidol University. These learning outcomes covered online communication skills, technical issues, technology literacy of patients, limitations of physical examination, and privacy concerns regarding personal information. A learning scenario and instructional content were subsequently designed to support learners in achieving the expected learning outcomes, with their alignment validated by three experts in dental education. A professional actress was trained to role-play a patient with a dental problem requesting a virtual consultation (teledentistry). Before data collection, the simulated patient underwent training and adjustment with a pilot group under the supervision of two experts in advanced general dentistry and dental education who had experience with teledentistry, to ensure realism and completeness of the learning content.

According to the role-play scenario, the actress was assigned to portray a 34-year-old female with chief complaints of pain around both ears, accompanied by difficulties in chewing food due to tooth loss. She was instructed to express anxiety and nervousness about addressing these issues. Additionally, it was specified that she could not take a day off from work during this period. Despite this constraint, she required a dental consultation to receive advice for initial self-care, as her symptoms significantly impacted her daily life. Furthermore, she was scripted to encounter difficulties in using the teledentistry platform.

Game components were implemented into the online role-play to enhance motivation and engagement. As challenge and randomness are recognized game elements 32 , 33 , five challenge cards were designed and embedded into the online role-play, and each participant was asked to randomly select one of them before interacting with the simulated patient. The challenging situations were potential technical concerns that frequently occur during video conferencing, including network problems (e.g., internet disconnection and poor connection) and audiovisual quality issues. Participants were blinded to the selected card, which was revealed only to the simulated patient. The challenging conditions were mimicked by the organizers and the simulated patient, allowing learners to deal with the difficulties. Therefore, both challenge and randomness were implemented into this learning intervention not only to create learning situations but also to enhance engagement.

A feedback system was carefully considered and implemented into the gamified online role-play. Immediate feedback is a key feature of interactive learning environments 29 . Formative feedback was delivered to learners instantly through verbal and non-verbal communication, including words (content), tone of voice, facial expressions, and gestures of the simulated patient. This type of feedback allowed participants to reflect on whether their inputs were appropriate, enabling them to learn from their mistakes, sometimes described as the role of failure 34 . Summative feedback was also provided at the end of the role-play through a reflection from the simulated patient and suggestions from an instructor.

Learners interacted with the simulated patient in an online meeting room on Cisco Webex. According to the research setting (Fig.  1 ), a learner participated in the role-play activity using a laptop computer in a soundproof room, while the simulated patient was arranged in a prepared location showing her residential environment. The researcher and instructor also joined the online meeting room and observed the interaction between the simulated patient and the learner during the role-play activity to verify whether all necessary information was accurately obtained. The role-play activity took around 30 minutes.

Figure 1. A diagram demonstrating the setting of the gamified online role-play.

Research participants

Quantitative phase

The participants in this research were postgraduate students from the Residency Training Program in Advanced General Dentistry at Mahidol University Faculty of Dentistry in the academic year 2022, recruited through volunteer sampling. This program was selected because its objective is to develop graduates capable of integrating competencies from various dental disciplines to provide comprehensive dental care for both general patients and those with special needs; teledentistry should therefore be a supportive component of their service. The recruitment procedure involved posting a recruitment message in the residents' group chat. Those interested in participating were invited to contact the researchers directly for more information and could then decide whether they wished to participate, ensuring that participation was voluntary. Although this non-probability sampling technique carries a risk of non-response bias 35 , it was considered appropriate for this study, as participants were willing to contribute to the learning activity, supporting accurate and reliable findings with no dropout 36 .

Inclusion and exclusion criteria were established to determine the eligibility of prospective participants. This study included postgraduate students from Years 1 to 3 of the Residency Training Program in Advanced General Dentistry at Mahidol University Faculty of Dentistry, enrolled during the academic year 2022. They were also required to have completed at least the first semester to ensure familiarity with comprehensive dental care. They were excluded if they had been involved in the pilot testing of the gamified online role-play or were not fluent in the Thai language. The sample size was determined using a formula for two dependent samples (comparing means) 37 . To detect a difference in self-perceived confidence and awareness between the pre- and post-assessments at a power of 90% and a statistical significance level of 1%, five participants were required. Allowing for an assumed dropout rate of 20%, the number of residents per year (Years 1–3) was set at six, giving a total of 18 residents for this research.
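For readers who wish to verify this calculation, the following is a minimal sketch of the paired-means sample size formula with the stated design parameters (two-sided alpha = 0.01, power = 90%). The mean difference (delta) and standard deviation of the differences (sigma_d) shown are illustrative assumptions only, since the paper does not report the exact values used; the hypothetical pairing below reproduces the requirement of five participants.

```python
# Sketch of the sample size formula for two dependent samples (comparing means),
# with the design parameters stated above: two-sided alpha = 0.01, power = 0.90.
# delta (mean difference to detect) and sigma_d (SD of the differences) are
# illustrative assumptions; the paper does not report the values it used.
from math import ceil
from scipy.stats import norm

def paired_sample_size(delta: float, sigma_d: float,
                       alpha: float = 0.01, power: float = 0.90) -> int:
    """n = (z_{1-alpha/2} + z_{1-beta})^2 * (sigma_d / delta)^2, rounded up."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~2.576 for alpha = 0.01 (two-sided)
    z_beta = norm.ppf(power)            # ~1.282 for 90% power
    return ceil(((z_alpha + z_beta) ** 2) * (sigma_d / delta) ** 2)

# Hypothetical effect: a 1.0-point mean change with SD of differences of 0.55
n = paired_sample_size(delta=1.0, sigma_d=0.55)
print(n)  # 5; the study then applied a 20% dropout allowance and enrolled 6 residents per year
```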

Qualitative phase

Participants from the quantitative phase were selected for semi-structured interviews using purposive sampling. This sampling method involved the selection of information-rich participants based on criteria relevant to the research objective, to ensure a diverse representation of perspectives and experiences within the sample group 38 . In this research, the information considered for purposive sampling included demographic data (e.g., sex and year of study) along with self-perceived assessment scores. By incorporating perceptions from a variety of participants, a broad spectrum of insights from different experiences in comprehensive dental practice and diverse levels of improvement in self-perceived confidence and awareness could inform the design and implementation of the training program. The sample size for this phase was determined by data saturation, with interviews continuing until no new information or themes emerged. This approach ensured thorough exploration of the research topic and maximized the richness of the qualitative data obtained.

Outcome assessments

To evaluate the gamified online role-play, a triangulation approach was employed, enabling the researchers to compare outcomes from different assessment methods. In this research, self-perceived assessments of confidence and awareness in teledentistry, satisfaction with the gamified online role-play, and learner experience were assessed to assure the quality and feasibility of the gamified online role-play.

Self-perceived confidence and awareness toward teledentistry

All participants were requested to rate their perceptions of teledentistry before and after participating in the gamified online role-play (Supplementary material 1 ). The self-perceived assessment was developed based on previous literature 39 , 40 , 41 , 42 . The assessment scores indicated whether participants improved their self-perceived confidence and awareness through the learning activity. The assessment consisted of two parts: (1) self-perceived confidence and (2) self-perceived awareness. Each part contained six items, which were identical between the pre- and post-assessments. All items used a 5-point Likert scale, where 1 indicated 'strongly disagree' and 5 indicated 'strongly agree'.

Satisfaction with the gamified online role-play

All participants were asked to complete the satisfaction questionnaire after participating in the gamified online role-play, to investigate whether they felt satisfied with their learning (Supplementary material 2 ). The questionnaire was developed based on previous literature on gamification and role-play 41 , 42 , 43 , 44 . Most items used a 5-point Likert scale, where 1 indicated 'very dissatisfied' and 5 indicated 'very satisfied'. The items were grouped into three aspects: (1) perceived usefulness, (2) perceived ease of use, and (3) perceived enjoyment.

Learner experiences within the gamified online role-play

Semi-structured interviews were conducted with the purposively selected participants to gather in-depth information about their learning experiences within the gamified online role-play. This technique allowed the researchers to probe additional topics of interest raised in participants' responses. A topic guide for the interviews was constructed based on the findings of previous literature 45 , 46 , 47 . Each interview was conducted in a private room by a researcher trained in qualitative research methods, including interviewing. The interview sessions took approximately 45–60 minutes, and all responses were recorded with participants' permission using a digital audio recorder. The recordings were transcribed verbatim by a transcription service under a confidentiality agreement.

Validity and reliability of data collection tools

To enhance the quality of the self-perceived assessment and satisfaction questionnaire, both were piloted and revised to assure their validity and reliability. For content validity, three experts in advanced general dentistry evaluated the questionnaires, and problematic items were iteratively revised until each achieved an index of item-objective congruence (IOC) above 0.5. To assess test–retest reliability, the validated versions of both the self-perceived assessment and the satisfaction questionnaire were piloted with residents from other programs, and the data were analyzed using the intraclass correlation coefficient (ICC); all items achieved values of 0.7 or greater. The data from the first pilot completion of both data collection tools were analyzed using Cronbach's alpha to ensure the internal consistency of all constructs. Problematic items were deleted until a coefficient alpha of 0.7 or greater was achieved for all constructs, which was considered acceptable internal consistency.
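As an illustration of the internal-consistency check described above, the sketch below computes Cronbach's alpha for one construct from a small matrix of pilot Likert responses (rows are respondents, columns are items). The response values are invented for demonstration; the study applied the same alpha >= 0.7 criterion to its own pilot data, with content validity (IOC) and test–retest reliability (ICC) assessed separately.

```python
# Cronbach's alpha for one construct (threshold used in this study: alpha >= 0.7).
# Rows are pilot respondents, columns are the six Likert items of a construct;
# the response values below are invented for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

pilot_responses = np.array([
    [4, 5, 4, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [2, 3, 2, 3, 2, 2],
    [4, 4, 4, 5, 4, 4],
])
print(round(cronbach_alpha(pilot_responses), 2))  # ~0.96 for these toy data
```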

Data analysis

The quantitative data retrieved from the self-perceived assessment and satisfaction questionnaire were analyzed with the Statistical Package for the Social Sciences (SPSS, version 29, IBM Corp.). Descriptive statistics were computed to present an overview of the data. The scores from the pre- and post-assessments were compared using a paired-sample t-test to evaluate whether participants' self-perceived confidence and awareness in teledentistry improved after participating in the gamified online role-play. One-way analysis of variance (ANOVA) was conducted to determine whether there were statistically significant differences in self-perceived assessment and satisfaction scores among the three academic years.
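The same inferential tests can be expressed outside SPSS; the sketch below shows the paired-sample t-test and one-way ANOVA described above using SciPy. The arrays are randomly generated placeholders standing in for the 18 participants' mean scores and their grouping by academic year, not the study's actual data.

```python
# The two tests named above, expressed with SciPy instead of SPSS. The arrays
# are randomly generated placeholders for the 18 participants' mean scores and
# their grouping by academic year (6 per year); they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(3.4, 0.7, 18)          # hypothetical pre-assessment means
post = pre + rng.normal(0.8, 0.4, 18)   # hypothetical post-assessment means

# Paired-sample t-test: did self-perceived scores change after the role-play?
t_stat, p_paired = stats.ttest_rel(post, pre)

# One-way ANOVA: do post-assessment scores differ among the three academic years?
year1, year2, year3 = post[:6], post[6:12], post[12:]
f_stat, p_anova = stats.f_oneway(year1, year2, year3)

print(f"paired t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```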

The qualitative data retrieved from the semi-structured interviews were analyzed using framework analysis, following the stages of transcription, familiarization with the interview data, coding, developing an analytical framework, indexing, charting, and interpreting the qualitative findings 48 . In this research, the initial codes were pre-defined from previous literature and subsequently adjusted after the analysis of each transcript to develop an analytical framework (themes and subthemes), requiring several iterations until no additional codes emerged. The established categories and codes were then applied consistently across all transcripts (indexing). The data from each transcript were charted into a matrix, facilitating the management and summarization of the qualitative findings. This method enabled the researchers to compare and contrast differences within the data and to identify connections between categories, thereby exploring their relationships and informing data interpretation.
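To illustrate the charting step of framework analysis, the sketch below assembles indexed excerpts into a participant-by-theme matrix using pandas. This is not the authors' NVivo workflow; the participant identifiers and summaries are invented, while the theme and subtheme labels follow those reported in this study.

```python
# Illustration of the charting step only (not the authors' NVivo workflow):
# indexed excerpts are summarized into a participant-by-theme matrix so that
# cases can be compared and connections between categories identified.
# Participant IDs and summaries are invented for illustration.
import pandas as pd

indexed_excerpts = [
    {"participant": "P03", "theme": "Interactive functions",
     "subtheme": "Authenticity", "summary": "realistic acting kept the conversation flowing"},
    {"participant": "P03", "theme": "Pedagogical components",
     "subtheme": "Feedback", "summary": "adjusted questions after the patient's facial cues"},
    {"participant": "P11", "theme": "Interactive functions",
     "subtheme": "Entertaining features", "summary": "challenge card added excitement"},
    {"participant": "P11", "theme": "Educational impact",
     "subtheme": "Communication skills", "summary": "felt more confident speaking online"},
]

chart = (pd.DataFrame(indexed_excerpts)
           .pivot_table(index="participant", columns="theme",
                        values="summary", aggfunc=lambda s: "; ".join(s)))
print(chart.fillna("-"))
```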

The procedure of framework analysis necessitated a transparent process for data management and interpretation of emerging themes to ensure the robustness of the research 49 . The transparency of this analytic approach enabled two researchers (C.Te. and K.S.) to analyze the qualitative data independently; the emerging themes were afterwards discussed until consensus was reached among the researchers. This technique can be considered a triangulation approach to assure the intercoder reliability and internal validity of this research. The transparent process also allowed an external expert in dental education to verify the accuracy of the analysis. All emerging themes and the decision on data saturation were based on discussion among all researchers until agreement was reached. NVivo (version 14, QSR International) was used to perform the qualitative data analysis. Subsequently, a conceptual framework was constructed to demonstrate the emerging themes and subthemes together with their relationships.

Ethical consideration

Ethical approval for the study was granted by the Institutional Review Board of the Faculty of Dentistry and Faculty of Pharmacy, Mahidol University, on 29 September 2022 (approval number MU-DT/PY-IRB 2022/049.2909). All methods were performed in accordance with the relevant guidelines and regulations. Although the data were not anonymous in nature, as they contained identifiable information, they were coded prior to analysis to assure the confidentiality of participants.

Informed consent

Informed consent was obtained from all participants.

Results

There were 18 residents from Years 1 to 3 of the Residency Training Program in Advanced General Dentistry who participated in this research (six from each year): 14 females and 4 males. There was no participant dropout, as all participants completed the required tasks, including the pre- and post-assessments, the gamified online role-play, and the satisfaction questionnaire. For the purposive sampling, participants from the quantitative phase were selected for the semi-structured interviews by considering sex, year of study, and self-perceived assessment scores. Twelve students (ten females and two males) participated in the semi-structured interviews; their characteristics are presented in Table 1 .

Internal consistency of all constructs

The data collected from the research participants, in addition to the pilot samples, were analyzed with Cronbach's alpha to confirm internal consistency. The coefficient alpha of all constructs demonstrated high internal consistency, as shown in Table 2 .

Self-perceived assessments toward confidence and awareness of teledentistry

There were statistically significant increases in the assessment scores of self-perceived confidence and awareness after participating in the gamified online role-play ( P  < 0.001). According to Table 3 , there was an increase in self-perceived confidence from 3.38 (SD = 0.68) for the pre-assessment to 4.22 (SD = 0.59) for the post-assessment ( P  < 0.001). The findings of self-perceived awareness also showed score improvement from 4.16 (SD = 0.48) to 4.55 (SD = 0.38) after interacting with the simulated patient ( P  < 0.001).

According to Fig.  2 , participants demonstrated a higher level of self-perceived assessments for both self-confidence and awareness in all aspects after participating in the gamified online role-play for teledentistry training.

Figure 2. Self-perceived assessments toward confidence and awareness of teledentistry.

When comparing the self-perceived assessment scores toward confidence and awareness in the use of teledentistry among the three years of study (Years 1–3), there were no statistically significant differences in the pre-assessment scores, post-assessment scores, or score differences (Table 4 ).

Satisfaction with the use of the gamified online role-play

According to Fig.  3 , participants exhibited high levels of satisfaction with the gamified online role-play across all three aspects. The aspect of usefulness received the highest satisfaction rating with a score of 4.44 (SD = 0.23) out of 5, followed by ease of use and enjoyment, scoring 4.40 (SD = 0.23) and 4.03 (SD = 0.21), respectively. In particular, participants expressed the highest satisfaction regarding the usefulness of the gamified online role-play for identifying their role (Mean = 4.72, SD = 0.46) and for developing problem-solving skills associated with teledentistry (Mean = 4.61, SD = 0.50). Additionally, they reported satisfaction with the learning sequence presented in the gamified online role-play (Mean = 4.61, SD = 0.50). However, participants did not strongly perceive that the format of the gamified online role-play could engage them with the learning task for an extended period (Mean = 3.72, SD = 0.83).

Figure 3. Satisfaction with the use of the gamified online role-play.

When comparing the satisfaction levels perceived by participants from different academic years (Table 5 ), no statistically significant differences were observed among the three groups for any of the three aspects ( P  > 0.05).

Following the framework analysis of the qualitative data, five themes emerged: (1) learner profile, (2) learning settings of the gamified online role-play, (3) pedagogical components, (4) interactive functions, and (5) educational impact.

Theme 1: Learner profile

Learner experience and preferences appeared to influence how participants perceived the use of the gamified online role-play for teledentistry training. When learners preferred role-play or recognized the benefits of teledentistry, they were likely to support this learning intervention. Such prior exposure also helped them form an overall picture of the assigned tasks before participating in this research.

“I had experience with a role-play activity when I was a dental undergraduate, and I like this kind of learning where someone role-plays a patient with specific personalities in various contexts. This could be a reason why I felt interested to participate in this task (the gamified online role-play). I also believed that it would be supportive for my clinical practice.” Participant 12, Year 1, Female

“Actually, I have seen several videos (about teledentistry), where dentists were teaching patients to perform self-examinations, such as checking their own mouth and taking pictures for consultations. Therefore, I could have thought about what I would experience during the activity (within the gamified online role-play).” Participant 8, Year 2, Female

Theme 2: Learning settings of the gamified online role-play

Subtheme 2.1: Location

Participants agreed that the location for conducting a gamified online role-play should be a private room without any disturbances, enabling learners to focus on the simulated patient. This would allow them to communicate effectively and understand the needs of the patient, leading to a better grasp of the lesson content. In addition, the environments of both the learners and the simulated patient should be authentic to support the quality of learning.

“The room should be a private space without any disturbances. This will make us feel confident and engage in conversations with the simulated patient.” Participant 10, Year 1, Female

“… simulating a realistic environment can engage me to interact with the simulated patient more effectively ...” Participant 8, Year 2, Female

Subtheme 2.2: Time allocated for the gamified online role-play

The time allocated for the gamified online role-play in this research was considered appropriate, as participants believed that a 30-minute period should be sufficient to gather information and then give advice to the patient. In addition, a 10-minute discussion of how they interacted with the patient could help participants enhance their competencies in the use of teledentistry.

“… it would probably take about 20 minutes because we would need to gather a lot of information … it might need some time to request and gather various information … maybe another 10-15 minutes to provide some advice.” Participant 7, Year 1, Female

“I think during the class … we could allocate around 30 minutes for role-play, … we may have discussion of learner performance for 10-15 minutes ... I think it should not be longer than 45 minutes in total.” Participant 6, Year 2, Female

Subtheme 2.3: Learning sequence within a postgraduate curriculum

Most participants suggested that the gamified online role-play in teledentistry should be arranged in the first year of their postgraduate program. This could maximize the effectiveness of the online role-play, as they would be able to implement teledentistry in their clinical practice from the beginning of their training. However, some participants suggested that this learning approach could instead be arranged in the second or third year of the program; as learners would already have experience in clinical practice, the gamified online role-play would then reinforce their competence in teledentistry.

"Actually, it would be great if this session could be scheduled in the first year … I would feel more comfortable when dealing with my patients through an online platform." Participant 11, Year 2, Male "I believe this approach should be implemented in the first year because it allows students to be trained in teledentistry before being exposed to real patients. However, if this approach is implemented in either the second or third year when they have already had experience in patient care, they would be able to better learn from conversations with simulated patients." Participant 4, Year 3, Male

Theme 3: Pedagogical components

Subtheme 3.1: Learning content

Learning content appeared to be an important pedagogical component, as it determined what participants should learn from the gamified online role-play. Based on the interview data, participants reported that they learned how to use a video teleconference platform for teledentistry. The conditions of the simulated patient embedded in the online role-play also allowed them to recognize the advantages of teledentistry. In addition, the dental problems assigned to the simulated patient revealed the limitations of teledentistry to participants.

“The learning tasks (within the gamified online role-play) let me know how to manage patients through the teleconference.” Participant 5, Year 2, Female

“… there seemed to be limitations (of teledentistry) … there could be a risk of misdiagnosis … the poor quality of video may lead to diagnostic errors … it is difficult for patients to capture their oral lesions.” Participant 3, Year 2, Female

Subtheme 3.2: Feedback

During the online role-play, the simulated patient provided formative feedback to participants through facial expressions and tone of voice, enabling them to observe and learn to adjust their inquiries more accurately. In addition, at the completion of the gamified online role-play, summative feedback provided by the instructor summarized participants' performance, leading to further improvements in the implementation of teledentistry.

“I knew (whether or not I interacted correctly) from the gestures and emotions of the simulated patient between the conversation. I could have learnt from feedback provided during the role-play, especially from the facial expressions of the patient.” Participant 11, Year 2, Male

“The feedback provided at the end let me know how well I performed within the learning tasks.” Participant 2, Year 1, Female

Theme 4: Interactive functions

Subtheme 4.1: The authenticity of the simulated patient

Most participants believed that a simulated patient with high acting performance could enhance the flow of the role-play, allowing learners to experience realistic consequences. An appropriate level of authenticity could engage learners with the learning activity, as they would be less aware of time passing while in a state of flow. Therefore, they could learn better from the gamified online role-play.

"It was so realistic. ... This allowed me to talk with the simulated patient naturally ... At first, when we were talking, I was not sure how I should perform … but afterwards I no longer had any doubts and felt like I wanted to explain things to her even more." Participant 3, Year 2, Female "At first, I believed that if there was a factor that could influence learning, it would probably be a simulated patient. I was impressed by how this simulated patient could perform very well. It made the conversation flow smoothly and gradually." Participant 9, Year 3, Female

Subtheme 4.2: Entertaining features

Participants were likely to be satisfied with the entertaining features embedded in the gamified online role-play. They felt excited when exposed to the unrevealed challenge that they had randomly selected. In addition, participants suggested offering more learning scenarios or simulated patients to select at random, to enhance randomness and excitement.

“It was a playful experience while communicating with the simulated patient. There are elements of surprise from the challenge cards that make the conversation more engaging, and I did not feel bored during the role-play.” Participant 4, Year 3, Male

“I like the challenge card we randomly selected, as we had no idea what we would encounter … more scenarios like eight choices and we can randomly choose to be more excited. I think we do not need additional challenge cards, as some of them have already been embedded in patient conditions.” Participant 5, Year 2, Female

Subtheme 4.3: Level of difficulty

Participants suggested that the gamified online role-play should offer various levels of difficulty, so that learners could select a level suitable for their competence. The difficulty could be represented through patient conditions (e.g., systemic diseases or socioeconomic status), personal health literacy, and emotional tendencies.

“The patient had hidden their information, and I needed to bring them out from the conversation.” Participant 12, Year 1, Female

“Patients' emotions could be more sensitive to increase level of challenges. This can provide us with more opportunities to enhance our management skills in handling patient emotions.” Participant 11, Year 2, Male

“… we can gradually increase the difficult level, similar to playing a game. These challenges could be related to the simulated patient, such as limited knowledge or difficulties in communication, which is likely to occur in our profession.” Participant 6, Year 2, Female

Theme 5: Educational impact

Subtheme 5.1: Self-perceived confidence in teledentistry

Communication skills

Participants were likely to perceive that they learned from the gamified online role-play and felt more confident in the use of teledentistry. This educational impact was mostly achieved through the online conversation within the role-play activity, where participants could improve their communication skills on a video teleconference platform.

“I feel like the online role-play was a unique form of learning. I believe that I gained confidence from the online communication with the simulated patient. I could develop skills to communicate effectively with real patients.” Participant 11, Year 2, Male

“I believe it supports us to train communication skills ... It allowed us to practice both listening and speaking skills more comprehensively.” Participant 4, Year 3, Male

Critical thinking and problem-solving skills

In addition to communication skills, participants reported that the challenges embedded in the role-play allowed them to enhance their critical thinking and problem-solving skills, which are required to deal with potential problems in the use of teledentistry.

"It was a way of training before experiencing real situations … It allowed us to think critically whether or not what we performed with the simulated patients was appropriate." Participant 7, Year 1, Female “It allowed us to learn how to effectively solve the arranged problems in simulated situation. We needed to solve problems in order to gather required information from the patient and think about how to deliver dental advice through teledentistry.” Participant 11, Year 2, Male

Subtheme 5.2: Self-perceived awareness in teledentistry

Participants believed that the gamified online role-play helped them recognize the necessity of teledentistry. The storytelling, or patient conditions, allowed learners to understand how teledentistry could provide both physical and psychological support for dental patients.

“From the activity, I would consider teledentistry as a convenient tool for communicating with patients, especially if a patient cannot go to a dental office”. Participant 5, Year 2, Female

“I learned about the benefits of teledentistry, particularly in terms of follow-up. The video conference platform could support information sharing, such as drawing images or presenting treatment plans, to patients.” Participant 8, Year 2, Female

A conceptual framework of learning experience within a gamified online role-play

Based on the qualitative findings, a conceptual framework was developed in which the gamified online role-play is conceptualized as a learning strategy supporting learners in implementing teledentistry in their clinical practice (Fig.  4 ).

Figure 4. The conceptual framework of key elements in designing a gamified online role-play.

The conceptual framework reveals the key elements to be considered in designing a gamified online role-play. Learner profile, learning settings, pedagogical components, and interactive functions are considered influential factors shaping user experience within the gamified online role-play. A well-designed learning activity supports learners in achieving the expected learning outcomes, which constitute the educational impact of the gamified online role-play. The contributions of these five key elements to the design of gamified online role-play were interpreted as follows:

Learner profile: Tailoring the design of gamified online role-plays for teledentistry training involves considering the background knowledge, skills, and experiences of target learners to ensure relevance and engagement.

Learning settings: Planning gamified online role-plays for teledentistry training involves selecting appropriate contexts, such as location and timing, to enhance accessibility and achieve learning outcomes effectively.

Pedagogical components: This element emphasizes the alignment between learning components and learning outcomes within gamified online role-plays, ensuring that the content, together with effective feedback design, supports learners in improving their competencies by learning from their mistakes.

Interactive functions: This element highlights the interactivity features integrated into gamified online role-plays, such as authenticity and entertaining components to enhance immersion and engagement, together with game difficulty calibrated for optimal flow. These features should keep learners engaged with the learning activities until the learning outcomes are achieved.

Educational impact: This element represents the expected learning outcomes, which inform the design of learning content and activities within gamified online role-plays. It can also be used to evaluate the efficacy of gamified online role-plays, reflecting how well the learning design aligns with the learning outcomes.

Discussion

A gamified online role-play can be considered a learning strategy for teledentistry on the basis of its educational impact. This pedagogical approach mimics real-life practice, allowing dental learners to gain experience in the use of teledentistry in simulated situations before interacting with actual patients. Role-play provides learners with opportunities to develop required competencies, especially communication and real-time decision-making skills, in a predictable and safe learning environment 20 , 23 , 46 . Potential obstacles can also be arranged for learners to deal with, leading to the enhancement of problem-solving skills 50 . In addition, the recognition of teledentistry benefits can enhance awareness and encourage its adoption and implementation, which can be explained by the technology acceptance model 51 . Therefore, a gamified online role-play with a robust design and implementation appears to have potential for enhancing self-perceived confidence and awareness in the use of teledentistry.

The pedagogical components comprised the learning content, complemented by assessment and feedback. Learners could develop their competence with engagement through the learning content, gamified through the storytelling of the online role-play 52 , 53 . Immediate feedback provided through the facial expressions and voice tone of the simulated patient allowed participants to learn from failure, considered a key feature of game-based learning 29 , 45 . The summative feedback provided by an instructor at the end of the role-play activity could support a debriefing process, enabling participants to reflect on their learning experience, which is considered important in simulation-based games 54 . These key considerations should be addressed early in the design of a gamified online role-play.

The interactive functions can be considered another key component for designing and evaluating the gamified online role-play 45 . Several participants enjoyed the learning process within the gamified online role-play and suggested adding more learning scenarios. In other words, this tool could engage learners with the instructional process, leading to the achievement of learning outcomes 29 , 45 . As challenge and randomness are recognized game elements 32 , 33 , this learning intervention provided a set of cards with obstacle tasks for learners to pick at random before interacting with the simulated patient, which participants perceived as a feature that made the role-play more challenging and engaging. This is consistent with previous research, where challenging content for simulated patients made learners more engaged with the learning process 55 . However, a balance between task challenges and learner competencies is certainly required in the design of learning activities 56 , 57 . The authenticity of the simulated patient and the immediate feedback could also affect game flow, leading to enhanced learner engagement 45 . Together, these elements engaged participants with the learning process, strengthening the educational impact.

The educational setting for implementing the gamified online role-play into the dental curriculum is another concern. This aspect has been recognized as significant in existing evidence 45 . As this research found no significant differences in any aspect among the three groups of learners, the learning intervention demonstrated potential for implementation at any point in the postgraduate dental curriculum. This argument is supported by previous evidence that role-play is adaptable for learning at any time, as it requires only a short learning period while providing learners with valuable experience prior to exposure to real-life scenarios 58 . This strategy also provides opportunities for learners who have questions or concerns to seek advice or guidance from their instructors 59 . Although the gamified online role-play can be arranged at any point in the program, the first academic year should be considered, so that dental learners are confident in implementing teledentistry in their clinical practice from the outset.

While the gamified online role-play demonstrated its strengths as an interactive learning strategy for teledentistry, a few potential drawbacks need to be addressed. The requirement for synchronous participation could limit the flexibility of access time for learners (synchronous interactivity limitation). With only one learner able to engage with a simulated patient at a time (limited participants), more simulated patients would be required for a large number of learners; otherwise, learners would need to wait their turn. Considerable time and resources are also required to prepare simulated patients 60 . Despite the use of trained and calibrated professional actors or actresses, inauthenticity may be perceived during role-plays, and achieving both interactional and clinical authenticity requires significant effort 46 . Future research could investigate asynchronous learning approaches utilizing a non-player character (NPC) controlled by an artificial intelligence system as the simulated patient. This setup would enable multiple learners to engage with the material at their own pace and at times convenient to them 29 . While there are potential concerns about using gamified online role-plays, this interactive learning intervention offers opportunities for dental professionals to enhance their teledentistry competency in a safe and engaging environment.

Despite the robust design and the data collection tools used to assure the reliability, validity, and transparency of this study, a few limitations remain, pointing to directions for further research. As this research recruited only postgraduate students to evaluate the feasibility of gamified online role-play in teledentistry training, further research should include not only experienced dental practitioners but also undergraduate students to confirm its potential use with different learner profiles. Additional learning scenarios from other dental specialties should also be included to validate its effectiveness, as different specialties may present unique limitations and variations. A randomized controlled trial with a robust design would also be required to compare the effectiveness of gamified online role-play with other approaches for training in the use of teledentistry.

Conclusions

This research supports the design and implementation of a gamified online role-play in dental education, as dental learners developed self-perceived confidence and awareness with satisfaction. A well-designed gamified online role-play is necessary to support learners in achieving the expected learning outcomes, and the conceptual framework developed in this research can serve as guidance for designing and implementing this interactive learning strategy in dental education. However, further research with a robust design is required to validate and ensure the educational impact of gamified online role-play in dental education. Additionally, efforts should be made to develop gamified online role-plays for asynchronous learning, to enhance the flexibility of the learning activities.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available due to information that could compromise the privacy of research participants.

References

Van Dyk, L. A review of telehealth service implementation frameworks. Int. J. Environ. Res. Public Health 11 (2), 1279–1298 (2014).

Bartz, C. C. Nursing care in telemedicine and telehealth across the world. Soins. 61 (810), 57–59 (2016).

Lin, G.S.S., Koh, S.H., Ter, K.Z., Lim, C.W., Sultana, S., Tan, W.W. Awareness, knowledge, attitude, and practice of teledentistry among dental practitioners during COVID-19: A systematic review and meta-analysis. Medicina (Kaunas). 58 (1), 130 (2022).

Wolf, T.G., Schulze, R.K.W., Ramos-Gomez, F., Campus, G. Effectiveness of telemedicine and teledentistry after the COVID-19 pandemic. Int. J. Environ. Res. Public Health. 19 (21), 13857 (2022).

Gajarawala, S. N. & Pelkowski, J. N. Telehealth benefits and barriers. J. Nurse Pract. 17 (2), 218–221 (2021).

Jampani, N. D., Nutalapati, R., Dontula, B. S. & Boyapati, R. Applications of teledentistry: A literature review and update. J. Int. Soc. Prev. Community Dent. 1 (2), 37–44 (2011).

Khan, S. A. & Omar, H. Teledentistry in practice: literature review. Telemed. J. E. Health. 19 (7), 565–567 (2013).

Baheti, M. J. B. S., Toshniwal, N. G. & Misal, A. Teledentistry: A need of the era. Int. J. Dent. Med. Res. 1 (2), 80–91 (2014).

Datta, N., Derenne, J., Sanders, M. & Lock, J. D. Telehealth transition in a comprehensive care unit for eating disorders: Challenges and long-term benefits. Int. J. Eat. Disord. 53 (11), 1774–1779 (2020).

Bursell, S. E., Brazionis, L. & Jenkins, A. Telemedicine and ocular health in diabetes mellitus. Clin. Exp. Optom. 95 (3), 311–327 (2012).

da Costa, C. B., Peralta, F. D. S. & Ferreira de Mello, A. L. S. How has teledentistry been applied in public dental health services? An integrative review. Telemed. J. E. Health. 26 (7), 945–954 (2020).

Heckemann, B., Wolf, A., Ali, L., Sonntag, S. M. & Ekman, I. Discovering untapped relationship potential with patients in telehealth: A qualitative interview study. BMJ Open. 6 (3), e009750 (2016).

Pérez-Noboa, B., Soledispa-Carrasco, A., Padilla, V. S. & Velasquez, W. Teleconsultation apps in the COVID-19 pandemic: The case of Guayaquil City, Ecuador. IEEE Eng. Manag. Rev. 49 (1), 27–37 (2021).

Wamsley, C. E., Kramer, A., Kenkel, J. M. & Amirlak, B. Trends and challenges of telehealth in an academic institution: The unforeseen benefits of the COVID-19 global pandemic. Aesthetic Surg. J. 41 (1), 109–118 (2020).

Jonasdottir, S. K., Thordardottir, I. & Jonsdottir, T. Health professionals’ perspective towards challenges and opportunities of telehealth service provision: A scoping review. Int. J. Med. Inform. 167 , 104862 (2022).

Tan, S. H. X., Lee, C. K. J., Yong, C. W. & Ding, Y. Y. Scoping review: Facilitators and barriers in the adoption of teledentistry among older adults. Gerodontology. 38 (4), 351–365 (2021).

Minervini, G. et al. Teledentistry in the management of patients with dental and temporomandibular disorders. BioMed. Res. Int. 2022 , 7091153 (2022).

Edirippulige, S. & Armfield, N. Education and training to support the use of clinical telehealth: A review of the literature. J. Telemed. Telecare. 23 (2), 273–282 (2017).

Mariño, R. & Ghanim, A. Teledentistry: A systematic review of the literature. J. Telemed. Telecare. 19 (4), 179–183 (2013).

Armitage-Chan, E. & Whiting, M. Teaching professionalism: Using role-play simulations to generate professionalism learning outcomes. J. Vet. Med. Educ. 43 (4), 359–363 (2016).

Spyropoulos, F., Trichakis, I. & Vozinaki, A.-E. A narrative-driven role-playing game for raising flood awareness. Sustainability. 14 (1), 554 (2022).

Jiang, W. K. et al. Role-play in endodontic teaching: A case study. Chin. J. Dent. Res. 23 (4), 281–288 (2020).

Vizeshfar, F., Zare, M. & Keshtkaran, Z. Role-play versus lecture methods in community health volunteers. Nurse Educ. Today. 79 , 175–179 (2019).

Nestel, D. & Tierney, T. Role-play for medical students learning about communication: Guidelines for maximising benefits. BMC Med. Educ. 7 , 3 (2007).

Gelis, A. et al. Peer role-play for training communication skills in medical students: A systematic review. Simulat. Health. 15 (2), 106–111 (2020).

Cornelius, S., Gordon, C. & Harris, M. Role engagement and anonymity in synchronous online role play. Int. Rev. Res. Open Distrib. Learn. 12 (5), 57–73 (2011).

Bell, M. Online role-play: Anonymity, engagement and risk. Educ. Med. Int. 38 (4), 251–260 (2001).

Sipiyaruk, K., Gallagher, J. E., Hatzipanagos, S. & Reynolds, P. A. A rapid review of serious games: From healthcare education to dental education. Eur. J. Dent. Educ. 22 (4), 243–257 (2018).

Sipiyaruk, K., Hatzipanagos, S., Reynolds, P. A. & Gallagher, J. E. Serious games and the COVID-19 pandemic in dental education: An integrative review of the literature. Computers. 10 (4), 42 (2021).

Morse, J.M., Niehaus, L. Mixed Method Design: Principles and Procedures. (2016).

Creswell, J. W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches 3rd edn. (SAGE Publications, 2009).

Cheng, V. W. S., Davenport, T., Johnson, D., Vella, K. & Hickie, I. B. Gamification in apps and technologies for improving mental health and well-being: Systematic review. JMIR Ment. Health. 6 (6), e13717 (2019).

Gallego-Durán, F. J. et al. A guide for game-design-based gamification. Informatics. 6 (4), 49 (2019).

Gee, J. P. Learning and games. In The Ecology of Games: Connecting Youth, Games, and Learning (ed. Salen, K.) 21–40 (MIT Press, 2008).

Cheung, K. L., ten Klooster, P. M., Smit, C., de Vries, H. & Pieterse, M. E. The impact of non-response bias due to sampling in public health studies: A comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health. BMC Public Health. 17 (1), 276 (2017).

Murairwa, S. Voluntary sampling design. Int. J. Adv. Res. Manag. Social Sci. 4 (2), 185–200 (2015).

Chow, S.-C., Shao, J., Wang, H. & Lokhnygina, Y. Sample Size Calculations in Clinical Research (CRC Press, 2017).

Palinkas, L. A. et al. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration Policy Mental Health Mental Health Services Res. 42 (5), 533–544 (2015).

McIlvried, D. E., Prucka, S. K., Herbst, M., Barger, C. & Robin, N. H. The use of role-play to enhance medical student understanding of genetic counseling. Genet. Med. 10 (10), 739–744 (2008).

Schlegel, C., Woermann, U., Shaha, M., Rethans, J.-J. & van der Vleuten, C. Effects of communication training on real practice performance: A role-play module versus a standardized patient module. J. Nursing Educ. 51 (1), 16–22 (2012).

Manzoor, I. M. F. & Hashmi, N. R. Medical students’ perspective about role-plays as a teaching strategy in community medicine. J. Coll. Physicians Surg. Pak. 22 (4), 222–225 (2012).

Cornes, S., Gelfand, J. M. & Calton, B. Foundational telemedicine workshop for first-year medical students developed during a pandemic. MedEdPORTAL. 17 , 11171 (2021).

King, J., Hill, K. & Gleason, A. All the world's a stage: Evaluating psychiatry role-play based learning for medical students. Austral. Psychiatry. 23 (1), 76–79 (2015).

Arayapisit, T. et al. An educational board game for learning orofacial spaces: An experimental study comparing collaborative and competitive approaches. Anatomical Sci. Educ. 16 (4), 666–676 (2023).

Sipiyaruk, K., Hatzipanagos, S., Vichayanrat, T., Reynolds, P.A., Gallagher, J.E. Evaluating a dental public health game across two learning contexts. Educ. Sci. 12 (8), 517 (2022).

Pilnick, A. et al. Using conversation analysis to inform role play and simulated interaction in communications skills training for healthcare professionals: Identifying avenues for further development through a scoping review. BMC Med. Educ. 18 (1), 267 (2018).

Lane, C. & Rollnick, S. The use of simulated patients and role-play in communication skills training: A review of the literature to August 2005. Patient Educ. Counseling. 67 (1), 13–20 (2007).

Gale, N. K., Heath, G., Cameron, E., Rashid, S. & Redwood, S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med. Res. Methodol. 13 (1), 117 (2013).

Ritchie, J., Lewis, J., Nicholls, C. M. & Ormston, R. Qualitative Research Practice: A Guide for Social Science Students and Researchers (Sage, 2014).

Chen, J. C. & Martin, A. R. Role-play simulations as a transformative methodology in environmental education. J. Transform. Educ. 13 (1), 85–102 (2015).

Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. Manag. Inform. Syst. Quart. 13 (3), 319–340 (1989).

Novak, E., Johnson, T. E., Tenenbaum, G. & Shute, V. J. Effects of an instructional gaming characteristic on learning effectiveness, efficiency, and engagement: Using a storyline for teaching basic statistical skills. Interact. Learn. Environ. 24 (3), 523–538 (2016).

Marchiori, E. J. et al. A narrative metaphor to facilitate educational game authoring. Comput. Educ. 58 (1), 590–599 (2012).

Luctkar-Flude, M. et al. Effectiveness of debriefing methods for virtual simulation: A systematic review. Clin. Simulat. Nursing. 57 , 18–30 (2021).

Joyner, B. & Young, L. Teaching medical students using role play: Twelve tips for successful role plays. Med. Teach. 28 (3), 225–229 (2006).

Csikszentmihalyi, M. Flow: The Psychology of Optimal Performance (HarperCollins Publishers, 1990).

Buajeeb, W., Chokpipatkun, J., Achalanan, N., Kriwattanawong, N. & Sipiyaruk, K. The development of an online serious game for oral diagnosis and treatment planning: Evaluation of knowledge acquisition and retention. BMC Med. Educ. 23 (1), 830 (2023).

Littlefield, J. H., Hahn, H. B. & Meyer, A. S. Evaluation of a role-play learning exercise in an ambulatory clinic setting. Adv. Health. Sci. Educ. Theory Pract. 4 (2), 167–173 (1999).

Alkin, M. C. & Christie, C. A. The use of role-play in teaching evaluation. Am. J. Evaluat. 23 (2), 209–218 (2002).

Lovell, K. L., Mavis, B. E., Turner, J. L., Ogle, K. S. & Griffith, M. Medical students as standardized patients in a second-year performance-based assessment experience. Med. Educ. Online. 3 (1), 4301 (1998).

Acknowledgements

The authors would like to express their sincere gratitude to the participants for their contributions to this research, and to thank the experts who provided helpful suggestions during the validation of the data collection tools.

This research project was funded by the Faculty of Dentistry, Mahidol University. The APC was funded by Mahidol University.

Author information

Authors and affiliations

Department of Advanced General Dentistry, Faculty of Dentistry, Mahidol University, Bangkok, Thailand

Chayanid Teerawongpairoj & Chanita Tantipoj

Department of Orthodontics, Faculty of Dentistry, Mahidol University, Bangkok, Thailand

Kawin Sipiyaruk

Contributions

Conceptualization, C.Te., C.Ta., and K.S.; methodology, C.Te., C.Ta., and K.S.; validation, C.Te., C.Ta., and K.S.; investigation, C.Te. and K.S.; formal analysis, C.Te., C.Ta., and K.S.; resources, C.Te., C.Ta., and K.S.; data curation, C.Ta. and K.S.; writing-original draft preparation, C.Te., C.Ta., and K.S.; writing-review and editing, C.Te., C.Ta., and K.S. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Kawin Sipiyaruk .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information 1.

Supplementary Information 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Teerawongpairoj, C., Tantipoj, C. & Sipiyaruk, K. The design and evaluation of gamified online role-play as a telehealth training strategy in dental education: an explanatory sequential mixed-methods study. Sci Rep 14 , 9216 (2024). https://doi.org/10.1038/s41598-024-58425-9

Download citation

Received : 30 September 2023

Accepted : 28 March 2024

Published : 22 April 2024

DOI : https://doi.org/10.1038/s41598-024-58425-9

Keywords

  • Dental education
  • Distance learning
  • Game-based learning
  • Gamification

By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.
