Research versus practice in quality improvement? Understanding how we can bridge the gap
Lisa R Hirschhorn, Rohit Ramaswamy, Mahesh Devnani, Abraham Wandersman, Lisa A Simpson, Ezequiel Garcia-Elorrio
Address reprint requests to: Lisa R Hirschhorn, Department of Medical Social Sciences, 625 N Michigan Ave 14-013, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA. Tel: 312-503-1797; E-mail: [email protected]
Received 2017 Aug 27; Revised 2018 Jan 17; Accepted 2018 Feb 5; Issue date 2018 Apr.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License ( http://creativecommons.org/licenses/by-nc/4.0/ ), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]
The gap between implementers and researchers of quality improvement (QI) has hampered the degree and speed of change needed to reduce avoidable suffering and harm in health care. Underlying causes of this gap include differences in goals and incentives, preferred methodologies, the level and types of evidence prioritized, and the audiences targeted. The Salzburg Global Seminar on ‘Better Health Care: How do we learn about improvement?’ brought together researchers, policy makers, funders, implementers and evaluators from low-, middle- and high-income countries to explore how to increase the impact of QI. In this paper, we describe some of the reasons for this gap and offer suggestions to better bridge the chasm between researchers and implementers. Effectively bridging this gap can increase the generalizability of QI interventions, accelerating the spread of effective approaches while also strengthening the local work of implementers. Increasing the effectiveness of research and work in the field will support the knowledge translation needed to achieve quality Universal Health Coverage and the Sustainable Development Goals.
Keywords: improvement, learning, complex adaptive systems, implementation, improvement science
Introduction
After mixed results from the Millennium Development Goals (MDGs) strategy, the global agenda recognized the critical role of ensuring not just access to but quality of health care delivery. As a result, quality and improvement have become a core focus within the Universal Health Coverage movement to achieve the goal of better population health and the Sustainable Development Goals (SDGs) [1–3]. In low- and middle-income countries, quality improvement (QI) is used to identify performance gaps and implement improvement interventions to address these problems at the local, subnational and national levels. Methods used by these improvement interventions range from process improvements using incremental, cyclically implemented changes appropriate to the local context, to system-level interventions and policies to improve and sustain quality. Regardless of the scope of improvement efforts and methods employed, the impact and spread of QI has often fallen short. Causes of these lost opportunities include how decisions about improvement interventions are made, the methodology for measuring the effectiveness of the intervention, what data are collected and used, and how information on both the implementation and the intervention is communicated to drive spread and knowledge translation [4, 5]. Practitioners engaged in improvement in their organizations are frustrated by research reviews that often show a lack of conclusive evidence about the effectiveness of QI when many of them see the local benefits of their work. Researchers complain about the lack of rigor in the application of QI methods in practice settings and about poor documentation of the implementation process [6].
There is a growing realization of the need for common ground between implementers and researchers that promotes use of more systematic and rigorous methods to assess the improvement intervention effectiveness when appropriate but does not demand that all QI implementations be subject to the experimental methods commonly considered to be the gold standard of evidence. To explore the causes of this gap and address how to bridge the gap and better engage the targeted consumers of generated knowledge, including communities, governments and funders, a session ‘Better Health Care: How do we learn about improvement?’ was organized by Salzburg Global Seminar (SGS) [ 7 ]. The session brought together experts from a range of fields and organizations, including researchers, improvement implementers from the field, policy makers, and representatives from countries and international organizations.
For a partnership between researchers and implementers to become more consistent in improvement projects and studies, the incentives and priorities of each of these groups need to be better aligned in QI work and its evaluation. In this paper, we build on the Salzburg discussions, existing literature, and our own experience to explore the barriers to collaboration and offer suggestions on how to start to address these barriers. In the spirit of quality improvement, we hope that these recommendations are adopted and tried by groups interested in advancing the research and the practice of QI.
Why the gap exists
Both groups use data to evaluate whether improvements have taken place and are interested in the question of ‘did it work?’ However, gaps have arisen in part because of differences in goals, evidence needs, methods used, and incentives for results and dissemination.
As we consider the major differences between researchers and implementers, we should recognize that there is not a clearly defined dichotomy between these two groups. Rather, those focused on improvement are part of a continuum and are driven by a range of goals, from driving and demonstrating local improvements to attributing these improvements to QI methods that can be generalized and spread, as illustrated in Table 1, which also describes differences in incentives, discussed further below. Organization-based implementers focus on QI projects, where the primary goal is driving change on a local problem to improve care. Policy and decision makers' goals are broader improvement, needing evidence for current and future decisions on what methods and implementation strategies to use. Researchers have a goal of developing new and generalizable knowledge about the effectiveness of QI methods.
Table 1 Selected participants and stakeholders in quality improvement work and research, and their goals and incentives

| Stakeholder | Goals | Incentives |
| --- | --- | --- |
| QI team members and institutional champions | Implement effective QI projects and promote and support change in their institutions through good improvement practice | Local improvement and dissemination of the best local knowledge about what works |
| Policy makers | Prioritization to invest in improvement projects based on the best available evidence from academic research and practical wisdom | Make effective, yet timely and practical decisions given constraints on time and knowledge to choose and spread efficient, effective and sustainable improvement |
| Embedded (practice-based) researchers and QI implementers engaged in research | Drive improvement in their own settings, advance the best improvement methods in their own settings and create generalizable knowledge making a plausible case linking QI activities to observed outcomes for broader dissemination | Create practical yet generalizable knowledge linking improvement activities to observed outcomes for dissemination to both practice and research audiences |
| Academic and other researchers | Establish strong causal relationships between QI and outcomes, promoting more rigorous experimental research in QI | Use of rigorous science that can be published in peer-reviewed journals and establish objective standards of evidence |
Incentives for results and dissemination
The differences in goals and evidence are related to often competing incentives. Implementers are incentivized to improve quality and meet the demands of stakeholders, whether local communities, government or funders. Researchers are rewarded through dissemination of evidence in high-impact peer-reviewed journals, research grants and academic promotions. Policy makers are rewarded by timely response to gaps with broad visible changes in their populations. Timeframes of these incentives are also often different, with the most rigorous studies taking years to measure impact, followed by careful analysis and dissemination. Implementers and policy makers, however, are often under pressure to show short-term change and respond to new and emerging issues even as they continue with existing improvement work.
The goals of documentation and dissemination of projects can also differ between researchers and implementers and their stakeholders. There is strong recognition that the evidence generated by even the best QI efforts is not effectively translated into further spread and adoption [8]. This is because implementers working on QI interventions in their organizations are incentivized by improvement and do not usually face a demand to document their work beyond communication with organizational leaders. While there are growing venues for sharing case reports through learning collaboratives and local meetings designed to facilitate peer learning, this documentation typically involves a description of the process of implementation, but not at a level of detail or rigor of value to researchers and the broader community. There are a number of disincentives for implementers to increase the rigor and detail of their local work, including competing demands to deliver services and ongoing improvement, and the paucity of journals interested in publishing even well-documented local results, because journals prioritize rigorous results of evaluations with strong designs involving carefully constructed QI research studies. Researchers are incentivized toward more academic dissemination through these peer-reviewed journals and presentations at conferences. This misalignment results in practitioners being deprived of access to broader venues to disseminate their work and researchers losing rich contextual data that are critically important to evaluate the effectiveness of QI.
Evidence needed and methods prioritized
The differences in the goals and incentives of different stakeholders lead to differences in the amount of evidence that is considered adequate and the methods used to generate this evidence. Implementers are interested in evidence of change in their local projects, with less emphasis on transferring or generalizing what they did for use in other settings. They may rely on a combination of pre- and post-intervention data, QI statistical methods such as run charts, and tacit organizational knowledge to assess the evidence of change in their projects. Policy makers have an interest in evidence from QI work that is robust enough to inform resource allocation, but may still focus on a specific geography rather than on generalizability at scale. They are interested in generalizable knowledge about successful QI methods, but are sensitive to the burden, cost and time that requiring rigorous research methods imposes on implementing groups.
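To illustrate the kind of lightweight evidence implementers often rely on, the sketch below implements one commonly taught run-chart rule: a ‘shift’ is signaled by six or more consecutive points all above, or all below, the baseline median, with points on the median skipped. This is a hypothetical Python sketch for illustration only; the function name and the six-point threshold are conventions assumed here, not part of the original paper.

```python
def detect_shift(values, min_run=6):
    """Return True if the series contains a run of `min_run` or more
    consecutive points all on the same side of the series median."""
    # Compute the median of the series (the run-chart baseline).
    ordered = sorted(values)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)

    run_length, last_side = 0, 0
    for v in values:
        side = (v > median) - (v < median)  # +1 above, -1 below, 0 on median
        if side == 0:
            continue  # points on the median neither break nor extend a run
        run_length = run_length + 1 if side == last_side else 1
        last_side = side
        if run_length >= min_run:
            return True
    return False
```

In practice, a rule like this would be read off an annotated run chart together with local knowledge of when the intervention started, rather than computed in isolation.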
Researchers aim for evidence that is robust enough to provide globally relevant conclusions with limited threats to internal validity. This group is most supportive of the use of rigorous experimental research designs to generate the highest possible standards of evidence. Traditionally, this has been limited to a small set of rigid experimental designs with appropriate controls or comparison groups, driven in part by research funders and academic standards requiring that change be attributable to the improvement interventions. This set of designs has been expanding in the past few years as a better understanding of the value of quasi-experimental methods has emerged [9, 10].
Why better alignment is needed
QI interventions differ from many fixed clinical or public health interventions [11]. In this supplement, Ramaswamy and others describe QI interventions as complex (multi-pronged and context-specific) interventions in complex systems (non-linear pathways and emergent behaviors). For better learning from QI, implementers, policy makers and researchers all need to know not just effectiveness (the focus of local measurement, outcomes research and impact evaluation) but also ‘how and why’ the change happened (implementation), cost and sustainability, ensuring that the evidence produced will be more relevant to stakeholders at the local and broader levels. Therefore, finding common ground through ‘development of a culture of partnership’ [12] to co-identify appropriate methods and data collection to understand and disseminate implementation strategies is critical to inform how to create the different knowledge products: generalizable evidence for dissemination (researchers), insights into how to scale (policy makers) and how to sustain the improvements (implementers) [13]. A well-known and commonly cited example is the Surgical Safety Checklist, which was found to improve adherence to evidence-based practices and save lives across a range of settings [14]. However, attempts to replicate these successes were not always effective, since capturing generalizable knowledge on how to introduce and support the implementation of this intervention with fidelity was not part of the original research dissemination [15], a lesson understood by the original researchers and addressed through accompanying toolkits [16].
Another important area where collaboration between implementers and researchers is needed to improve learning from QI is in understanding the impact of different contextual factors, to identify which aspects of an improvement intervention are generalizable, which are context-specific and which are critical to address when planning replication. During the seminar, a study of antenatal corticosteroids (ANCS), an intervention found in higher-income settings to reduce death among premature infants, was discussed to identify how contextual factors can be better addressed through local knowledge to inform implementation [17]. The randomized controlled trial showed that implementation of ANCS in low-resource settings resulted in increased mortality among some of the infants who were given steroids; the published conclusion was that ANCS was not a recommended improvement intervention in these settings. The group identified that the translation of ANCS use from resource-richer settings did not consider the different contextual factors requiring adaptation, such as the lack of capacity to accurately determine the prematurity needed to establish eligibility for steroids.
Starting the work to bridge the gap
Based on the reasons for the gaps identified above, we recommend a number of initial steps to better bridge the gap between researchers and implementers:
Aligning project goals and joint planning: Before QI projects are launched, the initial work must start with implementers and researchers discussing and agreeing on the goals and objectives of the work, including and beyond local improvement. In addition to alignment of improvement goals, all stakeholders must be engaged at the start of the QI project to agree on the purposes and uses of the results: local learning, broader dissemination or both. This work needs to happen at the design phase and continue with ongoing planned communication throughout the work. This will ensure that all stakeholders are jointly engaged in identifying the most appropriate research questions and the most appropriate methods to answer them.
Choosing the right research design: The joint framing of goals and research questions can lead to the selection of evaluation and research designs at an appropriate, mutually agreed upon level of rigor, including the right research methodology for success [18]. This balancing of rigor versus flexibility, described in the meeting as a ‘bamboo scaffold that bends in the wind’, can only be accomplished when there is an open discussion of trade-offs between investments in data collection for research and data collection for demonstrating local improvements. Detailed documentation of implementation approaches is time consuming and resource intensive, and cannot be routinely expected for every project. On the other hand, some improvement in documentation as part of routine practice will benefit practitioners by providing important insights about local sustainability, and can be used by researchers to assess generalizability, attribution and scale.
The need to understand both process and context in the evaluation and study of QI interventions also cannot be met without engaging both researchers and practitioners in the process [13]. The knowledge about how the project was implemented, and what was relevant to the context, often resides with those responsible for implementation. However, as mentioned previously, implementers often have neither the incentives nor the support to systematically document and disseminate this knowledge in a way that makes it available for general use. Researchers can play a key role in QI-research integration by supporting systematic documentation of the implementation process in addition to an evaluation of outcomes, and by partnering with implementers to make this happen. Introduction of adaptive designs such as SMART trials into improvement research may also offer a common ground where improvement implementers and researchers can collaborate, using data to make mid-course changes to the implementation design.
Building implementer research capacity: Building the capacity of implementers as potential producers and better consumers of research and evaluation results is another important approach to bridging the gap. For example, empowerment evaluation is designed to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement and evaluate their own programs [19]. Building capacity within implementing organizations through technical support provided by researchers for interested implementers can establish a viable infrastructure for practitioners and researchers to work together more effectively. For example, multi-year research-practice partnerships in facilities in Kenya have led to sustainable QI programs with dissemination of methods and results through co-authored peer-reviewed journal articles and conference presentations [20]. Similar results were seen for research capacity building targeting implementers in the Africa Health Initiative in five countries in Africa [21]. Support for practice-based researchers to build their capacity in QI and in process evaluation using implementation science methods can also increase the potential of improvement projects to produce the knowledge needed about the implementation to spread learning within and beyond their organizations.
Aligning incentives to drive collaboration: Creating areas of shared incentives will require initiatives from funders and universities to recognize the higher value of co-produced research, reward capacity building of researchers in the field and fund innovative models of embedded research, in which researchers are part of or embedded in the implementing organization [22]. In addition, offering opportunities for meaningful participation in research and building capacity for this work among implementers has also been associated with better improvement and dissemination [23].
Simplifying documentation for dissemination of learning: As mentioned earlier, it is useful for both implementers and researchers if documenting the implementation of QI programs becomes part of routine practice. However, this will not happen without simplifying documentation standards. The SQUIRE and TIDieR guidelines are very helpful for academic publications. However, they are not always a good fit for projects whose primary purpose is not research but which have the potential to add to the knowledge needed to improve QI [24, 25]. Researchers could partner with implementers to develop simpler, practice-based reporting guidelines and to create other venues, such as through existing organizations focused on quality and improvement, where methods and results could be posted using these guidelines without a formal peer-review process. Templates and examples could be provided to improve the quality of documentation, as well as editorial staff to assist with structure and formatting. The incentive for implementers is to get their stories told, while at the same time providing an opportunity for researchers to get data on where to focus further research. In addition, there are growing options to share knowledge and research findings, such as the WHO Global Learning Laboratory for Quality UHC, which provides a forum for implementers to make their work available to the broader community [26].
Improving learning from and the effectiveness of QI work requires involvement and collaboration between researchers and practitioners. Researchers can advance the field by creating generalizable knowledge on the effectiveness of interventions and on implementation strategies, while practitioners improve outcomes on the ground by implementing QI interventions. Through increased collaboration, more systematic evaluations of interventions in local contexts and better design of research will produce the generalizable knowledge needed to increase the impact of QI. For this to take place, there needs to be an intentional effort to address the gaps that challenge researchers and practitioners working together. This can occur by aligning incentives, increasing the value and utility of produced research to implementers and, as a shared community, developing new guidance to bring these different groups into more effective collaboration. The growing experience in QI and improvement science offers many opportunities for better collaboration between researchers and implementers, increasing the value of this partnership in accelerating progress toward quality Universal Health Coverage and the Sustainable Development Goals.
M.D. received financial support from SGS to attend this seminar.
- 1. World Health Organization. Why quality UHC? http://www.who.int/servicedeliverysafety/areas/qhc/quality-uhc/en/ (5 January 2018, date last accessed).
- 2. Leatherman S, Ferris TG, Berwick D et al. The role of quality improvement in strengthening health systems in developing countries. Int J Qual Health Care 2010;22:237–43.
- 3. Victora CG, Requejo JH, Barros AJD et al. Countdown to 2015: a decade of tracking progress for maternal, newborn, and child survival. Lancet 2016;387:2049–59.
- 4. Kruk ME, Pate M, Mullan Z. Introducing the Lancet Global Health Commission on high-quality health systems in the SDG era. Lancet Glob Health 2017;5:e480–1.
- 5. Dixon-Woods M, Martin GP. Does quality improvement improve quality? Future Hosp J 2016;3:191–4.
- 6. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff 2005;24:138–50.
- 7. Salzburg Global Seminar. Better Health Care: How do we learn about improvement? Salzburg Global Seminar Series, Vol. 55, 2016. Available from: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&NEWS=N&PAGE=fulltext&D=medl&AN=19346632
- 8. Tabak RG, Khoong EC, Chambers DA et al. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337–50.
- 9. Parry GJ, Carson-Stevens A, Luff DF et al. Recommendations for evaluation of health care improvement initiatives. Acad Pediatr 2013;13:S23–30. http://dx.doi.org/10.1016/j.acap.2013.04.007
- 10. Kairalla JA, Coffey CS, Thomann MA et al. Adaptive trial designs: a review of barriers and opportunities. Trials 2012;13:145.
- 11. Davidoff F. Improvement interventions are social treatments, not pills. Ann Intern Med 2014;161:526–7.
- 12. Marshall MN. Bridging the ivory towers and the swampy lowlands; increasing the impact of health services research on quality improvement. Int J Qual Health Care 2014;26:1–5.
- 13. Vindrola-Padros C, Pape T, Utley M et al. The role of embedded research in quality improvement: a narrative review. BMJ Qual Saf 2017;26:70–80.
- 14. Haynes A, Weiser T, Berry W et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009;360:491–9.
- 15. Gillespie BM, Marshall A. Implementation of safety checklists in surgery: a realist synthesis of evidence. Implement Sci 2015;10:137.
- 16. World Health Organization. WHO surgical safety checklist and implementation manual. http://www.who.int/patientsafety/safesurgery/ss_checklist/en/ (2 October 2014, date last accessed).
- 17. Althabe F, Belizán JM, McClure EM et al. A population-based, multifaceted strategy to implement antenatal corticosteroid treatment versus standard care for the reduction of neonatal mortality due to preterm birth in low-income and middle-income countries: the ACT cluster-randomised trial. Lancet 2015;385:629–39. doi:10.1016/S0140-6736(14)61651-2
- 18. Colquhoun H, Leeman J, Michie S et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems and policies. Implement Sci 2014;9:154.
- 19. Fetterman DM, Wandersman A. Empowerment evaluation. Eval Pract 1994;15:1–15.
- 20. Ramaswamy R, Rothschild C, Alabi F et al. Quality in practice: using value stream mapping to improve quality of care in low-resource facility settings. Int J Qual Health Care 2017;29:961–5.
- 21. Hedt-Gauthier BL, Chilengi R, Jackson E et al. Research capacity building integrated into PHIT projects: leveraging research and research funding to build national capacity. BMC Health Serv Res 2017;17:825. https://doi.org/10.1186/s12913-017-2657-6
- 22. Ghaffar A, Langlois E, Rasanathan K et al. Strengthening health systems through embedded research. Bull World Health Organ 2017;95:87.
- 23. Sherr K, Requejo JH, Basinga P. Implementation research to catalyze advances in health systems strengthening in sub-Saharan Africa: the African Health Initiative. BMC Health Serv Res 2013;13:S1.
- 24. Revised Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) Guidelines. http://squire-statement.org/index.cfm?fuseaction=Page.ViewPage&PageID=471 (24 December 2017, date last accessed).
- 25. Hoffmann TC, Glasziou PP, Boutron I et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:1–12.
- 26. World Health Organization. WHO Global Learning Laboratory for Quality UHC. http://www.who.int/servicedeliverysafety/areas/qhc/gll/en/ (15 December 2017, date last accessed).
Quality improvement into practice
- Adam Backhouse , quality improvement programme lead 1 ,
- Fatai Ogunlayi , public health specialty registrar 2
- 1 North London Partners in Health and Care, Islington CCG, London N1 1TH, UK
- 2 Institute of Applied Health Research, Public Health, University of Birmingham, B15 2TT, UK
- Correspondence to: A Backhouse adam.backhouse{at}nhs.net
What you need to know
Thinking of quality improvement (QI) as a principle-based approach to change provides greater clarity about (a) the contribution QI offers to staff and patients, (b) how to differentiate it from other approaches, and (c) the benefits of using QI together with other change approaches
QI is not a silver bullet for all changes required in healthcare: it has great potential to be used together with other change approaches, either concurrently (using audit to inform iterative tests of change) or consecutively (using QI to adapt published research to local context)
As QI becomes established, opportunities for these collaborations will grow, to the benefit of patients.
The benefits to front line clinicians of participating in quality improvement (QI) activity are promoted in many health systems. QI can represent a valuable opportunity for individuals to be involved in leading and delivering change, from improving individual patient care to transforming services across complex health and care systems. 1
However, it is not clear that this promotion of QI has created greater understanding of QI or widespread adoption. QI largely remains an activity undertaken by experts and early adopters, often in isolation from their peers. 2 There is a danger of a widening gap between this group and the majority of healthcare professionals.
This article aims to make it easier for those new to QI to understand what it is, where it fits with other approaches to improving care (such as audit or research) and when best to use a QI approach, so that the relevance and usefulness of QI in delivering better outcomes for patients becomes clearer.
How this article was made
AB and FO are both specialist quality improvement practitioners and have developed their expertise working in QI roles for a variety of UK healthcare organisations. The analysis presented here arose from AB and FO’s observations of the challenges faced when introducing QI, with healthcare providers often unable to distinguish between QI and other change approaches, making it difficult to understand what QI can do for them.
How is quality improvement defined?
There are many definitions of QI (box 1). The BMJ’s Quality Improvement series uses the Academy of Medical Royal Colleges definition. 6 Rather than viewing QI as a single method or set of tools, it can be more helpful to think of QI as based on a set of principles common to many of these definitions: a systematic continuous approach that aims to solve problems in healthcare, improve service provision, and ultimately provide better outcomes for patients.
Definitions of quality improvement
Improvement in patient outcomes, system performance, and professional development that results from a combined, multidisciplinary approach in how change is delivered. 3
The delivery of healthcare with improved outcomes and lower cost through continuous redesigning of work processes and systems. 4
Using a systematic change method and strategies to improve patient experience and outcome. 5
To make a difference to patients by improving safety, effectiveness, and experience of care by using understanding of our complex healthcare environment, applying a systematic approach, and designing, testing, and implementing changes using real time measurement for improvement. 6
In this article we discuss QI as an approach to improving healthcare that follows the principles outlined in box 2 ; this may be a useful reference to consider how particular methods or tools could be used as part of a QI approach.
Principles of QI
Primary intent— To bring about measurable improvement to a specific aspect of healthcare delivery, often with evidence or theory of what might work but requiring local iterative testing to find the best solution. 7
Employing an iterative process of testing change ideas— Adopting a theory of change which emphasises a continuous process of planning and testing changes, studying and learning from comparing the results to a predicted outcome, and adapting hypotheses in response to results of previous tests. 8 9
Consistent use of an agreed methodology— Many different QI methodologies are available; commonly cited methodologies include the Model for Improvement, Lean, Six Sigma, and Experience-based Co-design. 4 Systematic review shows that the choice of tools or methodologies has little impact on the success of QI provided that the chosen methodology is followed consistently. 10 Though there is no formal agreement on what constitutes a QI tool, the term generally covers activities such as process mapping that can be used within a range of QI methodological approaches. NHS Scotland’s Quality Improvement Hub has a glossary of commonly used tools in QI. 11
Empowerment of front line staff and service users— QI work should engage staff and patients by providing them with the opportunity and skills to contribute to improvement work. Recognition of this need often manifests in drives from senior leadership or management to build QI capability in healthcare organisations, but it also requires that frontline staff and service users feel able to make use of these skills and take ownership of improvement work. 12
Using data to drive improvement— To drive decision making by measuring the impact of tests of change over time and understanding variation in processes and outcomes. Measurement for improvement typically prioritises this time-series narrative over concerns about the exactness and completeness of data. 13 14
Scale-up and spread, with adaptation to context— As interventions tested using a QI approach are scaled up and the degree of belief in their efficacy increases, it is desirable that they spread outward and be adopted by others. Key to successful diffusion of improvement is the adaption of interventions to new environments, patient and staff groups, available resources, and even personal preferences of healthcare providers in surrounding areas, again using an iterative testing approach. 15 16
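The "using data to drive improvement" principle is commonly put into practice with a run chart: plot a measure over time against the baseline median and look for non-random signals such as a shift (six or more consecutive points on one side of the median). A minimal sketch in Python, using entirely hypothetical weekly counts:

```python
from statistics import median

def detect_shift(values, centre=None, run_length=6):
    """Run chart shift rule: `run_length` or more consecutive points all
    above (or all below) the centre line suggest non-random change.
    Points exactly on the centre line neither extend nor break a run."""
    if centre is None:
        centre = median(values)
    run, side = 0, 0
    for v in values:
        if v == centre:
            continue  # skip points on the centre line
        s = 1 if v > centre else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical weekly counts of missed physiotherapy sessions
baseline = [12, 9, 11, 10, 13, 8, 12, 10]
after_change = [10, 7, 6, 7, 5, 6, 4]

print(detect_shift(baseline))                               # False: common-cause variation only
print(detect_shift(after_change, centre=median(baseline)))  # True: sustained run below baseline median
```

In real improvement work the centre line is established from baseline data and extended forward, as the second call does here; exactness of the data matters less than seeing whether the pattern of variation has changed.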
What other approaches to improving healthcare are there?
Taking considered action to change healthcare for the better is not new, but QI as a distinct approach to improving healthcare is a relatively recent development. There are many well established approaches to evaluating and making changes to healthcare services in use, and QI will only be adopted more widely if it offers a new perspective or an advantage over other approaches in certain situations.
A non-systematic literature scan identified the following other approaches for making change in healthcare: research, clinical audit, service evaluation, and clinical transformation. We also identified innovation as an important catalyst for change, but we did not consider it an approach to evaluating and changing healthcare services so much as a catch-all term for describing the development and introduction of new ideas into the system. A summary of the different approaches and their definitions is shown in box 3 . Many have elements in common with QI, but there are important differences in both intent and application. To be useful to clinicians and managers, QI must find a role within healthcare that complements research, audit, service evaluation, and clinical transformation while retaining the core principles that differentiate it from these approaches.
Alternatives to QI
Research— The attempt to derive generalisable new knowledge by addressing clearly defined questions with systematic and rigorous methods. 17
Clinical audit— A way to find out if healthcare is being provided in line with standards and to let care providers and patients know where their service is doing well, and where there could be improvements. 18
Service evaluation— A process of investigating the effectiveness or efficiency of a service with the purpose of generating information for local decision making about the service. 19
Clinical transformation— An umbrella term for more radical approaches to change; a deliberate, planned process to make dramatic and irreversible changes to how care is delivered. 20
Innovation— To develop and deliver new or improved health policies, systems, products and technologies, and services and delivery methods that improve people’s health. Health innovation responds to unmet needs by employing new ways of thinking and working. 21
Why do we need to make this distinction for QI to succeed?
Improvement in healthcare is 20% technical and 80% human. 22 Essential to that 80% is clear communication, clarity of approach, and a common language. Without this shared understanding of QI as a distinct approach to change, QI work risks straying from the core principles outlined above, making it less likely to succeed. If practitioners cannot communicate clearly with their colleagues about the key principles and differences of a QI approach, there will be mismatched expectations about what QI is and how it is used, lowering the chance that QI work will be effective in improving outcomes for patients. 23
There is also a risk that the language of QI is adopted to describe change efforts regardless of their fidelity to a QI approach, either due to a lack of understanding of QI or a lack of intention to carry it out consistently. 9 Poor fidelity to the core principles of QI reduces its effectiveness and makes its desired outcome less likely, leading to wasted effort by participants and decreasing its credibility. 2 8 24 This in turn further widens the gap between advocates of QI and those inclined to scepticism, and may lead to missed opportunities to use QI more widely, consequently leading to variation in the quality of patient care.
Without articulating the differences between QI and other approaches, there is a risk of not being able to identify where a QI approach can best add value. Conversely, we might be tempted to see QI as a “silver bullet” for every healthcare challenge when a different approach may be more effective. In reality it is not clear that QI will be fit for purpose in tackling all of the wicked problems of healthcare delivery and we must be able to identify the right tool for the job in each situation. 25 Finally, while different approaches will be better suited to different types of challenge, not having a clear understanding of how approaches differ and complement each other may mean missed opportunities for multi-pronged approaches to improving care.
What is the relationship between QI and other approaches such as audit?
Academic journals, healthcare providers, and “arms-length bodies” have made various attempts to distinguish between the different approaches to improving healthcare. 19 26 27 28 However, most comparisons do not include QI or compare QI to only one or two of the other approaches. 7 29 30 31 To make it easier for people to use QI approaches effectively and appropriately, we summarise the similarities, differences, and crossover between QI and other approaches to tackling healthcare challenges ( fig 1 ).
Fig 1 How quality improvement interacts with other approaches to improving healthcare
QI and research
Research aims to generate new generalisable knowledge, while QI typically involves a combination of generating new knowledge or implementing existing knowledge within a specific setting. 32 Unlike research, including pragmatic research designed to test effectiveness of interventions in real life, QI does not aim to provide generalisable knowledge. In common with QI, research requires a consistent methodology. This method is typically used, however, to prove or disprove a fixed hypothesis rather than the adaptive hypotheses developed through the iterative testing of ideas typical of QI. Both research and QI are interested in the environment where work is conducted, though with different intentions: research aims to eliminate or at least reduce the impact of many variables to create generalisable knowledge, whereas QI seeks to understand what works best in a given context. The rigour of data collection and analysis required for research is much higher; in QI a criterion of “good enough” is often applied.
Relationship with QI
Though the goal of clinical research is to develop new knowledge that will lead to changes in practice, much has been written on the lag time between publication of research evidence and system-wide adoption, leading to delays in patients benefitting from new treatments or interventions. 33 QI offers a way to iteratively test the conditions required to adapt published research findings to the local context of individual healthcare providers, generating new knowledge in the process. Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible. 34
Scenario: QI for translational research
Newly published research shows that a particular physiotherapy intervention is more clinically effective when delivered in short, twice-daily bursts rather than longer, less frequent sessions. A team of hospital physiotherapists wish to implement the change but are unclear how they will manage the shift in workload and how they should introduce this potentially disruptive change to staff and to patients.
Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this article?
Adopting a QI approach, the team realise that, although the change they want to make is already determined, the way in which it is introduced and adapted to their wards is for them to decide. They take time to explain the benefits of the change to colleagues and their current patients, and ask patients how they would best like to receive their extra physiotherapy sessions.
The change is planned and tested for two weeks with one physiotherapist working with a small number of patients. Data are collected each day, including reasons why sessions were missed or refused. The team review the data each day and make iterative changes to the physiotherapist’s schedule, and to the times of day the sessions are offered to patients. Once an improvement is seen, this new way of working is scaled up to all of the patients on the ward.
The findings of the work are fed into a service evaluation of physiotherapy provision across the hospital, which uses the findings of the QI work to make recommendations about how physiotherapy provision should be structured in the future. People feel more positive about the change because they know colleagues who have already made it work in practice.
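The daily data review described in this scenario can be very simple: tally the sessions delivered and the reasons sessions were missed, then act on the commonest reason. A minimal sketch, with a wholly hypothetical one-day log (patient identifiers and reasons are invented for illustration):

```python
from collections import Counter

# Hypothetical one-day log for the pilot group:
# (patient_id, session_delivered, reason_if_missed)
day_log = [
    ("P1", True,  None),
    ("P2", False, "patient away at imaging"),
    ("P3", True,  None),
    ("P4", False, "clashed with ward round"),
    ("P5", False, "patient away at imaging"),
    ("P6", True,  None),
]

delivered = sum(1 for _, done, _ in day_log if done)
reasons = Counter(r for _, done, r in day_log if not done)

print(f"Delivered {delivered}/{len(day_log)} sessions ({delivered / len(day_log):.0%})")
for reason, count in reasons.most_common():
    print(f"  missed ({count}): {reason}")
```

A tally like this, reviewed each day, is what lets the team make the small iterative changes to scheduling described above.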
QI and clinical audit
Clinical audit is closely related to QI: it is often used with the intention of iteratively improving the standard of healthcare, albeit in relation to a pre-determined standard of best practice. 35 When used iteratively, interspersed with improvement action, the clinical audit cycle adheres to many of the principles of QI. However, in practice clinical audit is often used by healthcare organisations as an assurance function, making it less likely to be carried out with a focus on empowering staff and service users to make changes to practice. 36 Furthermore, academic reviews of audit programmes have shown audit to be an ineffective approach to improving quality due to a focus on data collection and analysis without a well developed approach to the action stage of the audit cycle. 37 Clinical audits, such as the National Clinical Audit Programme in the UK (NCAPOP), often focus on the management of specific clinical conditions. QI can focus on any part of service delivery and can take a more cross-cutting view which may identify issues and solutions that benefit multiple patient groups and pathways. 30
Audit is often the first step in a QI process and is used to identify improvement opportunities, particularly where compliance with known standards for high quality patient care needs to be improved. Audit can be used to establish a baseline and to analyse the impact of tests of change against the baseline. Also, once an improvement project is under way, audit may form part of rapid cycle evaluation, during the iterative testing phase, to understand the impact of the idea being tested. Regular clinical audit may be a useful assurance tool to help track whether improvements have been sustained over time.
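Audit's role as a baseline and re-measurement tool can be sketched in a few lines: compliance is the share of audited cases meeting every criterion in the agreed standard. The criteria and records below are hypothetical:

```python
def compliance(records, criteria):
    """Percentage of audited cases meeting every criterion in the standard."""
    met = sum(1 for r in records if all(r.get(c, False) for c in criteria))
    return 100.0 * met / len(records)

# Hypothetical pre-surgical documentation standard
criteria = ["consent_documented", "vte_assessment_done", "site_marking_confirmed"]

# Hypothetical baseline audit of four patient records
baseline_audit = [
    {"consent_documented": True,  "vte_assessment_done": False, "site_marking_confirmed": True},
    {"consent_documented": True,  "vte_assessment_done": True,  "site_marking_confirmed": True},
    {"consent_documented": False, "vte_assessment_done": True,  "site_marking_confirmed": True},
    {"consent_documented": True,  "vte_assessment_done": True,  "site_marking_confirmed": True},
]

print(f"Baseline compliance: {compliance(baseline_audit, criteria):.0f}%")  # 50%
```

Repeating the same measurement during and after tests of change turns a one-off audit into the rapid cycle evaluation described above.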
Scenario: Audit and QI
A foundation year 2 (FY2) doctor is asked to complete an audit of a pre-surgical pathway by looking retrospectively through patient documentation. She concludes that adherence to best practice is mixed and recommends: “Remind the team of the importance of being thorough in this respect and re-audit in 6 months.” The results are presented at an audit meeting, but a re-audit a year later by a new FY2 doctor shows similar results.
Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this paper?
Contrast the above with a team-led, rapid cycle audit in which everyone contributes to collecting and reviewing data from the previous week, discussed at a regular team meeting. Though surgical patients are often transient, their experience of care and ideas for improvement are captured during discharge conversations. The team identify and test several iterative changes to care processes. They document and test these changes between audits, leading to sustainable change. Some of the surgeons involved work across multiple hospitals, and spread some of the improvements, with the audit tool, as they go.
QI and service evaluation
In practice, service evaluation is not subject to the same rigorous definition or governance as research or clinical audit, meaning that there are inconsistencies in the methodology for carrying it out. While the primary intent for QI is to make change that will drive improvement, the primary intent for evaluation is to assess the performance of current patient care. 38 Service evaluation may be carried out proactively to assess a service against its stated aims or to review the quality of patient care, or may be commissioned in response to serious patient harm or red flags about service performance. The purpose of service evaluation is to help local decision makers determine whether a service is fit for purpose and, if necessary, identify areas for improvement.
Service evaluation may be used to initiate QI activity by identifying opportunities for change that would benefit from a QI approach. It may also evaluate the impact of changes made using QI, either during the work or after completion to assess sustainability of improvements made. Though likely planned as separate activities, service evaluation and QI may overlap and inform each other as they both develop. Service evaluation may also make a judgment about a service’s readiness for change and identify any barriers to, or prerequisites for, carrying out QI.
QI and clinical transformation
Clinical transformation involves radical, dramatic, and irreversible change—the sort of change that cannot be achieved through continuous improvement alone. As with service evaluation, there is no consensus on what clinical transformation entails, and it may be best thought of as an umbrella term for the large scale reform or redesign of clinical services and the non-clinical services that support them. 20 39 While transformation activity can use elements of a QI approach, such as effective engagement of the staff and patients involved, QI itself, resting as it does on iterative tests of change, cannot deliver the one-off, irreversible change that defines transformation.
There is opportunity to use QI to identify and test ideas before full scale clinical transformation is implemented. This has the benefit of engaging staff and patients in the clinical transformation process and increasing the degree of belief that clinical transformation will be effective or beneficial. Transformation activity, once completed, could be followed up with QI activity to drive continuous improvement of the new process or allow adaption of new ways of working. As interventions made using QI are scaled up and spread, the line between QI and transformation may seem to blur. The shift from QI to transformation occurs when the intention of the work shifts away from continuous testing and adaptation into the wholesale implementation of an agreed solution.
Scenario: QI and clinical transformation
An NHS trust’s human resources (HR) team is struggling to manage its junior doctor placements, rotas, and on-call duties, which is causing tension and has led to concern about medical cover and patient safety out of hours. A neighbouring trust has launched a smartphone app that supports clinicians and HR colleagues to manage these processes, with great success.
This problem feels ripe for a transformation approach—to launch the app across the trust, confident that it will solve the trust’s problems.
Before continuing reading think about your own organisation— What do you think will happen, and how would you use the QI principles described in this article for this situation?
Outcome without QI
Unfortunately, the HR team haven’t taken the time to understand the underlying problems with their current system, which revolve around poor communication and a lack of clarity from the HR team: staff don’t know who to contact, and their questions go unanswered. HR assume that because the app has been a success elsewhere, it will work here as well.
People get excited about the new app and the benefits it will bring, but no consideration is given to the processes and relationships that need to be in place to make it work. The app is launched with a high profile campaign and adoption is high, but the same issues continue. The HR team are confused as to why things didn’t work.
Outcome with QI
Although the app has worked elsewhere, rolling it out without adapting it to the local context is a risk, one that applying QI principles can mitigate.
HR pilot the app in a volunteer specialty after spending time speaking to clinicians to better understand their needs. They carry out several tests of change, ironing out issues with the process as they go, using logged issues and clinician feedback as sources of data. When they are confident the app works for them, they expand to a directorate, then a division, and finally take the transformational step of an organisation-wide rollout.
Education into practice
Next time when faced with what looks like a quality improvement (QI) opportunity, consider asking:
How do you know that QI is the best approach to this situation? What else might be appropriate?
Have you considered how to ensure you implement QI according to the principles described above?
Is there opportunity to use other approaches in tandem with QI for a more effective result?
How patients were involved in the creation of this article
This article was conceived and developed in response to conversations with clinicians and patients working together on co-produced quality improvement and research projects in a large UK hospital. The first iteration of the article was reviewed by an expert patient, and, in response to their feedback, we have sought to make clearer the link between understanding the issues raised and better patient care.
Contributors: This work was initially conceived by AB. AB and FO were responsible for the research and drafting of the article. AB is the guarantor of the article.
Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.
Provenance and peer review: This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ , including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ ’s quality improvement editor post are funded by the Health Foundation.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .
References
- Olsson-Brown A
- Dixon-Woods M ,
- Batalden PB ,
- Berwick D ,
- Øvretveit J
- Academy of Medical Royal Colleges
- Nelson WA ,
- McNicholas C ,
- Woodcock T ,
- Alderwick H ,
- NHS Scotland Quality Improvement Hub. Quality improvement glossary of terms. http://www.qihub.scot.nhs.uk/qi-basics/quality-improvement-glossary-of-terms.aspx
- McNicol S ,
- Solberg LI ,
- Massoud MR ,
- Albrecht Y ,
- Illingworth J ,
- Department of Health
- NHS England. Clinical audit. https://www.england.nhs.uk/clinaudit/
- Healthcare Quality Improvement Partnership
- McKinsey Hospital Institute
- World Health Organization. WHO Health Innovation Group. 2019. https://www.who.int/life-course/about/who-health-innovation-group/en/
- Sheffield Microsystem Coaching Academy
- Davidoff F ,
- Leviton L ,
- Taylor MJ ,
- Nicolay C ,
- Tarrant C ,
- Twycross A ,
- University Hospitals Bristol NHS Foundation Trust. Is your study research, audit or service evaluation? http://www.uhbristol.nhs.uk/research-innovation/for-researchers/is-it-research,-audit-or-service-evaluation/
- University of Sheffield. Differentiating audit, service evaluation and research. 2006. https://www.sheffield.ac.uk/polopoly_fs/1.158539!/file/AuditorResearch.pdf
- Royal College of Radiologists. Audit and quality improvement. https://www.rcr.ac.uk/clinical-radiology/audit-and-quality-improvement
- Gundogan B ,
- Finkelstein JA ,
- Brickman AL ,
- Health Foundation
- Johnston G ,
- Crombie IK ,
- Davies HT ,
- Hillman T ,
- NHS Health Research Authority. Defining research. 2013. https://www.clahrc-eoe.nihr.ac.uk/wp-content/uploads/2014/04/defining-research.pdf