Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, in their systematic review of probiotics for treating eczema, Boyle and colleagues answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews of Interventions is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It's a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
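To make "combining results" concrete, here is a minimal sketch of a fixed-effect (inverse-variance) pooled estimate in Python. The effect sizes and standard errors are invented for illustration; real meta-analyses usually rely on dedicated software and often use random-effects models instead.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and their
# standard errors from three studies; the numbers are invented for illustration.
effects = [0.30, 0.10, 0.45]
std_errors = [0.15, 0.20, 0.25]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / SE^2,
# so more precise studies contribute more to the summary estimate.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```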

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that's previously been studied by multiple researchers. If there's no previous research, there's nothing to review.
  • If you're doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you're a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They're thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They're time-consuming.
  • They're narrow in scope: they only answer the precise research question.

The seven steps for conducting a systematic review are explained below, along with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the example review of probiotics for eczema, the research question contained all five PICOT components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Boyle and colleagues' research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (see the example query after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
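For instance, a hypothetical Boolean query for the example review question might combine synonyms of each concept with OR and link the concepts with AND. The terms below are illustrative only; they are not the search that Boyle and colleagues actually ran:

  • (probiotic* OR lactobacillus OR bifidobacterium) AND (eczema OR "atopic dermatitis")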

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the titles and abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.
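As a rough sketch of what such a diagram records (the counts are omitted here), a PRISMA flow diagram typically traces four stages:

  • Identification: records found through database searching, plus records from other sources such as handsearching and gray literature
  • Screening: records remaining after duplicates are removed, records screened on titles and abstracts, and records excluded at this stage
  • Eligibility: full-text articles assessed against the selection criteria, and full-text articles excluded, with reasons
  • Included: studies included in the qualitative synthesis and, where applicable, in the meta-analysis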

In the example review, Boyle and colleagues first screened titles and abstracts against the selection criteria. Next, they found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
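As a sketch of the kind of fields an extraction form might capture, here is a hypothetical record in Python. The field names are illustrative only; in practice you would use a standardized form such as those mentioned above.

```python
# Hypothetical data extraction record; the field names are illustrative, not a standard.
extraction_record = {
    "study_id": "ExampleTrial2010",
    "year": 2010,
    "study_design": "randomized controlled trial",
    "sample_size": 120,
    "context": "outpatient clinic",
    "findings": "brief summary of the reported results",
    "conclusions": "authors' stated conclusions",
    "risk_of_bias": {
        "randomization": "low",    # how participants were allocated to groups
        "blinding": "unclear",     # whether participants and assessors were blinded
        "missing_data": "high",    # attrition and how dropouts were handled
    },
}
```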

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

In the example review, Boyle and colleagues divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article


Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/systematic-review/


How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Affiliations.

  • 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected].
  • 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
  • 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected].
  • PMID: 30089228
  • DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.


Annual Review of Psychology

Volume 70, 2019. Review article: How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

  • Andy P. Siddaway 1 , Alex M. Wood 2 , and Larry V. Hedges 3
  • Affiliations: 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom; 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA
  • Vol. 70:747-770 (Volume publication date January 2019) https://doi.org/10.1146/annurev-psych-010418-102803
  • First published as a Review in Advance on August 08, 2018
  • Copyright © 2019 by Annual Reviews. All rights reserved




Systematic Reviews

Constructing a Search Strategy and Searching for Evidence

Aromataris, Edoardo PhD; Riitano, Dagmara BHSC, BA

Edoardo Aromataris is the director of synthesis science at the Joanna Briggs Institute in the School of Translational Health Science, University of Adelaide, South Australia, where Dagmara Riitano is a research officer. Contact author: Edoardo Aromataris, [email protected] . The authors have disclosed no potential conflicts of interest, financial or otherwise.

The Joanna Briggs Institute aims to inform health care decision making globally through the use of research evidence. It has developed innovative methods for appraising and synthesizing evidence; facilitating the transfer of evidence to health systems, health care professionals, and consumers; and creating tools to evaluate the impact of research on outcomes. For more on the institute's approach to weighing the evidence for practice, go to http://joannabriggs.org/jbi-approach.html .

Overview 

This article is the third in a new series on the systematic review from the Joanna Briggs Institute, an international collaborative supporting evidence-based practice in nursing, medicine, and allied health fields. The purpose of the series is to show nurses how to conduct a systematic review—one step at a time. This article details the major considerations surrounding search strategies and presents an example of a search using the PubMed platform (pubmed.gov).

The third article in a series from the Joanna Briggs Institute details how to develop a comprehensive search strategy for a systematic review.

The systematic literature review, widely regarded as the gold standard for determining evidence-based practice, is increasingly used to guide policy decisions and the direction of future research. The findings of systematic reviews have greater validity than those of other types of reviews because the systematic methods used seek to minimize bias and increase rigor in identifying and synthesizing the best available evidence on a particular question. It's therefore important that when you search for evidence, you attempt to find all eligible studies and consider them for inclusion in your review. 1

One rule of thumb we use when beginning a search for evidence to support a systematic review: if you don't find the evidence, it can't be reviewed! Unfortunately, there is no prescriptive approach to conducting a comprehensive search. But searching is an art that can be cultivated and practiced. It involves several standard processes, such as developing search strings, searching across bibliographic citation databases that index health care research, looking for “gray,” or unpublished, literature, and hand searching.

GETTING STARTED

Developing a search strategy is an iterative process—that is, it involves continual assessment and refinement. As keywords or key terms are used in a search, their usefulness will be determined by the search results. Consequently, searching for evidence is sometimes considered more of an art than a science. It's therefore unlikely that two people, whether they are clinicians or librarians, will develop an identical search strategy or yield identical results from a search on the same review question.

The time required to conduct a search for a systematic review will also vary. It's dependent on the review question, the breadth of the evidence base, and the scope of the proposed search as stated in the review protocol. Narrow searches will often be adequate when investigating a topic requiring a few specific keywords, such as when you're searching only for randomized controlled trials (RCTs) conducted in a single population with a rare disorder. A narrow search will be less resource intensive than a search conducted when the review question is broader or the search relies on general keywords (such as education , prevention , or experience ). And while it may seem important conceptually to use a general keyword (such as safety in a search for articles on medical errors, for example), in practice it will add few relevant studies beyond those identified using more specific terms (such as error or harm ).

When beginning the search for evidence, you should conduct a few small searches as a test of various search terms and combinations of terms. An ideal search strategy is both sensitive and specific: a sensitive search will recall relevant studies, while a specific search will exclude irrelevant studies. A search that is overly sensitive may capture all the necessary studies but may require a labor-intensive vetting of unnecessary studies at the stage of study selection. A search that is overly specific will yield fewer results but is always subject to the risk that important studies may have been omitted.

Finding help. Given the complexity of the many indexing languages and rules governing the various databases, we recommend that early in the process you make use of an experienced research librarian who can examine your search strategy and help you choose citation databases relevant to your review question. If you can't easily access the services of a research librarian, there are many online tutorials that can help. A Google search—for example, “How do I search using PubMed?”—will reveal sites containing helpful hints and training developed by the U.S. National Library of Medicine (NLM) and librarians from across the globe.

DEVELOPING THE SEARCH STRATEGY

A review protocol with a clearly defined review question and inclusion criteria will provide the foundation for your search strategy. Before embarking on the search, you will need to understand the review question and what information you'll need to address it. For example, it's important to consider the type of data being sought (quantitative, qualitative, economic), the types of studies that report the data (RCTs, cohort studies, ethnographic studies), and the limits or restrictions you'll apply (publication date or language). This will shorten the time required to search and help to ensure that the information retrieved is both relevant and valid.

Once you've determined the review question, you'll need to identify the key terms articulated in the question and the protocol and create a logic grid or concept map. In a logic grid for a review on the effectiveness of an intervention, for example, each column represents a discrete concept that is generally aligned with each element of the PICO mnemonic: Population, Intervention, Comparison intervention, and Outcome measures.


Consider an example using the following review question: “Is animal-assisted therapy more effective than music therapy in managing aggressive behavior in elderly people with dementia?” Within this question are the four PICO concepts: elderly patients with dementia (population), animal-assisted therapy (intervention), music therapy (comparison intervention), and aggressive behavior (outcome measures) (see Table 1 for an example of a logic grid).


Keywords or free-text words. The first formal step in all searches is to determine any alternative terms or synonyms for the identified concepts in the logic grid. Normally, you'll identify these terms—often referred to as keywords or free-text words—within the literature itself. Perhaps you'll start with a simple search using the terms dementia and animal-assisted therapy or music therapy and aggressive behavior . By looking at the titles and abstracts of the retrieved articles, you can find key terms used in the literature, as well as key concepts that are important to your question. For instance, is the term animal-assisted therapy used synonymously with the term pet therapy ? Furthermore, retrieving and reading a few relevant studies of any design—such as an experimental study or a traditional literature review on the topic—will further aid in identifying any commonly used terms.

When developing your search strategy, note that most search platforms (such as Ovid or EBSCOhost) used to access databases (such as MEDLINE) search for the exact terms entered in the database, including any misspellings. This means that to conduct a comprehensive search, you should enter as many relevant key terms as possible. Important articles may be overlooked if all relevant synonyms for a concept aren't included, as some authors may refer to the same concept using a different term (such as heart attack instead of myocardial infarction ). Such differences notwithstanding, you may find that including a relevant but broad term may retrieve many irrelevant studies.

Expanding on the logic grid shown in Table 1, Table 2 now contains the keywords chosen from scanning the titles and abstracts of retrieved articles in your initial search. Column one contains terms relating to dementia, the defining feature of the population of interest; columns two and three contain terms relating to animal-assisted therapy and music therapy, the intervention and comparator of interest; and column four contains terms relating to aggressive behavior, the outcome of interest. Placing the terms into a logic grid illustrates how the related concepts or synonyms will combine to construct the final search string.
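The published tables are not reproduced here, but as a rough sketch, reconstructed from the keywords that appear in the final search string quoted later in this article, the logic grid might contain entries such as:

  • Population (dementia): dementia, Alzheimer, Huntington, Kluver, Lewy
  • Intervention (animal-assisted therapy): animal assisted therapy, animal assisted activity, animal therapy, pet therapy, dog therapy, canine assisted therapy, aquarium
  • Comparison (music therapy): music, music therapy, singing, auditory stimulation
  • Outcome (aggressive behavior): aggression, neuropsychiatric, behavior, behaviour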

Index terms or subject headings. Comprehensive search strategies should consist of both keywords or free-text words and index terms, which are used by some major bibliographic databases to describe the content of each published article using a “controlled vocabulary”—that is, a list of standard terms that categorize articles based on their content (such terms will vary from database to database). For example, PubMed uses medical subject heading (MeSH) terms, the controlled vocabulary of MEDLINE. 2 MeSH terms are categorized within 16 main “trees” (such as anatomy, organisms, diseases, drugs, and chemicals), each of which branches from the broadest to the most specific terms.

To determine whether index terms exist for the concepts you've identified in your review question, you can search for each term in the MeSH database (selected from the drop-down list on the PubMed home page). For example, by entering dementia , PubMed will identify relevant MeSH terms that include Dementia and Alzheimer Disease . By selecting Dementia , you'll see the term's tree, including the subcategories listed below it, such as Lewy Body Disease .

As was the case when identifying key terms to use in the search strategy, it is also recommended that an initial, simple search using a few key concepts ( dementia AND animal-assisted therapy or dementia AND music therapy AND aggressive behavior ) be performed in PubMed to identify index terms. The aim is to retrieve a few relevant articles to see how they were indexed using the controlled vocabulary. Once the results are displayed, you can scroll through the citations and click on the title of any eligible article to view its details. From here, follow the link to the article's MeSH terms and examine which ones were used to describe the article's content. Repeat this process with a number of different articles to determine whether similar indexing terms have been used.


The terms in the logic grid can now be updated with the MeSH terms you have chosen from those listed with each retrieved article (see Table 3 ). The [mh] that appears next to these terms in the grid is the search-field descriptor that stands for “MeSH headings.” It's worth noting that “Entry Terms” under each search term's MeSH listing (if one is available) can also be examined for suggestions of alternative terms that can be searched in titles and abstracts.

Because new articles in PubMed are not indexed immediately, and because indexing is a manual, subjective process susceptible to human variation, it's important to also search for the key terms in the titles and abstracts of articles—in other words, for free-text or keywords—to capture any articles that could be missed by using index terms (such as MeSH headings) alone. For example, if we did not search for free-text words and did not include the index term Bonding, Human-Pet (a MeSH term), we might miss an important article that wasn't indexed under the MeSH term Animal-Assisted Therapy.


By adding the search-field descriptor [tiab] (meaning “title/abstract”) to a search term, you can direct PubMed to search the title and abstract field code for these terms. A number of other search-field descriptors can be used as well, such as [au] for “author” and [pt] for “publication type.” 2 Using a search-field descriptor such as [tw] (“text word”) is often preferred over [tiab] for systematic reviews because the former searches in the title and abstract of articles as well as across a greater number of fields and will return a greater number of results for the same search query. Shortcuts or “wildcard” characters can also be used to account for different terminology or spelling. For example, PubMed allows truncation searching, in which an asterisk can substitute for any word's beginning or ending (for instance, a search for therap* will retrieve articles with the words therapy and therapeutic ). Search-field descriptors and wildcard characters should be applied to any newly identified keywords and index terms in the logic grid (see Table 4 ).

Once all search terms, including both free-text words and indexing terms, have been collected and finalized, a second search can then be undertaken across all selected citation databases. Initially, the key terms and synonyms within each column in the logic grid are combined using “OR.” (Most databases use some form of Boolean logic—search terms connected by the Boolean operators “OR” and “AND,” among others.) This will direct the database to find articles containing any of the search terms within the indicated fields. To do this in PubMed, select the “Advanced” search box and clear the search history. Copy and paste the first set of terms into PubMed and run the search.

For example, an initial search for articles related to different types of dementia might look like this:

“Dementia [tw] OR Alzheimer [tw] OR Huntington* [tw] OR Kluver [tw] OR Lewy [tw] OR Dementia [mh] OR Alzheimer disease [mh]"

This search could yield more than 100,000 citations. Following this, clear the search box and repeat the process with search terms from the second column in Table 4 . It is easier to search each column of the logic grid individually—particularly if each column contains an extensive list of search terms—rather than combining all the search sets in one go. Furthermore, by running each search successively you can determine if a component of the search string is producing many irrelevant results and easily adjust the search strategy. In our example, if you add the term aggress* [tw] to capture aggressive and aggression in the title or abstract, you will get an overwhelming number of irrelevant results because these terms are also used to describe the spread of certain cancers.

Once you complete the searches aligned to each concept, click on the “Advanced” option again. This allows for display of the “search history” and for a ready combination of the individual searches using the Boolean operators “AND” and “OR.” Using this method, parentheses are automatically placed around each set of terms to maintain the logical structure of the search. For example, the search for articles on animal-assisted therapy versus music therapy to treat aggression in patients with dementia might look like this:

“(Dementia [tw] OR Alzheimer [tw] OR Huntington* [tw] OR Kluver [tw] OR Lewy [tw] OR Dementia [mh] OR Alzheimer disease [mh]) AND (Animal assisted therapy [tw] OR Animal assisted activit* [tiab] OR Animal assisted intervention* [tiab] OR Animal therapy [tw] OR Pet therapy [tw] OR Dog therapy [tw] OR Dog assisted therapy [tw] OR Canine assisted therapy [tw] OR Aquarium [tiab] OR Animal Assisted Therapy [mh] OR Pets [mh] OR Dogs [mh] OR Cats [mh] OR Birds [mh] OR Bonding, Human-Pet [mh] OR Animals, Domestic [mh]) OR (Music* [tw] OR Music therapy [tw] OR Singing [tw] OR Sing [tw] OR Auditory stimulat* [tw] OR Music [mh] OR Music Therapy [mh] OR Acoustic Stimulation [mh] OR Singing [mh]) AND (Aggression [tw] OR Neuropsychiatric [tiab] OR Apathy inventory [tiab] OR Cornell scale [tiab] OR Cohen Mansfield [tiab] OR BEHAVE-AD [tiab] OR CERAD-BRSD [tiab] OR Behavior* [tiab] OR Behaviour* [tiab] OR Aggression [mh] OR Personality inventory [mh] OR Psychomotor agitation [mh])"
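The article works through the PubMed web interface, but the same Boolean syntax and field tags can also be submitted to PubMed programmatically through NCBI's E-utilities. The sketch below is a minimal illustration of that alternative, assuming the third-party Biopython package and a placeholder contact email; it uses only a shortened subset of the terms above.

```python
# Minimal sketch: submitting a PubMed query through NCBI's E-utilities with Biopython.
# Biopython and the email address are assumptions for illustration; the article
# itself builds and runs the search in the PubMed web interface.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks callers to identify themselves

# A shortened subset of the full search string quoted above.
query = (
    "(Dementia[tw] OR Alzheimer[tw] OR Lewy[tw] OR Dementia[mh]) "
    "AND (Animal assisted therapy[tw] OR Pet therapy[tw] OR Animal Assisted Therapy[mh])"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print("Total matching records:", record["Count"])
print("First PubMed IDs:", record["IdList"])
```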

Once the final search has been conducted, you can further refine search results by publication date, study groups, language, or any other limits appropriate to the review topic by selecting the relevant filter (left-hand side of the screen in PubMed) from the range available. PubMed also provides predefined search filters that restrict search results to specific clinical study categories or subject matters (such as clinical queries). You will have determined the date range for the search at the protocol development stage. Given that your aim is to summarize the evidence surrounding a particular question, you should justify any limits to the publication date of included studies in the background section of the protocol. The chosen time frame will vary depending on the review question. For example, reviewers may impose a start date for a search that coincides with the introduction of a new intervention and the advent of the preceding clinical research on it.

The structure of the search strategy will remain the same regardless of the search platform used to search a database. But since most major databases use a unique controlled vocabulary to index their articles, the indexing terms will need to be adapted to each database; in most cases the key terms remain the same across different databases. These differences in indexing terms are the main reason it is not recommended to search bibliographic citation databases for a systematic review using a federated search engine or platform—that is, one that searches multiple databases and sources at once.

You should also be aware that the platforms used to search citation databases often use different wildcard characters or commands. For this reason, beginning searchers should use the online tutorials and help pages of the various platforms and databases. For example, while Ovid's search platform can also be used to search the MEDLINE database, the terms used for truncation searching are quite different: an asterisk (*) is used for unlimited truncation within PubMed and a dollar symbol ($) in Ovid. Moreover, in Ovid the question mark (?) wildcard can be used within or at the end of a word to substitute for one character or no characters ( behavio?r will retrieve articles with the words behaviour and behavior ); the number sign (#) wildcard can substitute for a single character ( wom#n will retrieve articles with both woman and women ). The use of wildcards for substitution of characters is not supported in PubMed.

Because searching is an iterative process, you won't want to predetermine when it will end. Consequently, it is important to look at the results of the search continually as you develop the search strategy to determine whether the results are relevant. One way to do this is to check if already identified relevant articles are being captured by the search. If not, the search strategy will need to be modified accordingly.

Once the search is complete, the results can be exported to bibliographic management software such as EndNote or Reference Manager. These tools are useful for organizing the search results, removing duplicate citations, and selecting studies (the next step of the systematic review process, to be discussed in the next article in this series).

WHERE TO SEARCH?

Developing the search strategy and search filters for use within each database is an important and time-consuming part of the search process, often more so than the search itself! Another important consideration is where to search. A search for a systematic review should be comprehensive and attempt to identify all of the available evidence. This can be an enormous undertaking.

Generally, a systematic review to inform health care practice and policy should search the major medical databases including MEDLINE from the NLM in North America and searchable through PubMed, and Embase, a product of Elsevier that indexes many European biomedical journals; the controlled vocabulary for Embase is searchable through Emtree, which also contains all MeSH terms ( www.elsevier.com/online-tools/embase/emtree ). Nurses undertaking systematic reviews will find that much literature relevant to nursing practice is also available in the Cumulative Index to Nursing and Allied Health Literature (CINAHL) database by EBSCO. Beyond these, there are many others: Web of Science, PsycINFO, Scopus, JSTOR, Academic Search Premier, Academic Onefile, the Cochrane Nursing Care Field trials register, and the list goes on.

You should establish which databases index articles relevant to the topic at hand. Some databases have a specific topic focus, such as PsycINFO, which should be searched for a question related to mental health. The JBI Database of Systematic Reviews and Implementation Reports is, as the name suggests, a repository for systematic reviews and would be unnecessary for most review searches (systematic reviews rarely include other systematic reviews among their inclusion criteria). Similarly, a quick Google search (“What information is in… ?”) to establish the content and coverage of other databases is worthwhile and will help in identifying unnecessary overlap in the search strategy.

Hand searching. You may also wish to consider more traditional means of locating evidence. Screening the reference lists of studies already selected for inclusion in the review is often a valuable means of identifying other pertinent studies. Similarly, systematic review authors often hand search specific journals to locate studies. Journals selected for hand searching should be those identified as most relevant through database or preliminary searching, since these are the most likely to contain further relevant studies. Because hand searching is an onerous task, it's recommended that no more than two or three relevant journals be hand searched for a review.

Contacting experts is another method of locating evidence. While contacting authors to clarify details of studies and to request data is a relatively common pursuit for the systematic reviewer during the appraisal and extraction processes, doing so to identify relevant studies can also be useful. Such experts can often provide papers that even a comprehensive search may have failed to identify.

SHADES OF GRAY

Systematic reviews that purport to have conducted a comprehensive search should have made some attempt to search for gray literature. The International Conference on Grey Literature in Luxembourg defined it in 1997 (and expanded on it in 2004) as literature “produced at all levels of government, academic, business and industry in electronic and print formats not controlled by commercial publishing.” 3 However, this definition is often broadened to include any study or paper that has not been formally published or peer reviewed. Gray literature often appears in the form of government or institution reports and newsletters and even in blogs, conference proceedings, census reports, or nonindependent research papers. As a result, these reports or manuscripts are often not as widely available and are generally more difficult to locate.

Nonetheless, the inclusion of gray literature in systematic reviews has emerged as an important adjunct to commercially published research, as it often reflects a source of timely or alternative information that can help to minimize publication bias and provide a more accurate and thorough account of the evidence. 4, 5

There are three common ways to search for gray literature. The first involves searching or browsing the Web sites of organizations relevant to the review question (such as the World Health Organization or the National Institute for Health and Care Excellence). The second involves searching databases that collate and index gray literature. Although gray literature is rarely indexed, two commonly used sources are OpenGrey ( www.opengrey.eu ), an open access database of gray literature from Europe, and the Grey Literature Report ( www.greylit.org ), a bimonthly report from the New York Academy of Medicine. Reviewers will find that such databases do not have extensive or advanced search capabilities, so searching them is often limited to the use of a few critical keywords. Furthermore, they lack indexing or subject headings, and without this feature a search can be quite time consuming. The third approach is to use online search engines. Search engines such as Google do not use a controlled vocabulary, so performing a simple search with a few select keywords is best. Such sites will yield a large number of results. To make the results more manageable, you can try limiting the search to terms that appear in the title of the article only 6 or using keywords that limit the results to specific documents (such as guidelines). Searches can also be limited by language or source (for example, adding site:gov to a Google search will limit results to government Web sites). Another tool that can help is the federated search engine MedNar ( http://mednar.com/mednar ), which searches across a range of government and organizational sites as well as commercial databases.
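
As a concrete illustration of these limits, the sketch below composes a few restricted queries of the kind described above. It is an assumption-laden example: Google's intitle: operator is used here to approximate the title-only limit, the keyword "guideline" stands in for a document-type limit, and the topic terms are invented.

```python
# Illustrative gray-literature queries built from a few keywords, using the
# kinds of limits described above (title-only terms, a document-type keyword,
# and a source limit such as site:gov). Topic terms are placeholders.
keywords = ["pressure ulcer", "prevention"]

queries = [
    " ".join(f'intitle:"{k}"' for k in keywords),         # terms must appear in the title
    " ".join(f'"{k}"' for k in keywords) + " guideline",   # limit to a specific document type
    " ".join(f'"{k}"' for k in keywords) + " site:gov",    # limit to government Web sites
]

for q in queries:
    print(q)
```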

Other sources of gray literature can be found in numerous guides developed to assist researchers. For example, the Canadian Agency for Drugs and Technologies in Health's Grey Matters provides an extensive list of gray literature sources that can be searched. 7 Developed with the systematic reviewer in mind, the tool kit provides a checklist that aids users in documenting the search process and in ensuring it has been conducted in a standardized way.

REPORTING THE SEARCH STRATEGY

The final consideration is reporting the details of the search strategy, including the filters (such as language and date limits) and the databases and other sources used. A hallmark of a systematic review is its reproducibility: another researcher should be able to repeat the search for the same question and arrive at similar conclusions. Without transparent reporting of the search strategy, readers cannot assess the quality of the search and its sources or, in turn, make a judgment on the likely credibility of the review. 8, 9

Most journals that publish systematic reviews now espouse the PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; online at www.prisma-statement.org ), which dictate that the full search strategy for at least one major database should be reported in an appendix and published along with the review. 10 Online repositories of systematic reviews, such as the JBI Database of Systematic Reviews and Implementation Reports and the Cochrane Database of Systematic Reviews , allow for publication of all the search filters and strategies across the databases and sites used. A systematic reviewer will appreciate that reporting only the search filters used is inadequate. The methods section of a review should list all of the bibliographic citation databases searched, ideally with the platform used to search them, as well as the dates they were searched and any limits used. The results of the search should be adequately reported, as well; this is often quite simple to convey in a flow diagram, which is also detailed in the PRISMA guidelines. 10
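
Because the flow diagram is built from simple counts, it helps to tally them explicitly as the search and selection proceed. The sketch below uses invented numbers purely to illustrate the bookkeeping; the stages mirror those typically reported in a PRISMA-style flow diagram.

```python
# Invented counts illustrating the bookkeeping behind a PRISMA-style flow diagram.
counts = {
    "records identified through database searching": 1240,
    "additional records identified through other sources": 35,
    "duplicates removed": 310,
    "records screened (titles and abstracts)": 965,
    "records excluded at screening": 880,
    "full-text articles assessed for eligibility": 85,
    "full-text articles excluded": 60,
    "studies included in the review": 25,
}

# Consistency checks between the stages of the flow diagram.
identified = (counts["records identified through database searching"]
              + counts["additional records identified through other sources"])
assert identified - counts["duplicates removed"] == counts["records screened (titles and abstracts)"]
assert (counts["records screened (titles and abstracts)"] - counts["records excluded at screening"]
        == counts["full-text articles assessed for eligibility"])
assert (counts["full-text articles assessed for eligibility"] - counts["full-text articles excluded"]
        == counts["studies included in the review"])

for stage, n in counts.items():
    print(f"{stage}: {n}")
```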

Once the search is complete and the results from each source have been exported, the next step, study selection, can begin. This is where titles, abstracts, and sometimes the full text of studies found are screened against the inclusion and exclusion criteria. This step of the process will be the focus of the next article in this series.

Keywords: evidence; gray literature; literature search; review question; systematic review


Open access | Published: 14 August 2018

Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

Chris Cooper (orcid.org/0000-0003-0864-5607), Andrew Booth, Jo Varley-Campbell, Nicky Britten & Ruth Garside

BMC Medical Research Methodology, volume 18, Article number: 85 (2018)


Abstract

Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

Methods

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. This data was reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching.

Results

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search, aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.


Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research in the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Methods

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group, the HTA-I Information Specialists’ Group and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table  1 sets out: the guidance document, the version audited, their core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [ 1 ]) located within each key stage of the search process. These studies are identified in Additional file  1 : Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1 ). The search terms used were: (literature search*[Title/Abstract]) AND sysrev_methods[sb], and 586 results were returned. These results were sifted for relevance to the key stages in Fig.  1 by CC.

Figure 1. The key stages of literature search guidance as identified from nine key texts

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported, and action is taken, that collectively would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was then read and re-read, first on a document-by-document basis and, secondly, across all the documents above, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings

We were able to identify consensus across the guidance on literature searching for systematic reviews, suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were also able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [ 2 ]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [ 3 , 4 ]. This topic has emerged as increasingly important as indicated by the development of the PRESS instrument [ 5 ]. Text mining was discussed in four guidance documents [ 4 , 6 , 7 , 8 ] where the automation of some manual review work may offer efficiencies in literature searching [ 8 ].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised in Fig. 1 . The data extraction table informing Fig. 1 is reported in Table  2 . Table 2 reports the areas of common agreement and demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance

Eight documents provided guidance on who should undertake literature searching in systematic reviews [ 2 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ]. The guidance affirms that people with relevant expertise in literature searching should ‘ideally’ be included within the review team [ 6 ]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [ 2 , 7 , 8 , 9 , 10 , 11 ].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [ 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 ] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [ 13 ] in a variety of roles [ 12 , 16 , 20 , 21 , 24 , 25 , 26 ]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [ 17 , 18 , 27 , 28 ], and they argue that the involvement of information specialists and librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. [ 29 ] demonstrated that involving a librarian as a co-author correlated with a higher score in the literature searching component of a systematic review. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role emerges for these specialists here too [ 30 ].

Knowing where to search for studies was noted as important in the guidance, with no agreement as to the appropriate number of databases to be searched [ 2 , 6 ]. Database (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [ 12 , 15 , 16 , 31 ].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [ 31 ] have questioned if the necessary involvement is actually happening [ 31 ].

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [ 6 , 7 , 8 , 9 , 10 ]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [ 4 ] or ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [ 2 ] or ‘A systematic literature search aims to identify all publications relevant to the particular research question’ [ 3 ]. The Joanna Briggs Institute reviewers’ manual was the only guidance document where a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [ 6 , 8 , 10 ]. Accordingly, eight of nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears as uniform within the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy and prognosis, require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [ 32 ]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards however; the alternative, offered by purposive sampling, has been suggested in the specific context of qualitative evidence syntheses [ 33 ].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes a thorough and comprehensive literature search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [ 34 ], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

  • a search not restricted to English language;
  • where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and
  • at least one of the following search methods has been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, (iii) trials registers; and (iv) contacts with experts in the field [ 34 ].

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method in a review when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [ 35 ].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [ 34 ]. This view tallies with the Cochrane Handbook, which stipulates clearly, that study identification should be undertaken ‘within resource limits’ [ 9 ]. This would suggest that the limitations to comprehension are recognised but it raises questions on how this is decided and reported [ 36 ].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimize bias [ 6 , 8 , 10 , 34 , 37 , 38 , 39 ], since a systematic review based only on published (or easily accessible) studies may have an exaggerated effect size [ 35 ]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [ 40 ] and Tricco et al. summarize the evidence concerning bias and confounding in systematic reviews [ 35 ]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [ 34 , 35 , 40 , 41 , 42 , 43 , 44 , 45 , 46 ]. Comprehensive searches are not the sole factor mitigating these biases, but their contribution is thought to be significant [ 2 , 32 , 34 ]. Fehrmann (2011) suggests that describing the search process in detail, and applying standard comprehensive search techniques, increases confidence in the search results [ 32 ].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness when relevant studies were excluded from meta-analysis [ 34 , 47 ]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates, which is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies and that comprehensive searches could, in fact, introduce bias into a review rather than prevent it, through the identification of low quality studies that are then included in the meta-analysis [ 34 ]. Studies query whether identifying and including low quality or grey literature studies changes the estimate of effect [ 43 , 48 ], and question whether time is better invested in updating systematic reviews rather than searching for unpublished studies [ 49 ], or in mapping studies for review as opposed to aiming for high sensitivity in literature searching [ 50 ].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and in reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [ 33 , 51 , 52 , 53 , 54 , 55 ]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [ 39 ], and demonstrating the ‘value’ of studies identified or missed is harder [ 56 ], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [ 57 ], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty of a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [ 58 ], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [ 36 ]. Different approaches to literature searching [ 37 , 38 , 59 , 60 , 61 , 62 ] and developing the concept of when to stop searching are important areas for further study [ 36 , 59 ].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [ 63 ]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [ 63 ].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [ 2 , 3 , 6 , 7 , 9 , 10 ]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [ 9 ].

Two key tasks were perceptible in preparing for a literature search [ 2 , 6 , 7 , 10 , 11 ]: first, to determine if there are any existing or ongoing reviews, or if a new review is justified [ 6 , 11 ]; and, secondly, to develop an initial literature search strategy to estimate the volume of relevant literature (and the quality of a small sample of relevant studies [ 10 ]) and indicate the resources required for literature searching and the review of the studies that follows [ 7 , 10 ].

Three documents summarised guidance on where to search to determine if a new review was justified [ 2 , 6 , 11 ]. These focused on searching databases of systematic reviews (the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [ 6 , 11 ]. It is worth noting, however, that as of 2015, DARE (and NHS EED) are no longer being updated, so the relevance of these resources will diminish over time [ 64 ]. One guidance document, ‘Systematic reviews in the Social Sciences’, noted, however, that databases are not the only source of information and that unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [ 2 ].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [ 7 , 10 ].

The guidance offers the best available source on preparing the literature search, since the published studies do not typically report how scoping informed the development of their search strategies nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review, although this work is still exploratory [ 65 ]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review, as sketched below [ 66 , 67 ]. Preparing for literature searches and scoping constitutes an area for future research.
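
To give a flavour of the word frequency idea referenced above, the sketch below counts frequent words in the titles or abstracts of a handful of known relevant records during scoping, as a crude way of suggesting candidate keywords. The sample texts and the stop-word list are invented for illustration.

```python
"""Word-frequency analysis of known relevant records to suggest candidate search terms."""
from collections import Counter
import re

sample_abstracts = [
    "Supplementary search methods improve the identification of qualitative studies.",
    "Hand searching and citation chasing identified studies missed by database searching.",
    "Database searching alone missed several relevant qualitative studies.",
]

stop_words = {"the", "of", "and", "by", "alone", "several", "improve", "missed"}

words = Counter(
    word
    for abstract in sample_abstracts
    for word in re.findall(r"[a-z]+", abstract.lower())
    if word not in stop_words and len(word) > 3
)

print("Candidate search terms:", words.most_common(8))
```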

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported framework promoted for designing a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO will be populated to develop the search strategy [ 1 , 4 , 7 , 8 , 9 ]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [ 4 ].

With the exception of the Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, database indexing terms, and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [ 10 ].

The use of limits, such as language or date limits, was discussed in all documents [ 2 , 3 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ].

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness, so PICO, with its focus on intervention and comparator, is the dominant model used to structure literature search strategies [ 68 ]. PICOs, where the S denotes study design, is also commonly used in effectiveness reviews [ 6 , 68 ]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview on formulating questions for evidence based practice [ 69 ] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [ 55 ]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [ 69 ]; and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [ 70 ]. SPIDER has been compared to PICO and PICOs in a study by Methley et al. [ 68 ]. A sketch of how a PICO-structured Boolean search line can be assembled from populated concepts follows below.
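
The sketch below shows the basic mechanics under simple assumptions: terms within each populated PICO concept are OR-ed together, the concept blocks are AND-ed, and only the concepts populated from the review question contribute. The concepts and terms are illustrative, and a real strategy would also use controlled vocabulary and truncation.

```python
"""Assemble a Boolean search line from a PICO-structured question (illustrative only)."""

pico = {
    "population": ["older adults", "elderly"],
    "intervention": ["exercise", "physical activity"],
    "comparator": [],  # the comparator concept is often left unpopulated in search strategies
    "outcome": ["falls", "fall prevention"],
}

def build_query(concepts: dict) -> str:
    blocks = []
    for terms in concepts.values():
        if terms:  # only populated concepts contribute a block
            blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

print(build_query(pico))
```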

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [ 4 ]. Glanville developed this idea in a study by Whiting et al. [ 71 ], and a worked example of this approach is included in the development of a search filter by Cooper et al. [ 72 ].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [ 73 ] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then develop key terms to include synonyms and controlled syntax. Hausner and colleagues set out the objective approach [ 73 ] and describe what may be done to validate it [ 74 ].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to focus literature searching on specific study designs or other markers (such as date), which reduces the number of studies returned by a search. The use of limits should be described and their implications explored [ 34 ], since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches, and this advice would usefully capture decisions on the use of search limits [ 75 ].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ to literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [ 4 , 9 , 11 ]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [ 2 , 3 , 4 , 6 , 7 , 8 , 10 ], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [ 2 , 6 ].

The guidance documents summarise ‘how to’ search bibliographic databases in detail, and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (the Cochrane Handbook, for instance, states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing the bibliographic databases available to search. Studies have explored the value of searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [ 76 ], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [ 77 ], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases in identifying UK-relevant studies [ 78 ]. The host of the database (e.g. Ovid or ProQuest) has also been shown to alter the search returns offered: Younger and Boddy [ 79 ] report differing search returns from the same database (AMED) where the ‘host’ was different.

The average number of bibliographic databases searched in systematic reviews has risen in the period 1994–2014 (from 1 to 4) [ 80 ], but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [ 48 ]. This is perhaps because the number of databases searched is the wrong question; researchers should instead focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to think about how to report this in studies to allow findings to be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or supplementary search methods [ 81 ]. Search summaries document both the yield and accuracy of searches, which could prospectively inform resource use and decisions to search or not to search specific databases in topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is to be considered representative and not unique). In offering a body of practice, these data would be of greater practical use than current studies, which are considered little more than individual case studies [ 82 , 83 , 84 , 85 , 86 , 87 , 88 , 89 , 90 ].

When to search databases is another question posed in the literature. Beyer et al. [ 91 ] report that databases can be prioritised for literature searching, which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first. Paradoxically, this links to studies that suggest PubMed should be searched in addition to MEDLINE (Ovid interface), since this improves the currency of systematic reviews [ 92 , 93 ]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method, in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching against a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [ 94 ].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching that follows bibliographic database searching. As Table 2 sets out, guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [ 95 ]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [ 9 ]. Four guidance documents [ 2 , 3 , 6 , 9 ] acknowledged that searching beyond bibliographic databases was necessary since ‘databases are not the only source of literature’ [ 2 ]. Only one document reported any guidance on determining when to use supplementary methods: the IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’, which implies that the use of these methods is optional rather than mandatory. This is in contrast to the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged [ 37 , 61 , 62 , 96 , 97 , 98 , 99 , 100 , 101 ], but the value of these methods for identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using them [ 95 ]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [ 95 ]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching and the extent to which these search methods should be used in systematic reviews.

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [ 2 , 4 , 6 , 8 , 10 ]. This guidance typically itemised available bibliographic management tools rather than offering guidance on how to use them specifically [ 2 , 4 , 6 , 8 ]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web-searching) [ 10 ].

The literature on using bibliographic management tools is not large relative to the number of ‘how to’ videos on platforms such as YouTube (see for example [ 102 ]). These YouTube videos confirm the overall lack of ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in EndNote [ 103 , 104 ], and Gall tests the direct search function within EndNote to access databases such as PubMed, finding a number of limitations [ 105 ]. Coar et al. and Ahmed et al. consider the role of the free-source tool Zotero [ 106 , 107 ]. Managing references is a key administrative function in the review process, particularly for documenting searches as required by PRISMA guidance.

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 9 ]. Six documents provided guidance on reporting the process of literature searching with specific criteria to report [ 3 , 4 , 6 , 8 , 9 , 10 ]. There was consensus on reporting: the databases searched (and the host searched by), the search strategies used, and any use of limits (e.g. date, language, search filters (The CRD handbook called for these limits to be justified [ 6 ])). Three guidance documents reported that the number of studies identified should be recorded [ 3 , 6 , 10 ]. The number of duplicates identified [ 10 ], the screening decisions [ 3 ], a comprehensive list of grey literature sources searched (and full detail for other supplementary search methods) [ 8 ], and an annotation of search terms tested but not used [ 4 ] were identified as unique items in four documents.

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in the Additional file 1 of the review [ 9 ].

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [ 108 ]. The guidance broadly requires the reporting of data that corresponds with the requirements of the PRISMA statement although documents typically ask for diverse and additional items [ 108 ]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [ 109 ] and this remains the case as of 2017, as evidenced in the guidance documents, and in spite of the publication of the PRISMA guidelines in 2009 [ 110 ]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews since it sets out clearly what was done and how the conclusions of the review can be believed [ 52 , 109 ]. Despite strong endorsement in the guidance documents, specifically supported in PRISMA guidance, and in other related reporting standards too (such as ENTREQ for qualitative evidence synthesis and STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [ 31 , 110 , 111 , 112 , 113 , 114 , 115 , 116 , 117 , 118 , 119 ]. To explore issues experienced by authors in reporting literature searches, and to look at uptake of PRISMA, Rader et al. [ 120 ] surveyed over 260 review authors to determine common problems, and their work summarises the practical aspects of reporting literature searching [ 120 ]. Atkinson et al. [ 121 ] have also analysed reporting standards for literature searching, summarising recommendations and gaps for reporting search strategies [ 121 ].

One area that is less well covered by the guidance, but that nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent example, and it aims to develop evidence-based guidelines for the peer review of electronic search strategies [ 5 , 122 , 123 ]. A corresponding guideline for the documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Rader et al. (2014), 86% of survey respondents (153/178) identified a need for further guidance on what aspects of the literature search process to report [ 120 ]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [ 108 ]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [ 124 ]) and ROBIS (Whiting et al. [ 125 ]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, a proxy on what to report [ 124 , 125 ]. A study comparing PRISMA with quality appraisal checklists for systematic reviews would begin to address the call, identified by Rader et al., for further guidance on what to report [ 120 ].

Limitations

Other handbooks exist.

A potential limitation of this literature review is the focus on guidance produced in Europe (the UK specifically) and Australia. We justify our selection of the nine guidance documents reviewed in this literature review in the section “Identifying guidance”. In brief, these nine guidance documents were selected as the most relevant health care guidance informing UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [ 126 ], the Institute of Medicine [ 127 ], and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [ 128 ]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance uniquely. The Cochrane Handbook, first published in 1994, is notably a key source of reference in guidance and systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample of guidance handbooks to include North American handbooks, and guidance handbooks from other relevant countries too, would alter the findings of this literature review or develop further support for the process model. Since we cannot be clear, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American, and other, guidance documents (before selecting the guidance documents considered in this review), however, we do not consider that the inclusion of these further handbooks would alter significantly the findings of this literature review.

This is a literature review

A further limitation of this review was that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could help contribute to the exploration and development of the key stages identified in this review.

Conclusions

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be common convention in nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents which suggests consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews.

In Table 2 , we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods. Bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, there is little evidence for a consistent process with diverse guidance across documents. This may reflect differences in the core focus across each document, linked to differences in identifying effectiveness studies or qualitative studies, for instance.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim and that this process should be reported transparently so that it can be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, it is clear that comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which is approximately the same point.

Defining the key stages in this review helps categorise the scholarship available, and it prioritises areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews. It is where the search strategy structure is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of using the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process which they each report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as that for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Abbreviations

BeHEMoTh: Behaviour of interest; Health context; Exclusions; Models or Theories

CDSR: Cochrane Database of Systematic Reviews

CENTRAL: The Cochrane Central Register of Controlled Trials

DARE: Database of Abstracts of Reviews of Effects

ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research

IQWiG: Institute for Quality and Efficiency in Healthcare

NICE: National Institute for Clinical Excellence

PICO: Population, Intervention, Comparator, Outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SPICE: Setting, Perspective, Intervention, Comparison, Evaluation

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology

TSC: Trial Search Co-ordinators

References

1. Booth A. Unpacking your literature search toolbox: on search styles and tactics. Health Information & Libraries Journal. 2008;25(4):313–7.

2. Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.

3. Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG Methods Resources. 7 Information retrieval. 2014. Available from: https://www.ncbi.nlm.nih.gov/books/NBK385787/ .

4. NICE: National Institute for Health and Care Excellence. Developing NICE guidelines: the manual. 2014. Available from: https://www.nice.org.uk/media/default/about/what-we-do/our-programmes/developing-nice-guidelines-the-manual.pdf .

5. Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. Peer Review of Electronic Search Strategies: PRESS; 2008.

6. Centre for Reviews & Dissemination. Systematic reviews – CRD’s guidance for undertaking reviews in healthcare. York: Centre for Reviews and Dissemination, University of York; 2009.

7. EUnetHTA: European Network for Health Technology Assessment. Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness. 2016. Available from: http://www.eunethta.eu/sites/default/files/Guideline_Information_Retrieval_V1-1.pdf .

8. Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: Campbell Collaboration; 2017. Available from: https://www.campbellcollaboration.org/library/searching-for-studies-information-retrieval-guide-campbell-reviews.html

9. Lefebvre C, Manheimer E, Glanville J. Chapter 6: searching for studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions; 2011.

10. Collaboration for Environmental Evidence. Guidelines for systematic review and evidence synthesis in environmental management. Environmental Evidence; 2013. Available from: http://www.environmentalevidence.org/wp-content/uploads/2017/01/Review-guidelines-version-4.2-final-update.pdf .

11. The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual. 2014 ed. The Joanna Briggs Institute; 2014. Available from: https://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

12. Beverley CA, Booth A, Bath PA. The role of the information specialist in the systematic review process: a health information case study. Health Inf Libr J. 2003;20(2):65–74.

13. Harris MR. The librarian's roles in the systematic review process: a case study. Journal of the Medical Library Association. 2005;93(1):81–7.

14. Egger JB. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One. 2015;10(5):e0125931.

15. Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.

16. McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.

17. Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

18. Weller AC. Mounting evidence that librarians are essential for comprehensive literature searches for meta-analyses and Cochrane reports. J Med Libr Assoc. 2004;92(2):163–4.

19. Swinkels A, Briddon J, Hall J. Two physiotherapists, one librarian and a systematic literature review: collaboration in action. Health Info Libr J. 2006;23(4):248–56.

20. Foster M. An overview of the role of librarians in systematic reviews: from expert search to project manager. EAHIL. 2015;11(3):3–7.

21. Lawson L. Operating outside library walls. 2004.

22. Vassar M, Yerokhin V, Sinnett PM, Weiher M, Muckelrath H, Carr B, et al. Database selection in systematic reviews: an insight through clinical neurology. Health Inf Libr J. 2017;34(2):156–64.

23. Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, et al. A competency framework for librarians involved in systematic reviews. Journal of the Medical Library Association: JMLA. 2017;105(3):268–75.

24. Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990–2012. Journal of the Medical Library Association: JMLA. 2013;101(4):268–77.

25. Crum JA, Cooper ID. Emerging roles for biomedical librarians: a survey of current practice, challenges, and changes. Journal of the Medical Library Association: JMLA. 2013;101(4):278–86.

26. Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.

27. Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–8.

28. Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Academic Medicine: Journal of the Association of American Medical Colleges. 2011;86(8):1049–54.

29. Meert D, Torabi N, Costella J. Impact of librarians on reporting of the literature searching component of pediatric systematic reviews. Journal of the Medical Library Association: JMLA. 2016;104(4):267–77.

30. Morris M, Boruff JT, Gore GC. Scoping reviews: establishing the role of the librarian. Journal of the Medical Library Association: JMLA. 2016;104(4):346–54.

31. Koffel JB, Rethlefsen ML. Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One. 2016;11(9):e0163309.

32. Fehrmann P, Thomas J. Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods. 2011;2(1):15–32.

33. Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews. 2016;5(1):74.

34. Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technology Assessment (Winchester, England). 2003;7(1):1–76.

35. Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, et al. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.

36. Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–5.

37. Papaioannou D, Sutton A, Carroll C, Booth A, Wong R. Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Inf Libr J. 2010;27(2):114–22.

38. Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Systematic Reviews. 2015;4(1):36.

Betrán AP, Say L, Gülmezoglu AM, Allen T, Hampson L. Effectiveness of different databases in identifying studies for systematic reviews: experience from the WHO systematic review of maternal morbidity and mortality. BMC Med Res Methodol. 2005;5

Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.

Article   PubMed   CAS   Google Scholar  

Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet (London, England). 1997;350(9074):326–9.

Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health technology assessment (Winchester, England). 2003;7(41):1–90.

Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

Mills EJ, Kanters S, Thorlund K, Chaimani A, Veroniki A-A, Ioannidis JPA. The effects of excluding treatments from network meta-analyses: survey. BMJ : British Medical Journal. 2013;347

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.

van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol. 2009;62(8):838–44.e3.

Buchberger B, Krabbe L, Lux B, Mattivi JT. Evidence mapping for decision making: feasibility versus accuracy - when to abandon high sensitivity in electronic searches. German medical science : GMS e-journal. 2016;14:Doc09.

Lorenc T, Pearson M, Jamal F, Cooper C, Garside R. The role of systematic reviews of qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods. 2012;3(1):1–10.

Gough D. Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Res Pap Educ. 2007;22(2):213–28.

Barroso J, Gollop CJ, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. West J Nurs Res. 2003;25(2):153–78.

Britten N, Garside R, Pope C, Frost J, Cooper C. Asking more of qualitative synthesis: a response to Sally Thorne. Qual Health Res. 2017;27(9):1370–6.

Booth A, Carroll C. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Health Info Libr J. 2015;32(3):220–35.

Kwon Y, Powelson SE, Wong H, Ghali WA, Conly JM. An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks. Syst Rev. 2014;3:135.

Nussbaumer-Streit B, Klerings I, Wagner G, Titscher V, Gartlehner G. Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and meta-epidemiologic study. Systematic Reviews. 2016;5:197.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Ogilvie D, Hamilton V, Egan M, Petticrew M. Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? J Epidemiol Community Health. 2005;59(9):804–8.

Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

Pearson M, Moxham T, Ashton K. Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof. 2011;34(3):297–308.

Levay P, Raynor M, Tuvey D. The Contributions of MEDLINE, Other Bibliographic Databases and Various Search Techniques to NICE Public Health Guidance. 2015. 2015;10(1):19.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Briscoe S, Cooper C, Glanville J, Lefebvre C. The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation. Res Synth Methods. 2017;8(3):256–7.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges. Research Synthesis Methods.n/a-n/a.

Petrova M, Sutcliffe P, Fulford KW, Dale J. Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study. Journal of the American Medical Informatics Association : JAMIA. 2012;19(3):479–88.

Stansfield C, Thomas J, Kavanagh J. 'Clustering' documents automatically to support scoping reviews of research: a case study. Res Synth Methods. 2013;4(3):230–41.

PubMed   Google Scholar  

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Andrew B. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–68.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43.

Whiting P, Westwood M, Bojke L, Palmer S, Richardson G, Cooper J, et al. Clinical effectiveness and cost-effectiveness of tests for the diagnosis and investigation of urinary tract infection in children: a systematic review and economic model. Health technology assessment (Winchester, England). 2006;10(36):iii-iv, xi-xiii, 1–154.

Cooper C, Levay P, Lorenc T, Craig GM. A population search filter for hard-to-reach populations increased search efficiency for a systematic review. J Clin Epidemiol. 2014;67(5):554–9.

Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Systematic Reviews. 2012;1(1):19.

Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S. Prospective comparison of search strategies for systematic reviews: an objective approach yielded higher sensitivity than a conceptual one. J Clin Epidemiol. 2016;77:118–24.

Craven J, Levay P. Recording database searches for systematic reviews - what is the value of adding a narrative to peer-review checklists? A case study of nice interventional procedures guidance. Evid Based Libr Inf Pract. 2011;6(4):72–87.

Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.

Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, et al. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66(9):1051–7.

Cooper C, Rogers M, Bethel A, Briscoe S, Lowe J. A mapping review of the literature on UK-focused health and social care databases. Health Inf Libr J. 2015;32(1):5–22.

Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Inf Libr J. 2009;26(2):126–35.

Lam MT, McDiarmid M. Increasing number of databases searched in systematic reviews and meta-analyses between 1994 and 2014. Journal of the Medical Library Association : JMLA. 2016;104(4):284–9.

Bethel A, editor Search summary tables for systematic reviews: results and findings. HLC Conference 2017a.

Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews - are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16(1):161.

Adams CE, Frederick K. An investigation of the adequacy of MEDLINE searches for randomized controlled trials (RCTs) of the effects of mental health care. Psychol Med. 1994;24(3):741–8.

Kelly L, St Pierre-Hansen N. So many databases, such little clarity: searching the literature for the topic aboriginal. Canadian family physician Medecin de famille canadien. 2008;54(11):1572–3.

Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Injury Prevention. 2008;14(6):401–4.

Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58(9):867–73.

Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.

Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complementary Therapies in Medicine. 2004;12(4):228–32.

Suarez-Almazor ME, Belseck E, Homik J, Dorgan M, Ramos-Remus C. Identifying clinical trials in the medical literature with electronic databases: MEDLINE alone is not enough. Control Clin Trials. 2000;21(5):476–87.

Taylor B, Wylie E, Dempster M, Donnelly M. Systematically retrieving research: a case study evaluating seven databases. Res Soc Work Pract. 2007;17(6):697–706.

Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Info Libr J. 2013;30(1):49–58.

Duffy S, de Kock S, Misso K, Noake C, Ross J, Stirk L. Supplementary searches of PubMed to improve currency of MEDLINE and MEDLINE in-process searches via Ovid. Journal of the Medical Library Association : JMLA. 2016;104(4):309–12.

Katchamart W, Faulkner A, Feldman B, Tomlinson G, Bombardier C. PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. J Clin Epidemiol. 2011;64(7):805–7.

Cooper C, Lovell R, Husk K, Booth A, Garside R. Supplementary search methods were more effective and offered better value than bibliographic database searching: a case study from public health and environmental enhancement (in Press). Research Synthesis Methods. 2017;

Cooper C, Booth, A., Britten, N., Garside, R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: A methodological review. (In Press). BMC Systematic Reviews. 2017.

Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ (Clinical research ed). 2005;331(7524):1064–5.

Article   PubMed Central   Google Scholar  

Hinde S, Spackman E. Bidirectional citation searching to completion: an exploration of literature searching methods. PharmacoEconomics. 2015;33(1):5–11.

Levay P, Ainsworth N, Kettle R, Morgan A. Identifying evidence for public health guidance: a comparison of citation searching with web of science and Google scholar. Res Synth Methods. 2016;7(1):34–45.

McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias RS, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ (Clinical research ed). 1998;317(7172):1562–3.

Westphal A, Kriston L, Holzel LP, Harter M, von Wolff A. Efficiency and contribution of strategies for finding randomized controlled trials: a case study from a systematic review on therapeutic interventions of chronic depression. Journal of public health research. 2014;3(2):177.

Matthews EJ, Edwards AG, Barker J, Bloor M, Covey J, Hood K, et al. Efficient literature searching in diffuse topics: lessons from a systematic review of research on communicating risk to patients in primary care. Health Libr Rev. 1999;16(2):112–20.

Bethel A. Endnote Training (YouTube Videos) 2017b [Available from: http://medicine.exeter.ac.uk/esmi/workstreams/informationscience/is_resources,_guidance_&_advice/ .

Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in EndNote. Journal of the Medical Library Association : JMLA. 2016;104(3):240–3.

Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. Journal of the Medical Library Association : JMLA. 2017;105(1):84–7.

Gall C, Brahmi FA. Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly. Medical reference services quarterly. 2004;23(3):25–32.

Ahmed KK, Al Dhubaib BE. Zotero: a bibliographic assistant to researcher. J Pharmacol Pharmacother. 2011;2(4):303–5.

Coar JT, Sewell JP. Zotero: harnessing the power of a personal bibliographic manager. Nurse Educ. 2010;35(5):205–7.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No consensus exists on search reporting methods for systematic reviews. J Clin Epidemiol. 2008;61(8):748–54.

Toews LC. Compliance of systematic reviews in veterinary journals with preferred reporting items for systematic reviews and meta-analysis (PRISMA) literature search reporting guidelines. Journal of the Medical Library Association : JMLA. 2017;105(3):233–9.

Booth A. "brimful of STARLITE": toward standards for reporting literature searches. Journal of the Medical Library Association : JMLA. 2006;94(4):421–9. e205

Faggion CM Jr, Wu YC, Tu YK, Wasiak J. Quality of search strategies reported in systematic reviews published in stereotactic radiosurgery. Br J Radiol. 2016;89(1062):20150878.

Mullins MM, DeLuca JB, Crepaz N, Lyles CM. Reporting quality of search methods in systematic reviews of HIV behavioral interventions (2000–2010): are the searches clearly explained, systematic and reproducible? Research Synthesis Methods. 2014;5(2):116–30.

Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association : JMLA. 2009;97(1):21–9.

Bigna JJ, Um LN, Nansseu JR. A comparison of quality of abstracts of systematic reviews including meta-analysis of randomized controlled trials in high-impact general medicine journals before and after the publication of PRISMA extension for abstracts: a systematic review and meta-analysis. Syst Rev. 2016;5(1):174.

Akhigbe T, Zolnourian A, Bulters D. Compliance of systematic reviews articles in brain arteriovenous malformation with PRISMA statement guidelines: review of literature. Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia. 2017;39:45–8.

Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One. 2011;6(11):e27611.

Wasiak J, Tyack Z, Ware R. Goodwin N. Jr. Poor methodological quality and reporting standards of systematic reviews in burn care management. International wound journal: Faggion CM; 2016.

Tam WW, Lo KK, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open. 2017;7(2):e013905.

Rader T, Mann M, Stansfield C, Cooper C, Sampson M. Methods for documenting systematic review searches: a discussion of common issues. Res Synth Methods. 2014;5(2):98–115.

Atkinson KM, Koenka AC, Sanchez CE, Moshontz H, Cooper H. Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res Synth Methods. 2015;6(1):87–95.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009;62(9):944–52.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Relevo R, Balshem H. Finding evidence for comparing medical interventions: AHRQ and the effective health care program. J Clin Epidemiol. 2011;64(11):1168–77.

Medicine Io. Standards for Systematic Reviews 2011 [Available from: http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews/Standards.aspx .

CADTH: Resources 2018.


Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Authors and Affiliations

Institute of Health Research, University of Exeter Medical School, Exeter, UK

Chris Cooper & Jo Varley-Campbell

HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

Nicky Britten

European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK

Ruth Garside


Contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Cooper.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Cooper, C., Booth, A., Varley-Campbell, J. et al. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18, 85 (2018). https://doi.org/10.1186/s12874-018-0545-3


Received: 20 September 2017

Accepted: 06 August 2018

Published: 14 August 2018

DOI: https://doi.org/10.1186/s12874-018-0545-3

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Literature Search Process
  • Citation Chasing
  • Tacit Models
  • Unique Guidance
  • Information Specialists

BMC Medical Research Methodology

ISSN: 1471-2288

University of Leeds Library: Literature searching

Systematic reviews

A systematic review is a tightly structured literature review which aims to analyse, appraise and synthesise all evidence available on a particular question or topic in order to arrive at a considered judgement or set of conclusions.

Researchers conducting systematic reviews use explicit, systematic methods documented in advance with a protocol to minimise bias and arrive at a balanced conclusion.  

A systematic review can take several months to a couple of years to complete and often requires collaboration between a group of people. Originally developed in medical and health-related fields to support evidence-based practice, systematic reviews are increasingly being undertaken in other fields such as environmental science, business, and social science.

This  video on systematic reviews from Cochrane explains why they are important and how they are done.

Are you doing a systematic review or a systematic search?

For many undergraduate assignments, dissertations, or theses, you will be required to provide an overview of the literature on a research topic. While this requires a comprehensive and structured search of the literature, you will not be required to adhere to the strict methodology of a systematic review.

Watch our video on systematic searching to find out more.

Which type of review is appropriate for your purposes?

When planning your evidence review, it is important to choose a methodology that matches the purpose of the review.

This academic article provides an extensive analysis of the strengths and weaknesses of different review types.

Also see the "Right Review" tool to assist you in identifying which type of evidence synthesis would be appropriate for your research question.

Guidance and protocols for conducting systematic reviews

Cochrane Handbook for Systematic Reviews of Interventions This handbook contains methodological guidance for the preparation and maintenance of Cochrane Reviews on the effects of healthcare interventions.

Writing a Campbell Collaboration Systematic Review This webpage provides guidance on writing a Campbell review on the effects of social interventions.

Cochrane-Campbell Handbook for Qualitative Evidence Synthesis This handbook describes the steps involved in preparing and maintaining systematic reviews of qualitative evidence for Cochrane and Campbell reviews. Its guidance is applicable to all systematic reviews of qualitative evidence.

Joanna Briggs Institute (JBI) Reviewer's Manual This manual provides guidance on different types of systematic reviews and scoping reviews aimed at supporting the translation of healthcare research into practice.

The Collaboration for Environmental Evidence (CEE): Guidelines and Standards for Evidence Synthesis in Environmental Management This website provides guidance on conducting a CEE Evidence Synthesis.

Recommendations for the conduct of systematic reviews in toxicology and environmental health research (COSTER) COSTER provides a set of recommendations on the production of systematic reviews in environmental health and toxicology.

Collaborative Approach to Meta Analysis and Review of Animal Data from Experimental Studies (CAMARADES) tools and resources CAMARADES provides tools and resources for producing systematic review and meta-analysis of data from experimental animal studies.

Non-Interventional, Reproducible, and Open (NIRO) Systematic Review guidelines The NIRO guidelines provide guidance on undertaking a systematic review in the area of non-interventional research.

PRISMA PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions.

ROSES (RepOrting standards for Systematic Evidence Syntheses) These standards include a pro forma, flow diagram and descriptive summary of the plan and conduct of environmental systematic reviews and systematic maps.

EQUATOR (Enhancing the QUAlity and Transparency Of health Research) This is a library of reporting guidelines and also links to other resources relevant to research reporting and writing.  

Some recommended additional reading

An introduction to systematic reviews

Systematic approaches to a successful literature search

Systematic searching

Remember to document your search for your systematic review. 

University of Michigan Library: Research Guides

Evidence Syntheses (Scoping, systematic, & other types of reviews)

Search Strategy

Developing an Answerable Question

This part of the guide covers creating a search strategy, identifying synonyms & related terms, keywords vs. index terms, combining search terms using Boolean operators, an example systematic review (SR) search strategy, and search limits.


Validated Search Filters

Depending on your topic, you may be able to save time in constructing your search by using specific search filters (also called "hedges") developed & validated by researchers; a brief sketch of combining a filter with a topic search follows the list below. Validated filters include:

  • PubMed’s Clinical Queries & Health Services Research Queries pages
  • Ovid Medline’s Clinical Queries filters (also documented by McMaster Health Information Research Unit)
  • EBSCOhost’s main search page for CINAHL (Clinical Queries category)
  • American University of Beirut, especially for "humans" filters
  • Countway Library of Medicine methodology filters
  • InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource
  • SIGN (Scottish Intercollegiate Guidelines Network) filters page
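Published filters are database-specific search strings that you append to your topic search. As a rough sketch only (the topic block and the "filter" below are simplified placeholders, not any specific validated hedge; copy the exact syntax from the resources above for a real review), this is how a filter could be combined with a topic search and run against PubMed via the NCBI E-utilities esearch endpoint:

```python
# Rough sketch: combining a topic block with a methodological filter and counting
# the results via the NCBI E-utilities "esearch" endpoint.
# The filter string below is a simplified placeholder, NOT a validated hedge;
# copy the exact syntax from ISSG, SIGN, Cochrane, etc. for a real review.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

topic = '(bronchiectasis[tiab] OR "Bronchiectasis"[mh])'                  # one concept block
methods_filter = '(randomized controlled trial[pt] OR randomised[tiab])'  # placeholder filter

query = f"{topic} AND {methods_filter}"
resp = requests.get(ESEARCH, params={"db": "pubmed", "term": query, "retmode": "json"})
resp.raise_for_status()

print("Query:", query)
print("Records found:", resp.json()["esearchresult"]["count"])
```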

Why Create a Sensitive Search?

In many literature reviews, you try to balance the sensitivity of the search (the proportion of relevant articles that the search retrieves) against its precision, sometimes called specificity (the proportion of retrieved articles that are actually relevant), accepting that you will miss some. In an evidence synthesis, you want a very sensitive search: you are trying to find all potentially relevant articles; a small worked example follows the list below. An evidence synthesis search will:

  • contain many synonyms & variants of search terms
  • use care in adding search filters
  • search multiple resources, databases & grey literature, such as reports & clinical trials.
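To make the trade-off concrete, here is a small illustrative calculation. All of the numbers are hypothetical and do not come from any real search; they simply show that a sensitive search finds nearly all relevant records at the cost of many more records to screen.

```python
# Illustrative only: hypothetical counts showing the sensitivity/precision trade-off.
def search_metrics(retrieved, relevant_retrieved, relevant_total):
    sensitivity = relevant_retrieved / relevant_total  # share of all relevant records found
    precision = relevant_retrieved / retrieved         # share of retrieved records that are relevant
    return round(sensitivity, 2), round(precision, 3)

# A narrower, everyday literature search (hypothetical numbers):
print(search_metrics(retrieved=200, relevant_retrieved=40, relevant_total=60))    # (0.67, 0.2)

# A sensitive evidence-synthesis search (hypothetical numbers):
print(search_metrics(retrieved=5000, relevant_retrieved=58, relevant_total=60))   # (0.97, 0.012)
```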

PICO is a good framework to help clarify your systematic review question.

P - Patient, Population or Problem: What are the important characteristics of the patients and/or the problem?

I - Intervention: What do you plan to do for the patient or problem?

C - Comparison: What, if anything, is the alternative to the intervention?

O - Outcome: What is the outcome that you would like to measure?

A minimal sketch of turning PICO elements into a Boolean search string follows below.
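As a hedged sketch (the concept terms below are illustrative examples, not a validated strategy), PICO elements can be translated into concept blocks: synonyms within a concept are OR'd together, and the concept blocks are AND'd together.

```python
# Sketch: turning PICO elements into concept blocks and a Boolean search string.
# The terms are illustrative examples, not a validated search strategy.
pico = {
    "Population":   ["bronchiectasis"],
    "Intervention": ["airway clearance", "chest physiotherapy"],
    # "Comparison" is often left unsearched (any comparator / usual care).
    "Outcome":      ["exacerbation*", "quality of life"],
}

def concept_block(terms):
    """OR together the synonyms for one concept, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

search_string = " AND ".join(concept_block(terms) for terms in pico.values())
print(search_string)
# (bronchiectasis) AND ("airway clearance" OR "chest physiotherapy") AND (exacerbation* OR "quality of life")
```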

Beyond PICO: the SPIDER tool for qualitative evidence synthesis.

5-SPICE: the application of an original framework for community health worker program design, quality improvement and research agenda setting.

A well-constructed search strategy is the core of your evidence synthesis and will be reported in the methods section of your paper. The search strategy retrieves the majority of the studies you will assess for eligibility & inclusion, and its quality also determines which items may have been missed. Informationists can be partners in this process.

For an evidence synthesis, it is important to broaden your search to maximize the retrieval of relevant results.

Use keywords: terms that other people might use to describe the topic.

Identify the appropriate index terms (subject headings) for your topic.

  • Index terms differ by database (MeSH, or Medical Subject Headings, in MEDLINE/PubMed; Emtree terms in Embase; subject headings elsewhere) and are assigned by experts based on the article's content.
  • Check the indexing of sentinel articles (3-6 articles that are fundamental to your topic); sentinel articles can also be used to test your search results. A minimal sketch of one way to check this indexing follows below.
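For example (a minimal sketch; the PMID is a placeholder you would replace with one of your own sentinel articles), the MeSH headings applied to a PubMed record can be listed via the E-utilities efetch endpoint:

```python
# Sketch: list the MeSH headings applied to a sentinel article to inform term selection.
# The PMID below is a placeholder; substitute one of your own sentinel articles.
import requests
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
pmid = "12345678"  # placeholder PMID

resp = requests.get(EFETCH, params={"db": "pubmed", "id": pmid, "retmode": "xml"})
resp.raise_for_status()

root = ET.fromstring(resp.text)
print(f"MeSH headings for PMID {pmid}:")
for descriptor in root.iter("DescriptorName"):
    print(" -", descriptor.text)
```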

Include spelling variations (e.g., behavior, behaviour).

Both types of search terms are useful & both should be used in your search.

Keywords help to broaden your results. They are searched for in at least journal titles, author names, article titles, & article abstracts, and they can also be tagged to search all text.

Index/subject terms  help to focus your search appropriately, looking for items that have had a specific term applied by an indexer.

Boolean operators let you combine search terms in specific ways to broaden or narrow your results.

An example of a search string for one concept in a systematic review (original screenshot not reproduced; an illustrative reconstruction follows below).

In the PubMed example, [mh] = MeSH (Medical Subject Headings) and [tiab] = Title/Abstract, a more focused version of a keyword search.
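The reconstruction below is illustrative only: the concept and terms are examples chosen for this guide, not the content of the original screenshot.

```python
# Illustrative reconstruction, not the original figure: one concept block that
# combines a MeSH index term with free-text title/abstract keywords in PubMed syntax.
concept_block = (
    '"Bronchiectasis"[mh]'        # index term assigned by MEDLINE indexers
    " OR bronchiectasis[tiab]"    # keyword in the title or abstract
    ' OR "bronchial dilatation"[tiab]'
)
print(f"({concept_block})")
# ("Bronchiectasis"[mh] OR bronchiectasis[tiab] OR "bronchial dilatation"[tiab])
```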

A typical database search limit allows you to narrow results so that you retrieve articles that are most relevant to your research question. Limit types vary by database & include:

  • Article/publication type
  • Publication dates

In an evidence synthesis search, you should use care when applying limits, as you may lose articles inadvertently. For more information, see Chapter 4: Searching for and selecting studies of the Cochrane Handbook, particularly regarding language & format limits in Section 4.4.5. An illustrative date-limit example follows.
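As a small illustration (the dates and topic block are placeholders), a publication-date limit can be written directly into the query string using the [dp] field, which is usually a safer kind of limit than language or publication-format restrictions:

```python
# Sketch: appending a publication-date limit to a query in PubMed syntax.
# The dates and the topic block are placeholders; use language/format limits sparingly.
base_query = '(bronchiectasis[tiab] OR "Bronchiectasis"[mh])'
date_limit = '("2017/12/01"[dp] : "2022/12/31"[dp])'  # [dp] = date of publication

limited_query = f"{base_query} AND {date_limit}"
print(limited_query)
```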


Speaker 1: The first step of doing a systematic literature review is coming up with a review question, like what do you actually want to know about the world and how can you phrase that as a simple question. You can write down all of the questions you want and then choose from the best one or a combination but I like to go to ChatGPT and use them as like a sounding board and a research assistant so that they can help me really sort of refine what I actually want to do a systematic literature review on. So here we are, we head over and we say, help me define a systematic literature review research question about beards and their smell. Maybe that's what I was interested in. My beard smells lovely. It smells like Australian sandalwood at the moment. Beautiful. It says a systematic literature review research question should be specific blah blah blah. And then it comes up with one. How do microbial communities in beards influence blah blah blah. And it gives me kind of a first start. The one thing I found about any AI that you're asking, it makes a lot of assumptions about what you want to know. So I highly recommend that you go in and you sort of like re-prompt it and you say, I like this bit, but I don't like this bit, or this bit's good, but you're a little bit off on this area. That is how you kind of use this as a research assistant as like a sounding board for all of your ideas. Then once you've got a research question and you need to spend probably most of the time of the first bit of searching on this because it's so very important. Come up with a definitive but broad, and I know that is so contradictory, but you need to come up with something that is focused enough that it will give you sort of like a good outcome but not too broad that all of a sudden, you know, like you're dealing with thousands and thousands of papers. So that is the challenge, and use ChatGPT to get that balance. Now, you can also use frameworks. There's different frameworks that you can use which will help you with this first sort of like step. And I just asked ChatGPT. I'm familiar with some of these, but some of these were new to me as well. I said, what frameworks for a systematic literature review can be used for this question? And it says Prisma, it used Cochrane Handbook for systematic reviews, it's got the Joanna Briggs Institute Methodology, Spyder and Pico. One of the most famous ones arguably is Pico where you say, okay, I've got this P, population, I've got this I, intervention that I'm looking at, I've got this C, comparison of all of the things that I found and O, outcome. Then what happened when they did these things? And quite often the C stands for comparison because it's a quantitative measurement of comparing it to say like a placebo if you're doing a lot of health stuff or another sort of intervention. So that's how we use frameworks to start thinking about our research question. What population are we gonna look at? What intervention are we looking at? What comparison, if any, are we gonna look at? And we're gonna look for the outcomes within those systems and structures that we set in place. So that's step one. Step two, actually, is what defines a literature review from a systematic literature review? Let's get into that. This is so very important for a systematic literature review because we need to know what methods we are going to use to filter all of the different stuff that we're gonna come across. We wanna know stuff like what procedure are we gonna go through to find the literature. 
We wanna know what keywords we're gonna use, what semantic search terms we're gonna use in certain databases to find the literature. Now, I like to head over to something like Search Smart. This will give you sort of like the best databases to search for your systematic literature review. And so all you need to do is look for scholarly records or clinical trials if you want, put in the subjects or the keywords and then sort of like define whether or not you want systemic keyword searching, backwards citation, forwards, all of that sort of stuff and also non-paywall databases and you click Start Comparison and it will go off and give you all of the different databases that you can look at. Then, keywords. Keywords are so very important because we often find research based on how they're described like in the abstract or the title. So be very specific with your keywords. By the way, I have another video, go check it out here, where I talk about how to find all of the literature that you'll ever need using different approaches, AI, Boolean searches, old school keyword searches, and that video will allow you to find everything you need in your systematic review. But databases are very important. Where are you gonna search? what keywords are you gonna search for, what semantic search questions, and that's new for this sort of like era of AI because it allows us to actually just put our research question into a database and have it sort of understand that question and give us results back. So now we're on to the exciting part which is finding the research papers. The one thing I like to do first and foremost, and that's only possible now because of AI's semantic search. I love it so much. Let's head over to the three tools that I think you would wanna use. The first one is Elicit. Ask a research question. Beards and, ooh, not bears, and smells. Let's see, that's not really a research question, but let's see what it comes up with. But it's that sort of stuff that you need to sort of like thinking about. Like, is that a keyword combination that you want to put in all of the databases or not? Whatever you decide using your meat brain. So, here we go. Here's all of the different papers that I could talk about. Brilliant. The next one is consensus. Beards and smell. Then we can go off and find all of the papers here using that sort of semantic keyword search as well. And we've also got size space. I can go here, beards and smell. And this is where I like to find all of my stuff using keywords and semantic search. So making sense, oh, this hasn't really done too well with beards, beards and issues, blah, blah, blah. So overall, you can see that we've got a little bit of discrepancy between what these pick up. So it's very important, I think, that you try a few to see what works best for you. And then finally, we gotta head over to something like Google Scholar, and we wanna say, okay, what keywords are we gonna put in? This isn't semantic search, this is just putting in beards and smell. And we can use Boolean operators to make sure that we're actually gonna get the papers that are relevant for us. So we can go beards, and then and, because we want and, smell. There we are. So then we're gonna come up with all of the smell and beard articles that it's going to come up with. The smell report, shame and glory. Only the beards, even after beards became merely rather than daring, the rather radical, oh my God, I don't like this one. 
The British Journal of Sociology, come on now, you can do better than that. But that is where you can go and actually find all of this information. And so semantic, keywords, databases, and Boolean operators to have a look at what you're excluding and including in your search is very, very important. So that is the step three. Yeah, step three, that is searching for the paper. And now we need to filter and screen and read. Once we've ended up with a load of papers from our searching based on the criteria and the methods we set out in step two, we've now got like an exclusion and inclusion protocol where we need to say, okay, we've got all of these studies, Which ones are we going to include and which ones are we going to exclude? And it's a really sort of like simple process of just filtering. This is why you need a load of papers at the top. Put loads of papers at the top and then they have to filter down to the useful papers down the bottom. And it may only be a small fraction of all of the papers you found, but this is what a systematic review is all about. It's about making sure that we include the papers that are relevant for your research question and not just like general themes, which is like a normal literature review where we just sort of say, oh yeah, there's this theme and this theme and this theme. No, this one's much more focused, so we need to filter it. I like to use the Prisma flowchart to work out which ones I'm getting rid of and keep track of the ones I've got rid of and how much I've filtered it down. So a Prisma flowchart looks like this. We've got identification in the top here and then we've got records identified through database searching. In this case, they had 96. and then we've got other additional identified through other sources, and this was none in this bit. Then they removed duplicates, so there was two that were the same, so they removed one of them, and then they said, okay, we've got this many in screen, 95, and eligibility, full text articles assessed for eligibility, there was only five, and all of these were actually excluded because it didn't meet their criteria that they'd set out in part of their exclusion or inclusion criteria. So you can see we've got like examines treatment, not prevention. So this was like obviously like a health study where they were looking at treatment and not the prevention or something. So that was most of them, that was 52. Then one was pediatric, one was irrelevant. Oh no, loads were irrelevant, 37 were irrelevant. So you can see we've gone from 96 all the way down to five at this point. And then full text articles not included. Well, there was none there, which is great. but here we've got four which studies included in quantitative synthesis or a meta-analysis was only four, they got rid of 92 of them because they didn't meet the specific search and exclusion and inclusion criteria that they set. That is so important and that is very, very typical of a systematic review. So now it's about taking those special studies that you found and getting all of the important stuff out of them. you should read them, especially if there's only four. You should read them from end to beginning. No, don't read them like that. Read them however you want, normally with abstract, then to conclusions, then to introduction, then to method, anyway, you get the idea. Do you know what, actually, I've got another video on how to read like a PhD. Go check out that one there. It's much better than what I just said. 
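To make the funnel described above explicit, here is a small illustrative tally of typical PRISMA flow stages. The counts are hypothetical (the figures read out from the example flowchart do not all reconcile exactly), so treat this as a sketch of the structure, not the actual example.

```python
# Illustrative PRISMA-style tally with hypothetical counts.
identified = 96                      # records identified through database searching
duplicates_removed = 1               # duplicate records removed
screened = identified - duplicates_removed           # records screened on title/abstract
excluded_at_screening = 90                            # e.g. wrong focus, wrong population, irrelevant
full_text_assessed = screened - excluded_at_screening
excluded_at_full_text = 1
included = full_text_assessed - excluded_at_full_text

print(f"screened={screened}, full-text assessed={full_text_assessed}, included={included}")
# screened=95, full-text assessed=5, included=4
```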
But now you need to read them and you need to start thinking about how these studies are influencing your research question sort of response. Are they for it? Are they against it? Do they give you a new insight? Is there something sneaky in there when you look at them all together that is surprising? It's those sort of things that really should be sort of milling around in your head. We're not looking for any sort of definitive stuff just yet, but we just need to read, analyze, refine, understand, all of those stuff. Those words are very important, put them there. But now, we've got a couple of new ways that we can actually talk to all of our documents. So one place I really like is docanalyzer.ai and what you can do is upload your documents and tag them as, in this case I've got literature review, you can see I've got one, two, three, four, five, six here. So then we can go to labels and we can go chat with these six documents. And the one thing I love about docanalyzer is that it doesn't like try to make stuff up. If it doesn't understand what you're asking or it can't identify it in the documents that you've given it, it will just say, hey, I don't really know, can you give me a bit more information? It doesn't sort of like BS its way into chat, which I really like. So, for example here, it says to identify the important parts of the document, I would need more specific keywords or topics of interest. That's what I want from an AI, something that isn't just gonna make stuff up. Another thing you can do is head back over to size space, And in SciSpace, you can actually get results from my library. So if you put those very specific studies that you've filtered and found into your library, you can then ask it questions across that library, which I think is really, really fantastic. So not only do you read it all, if you can, if it's a sensible amount of papers, but then you can start chatting to all of the documents together in something like DocAnalyzer and SciSpace, and then you can get sort of further connections, further deeper inquiry into things that maybe you have missed. Or maybe there's just a question, you've read them all, and there's a question sort of in your mind. You're like, actually, does this apply to all of the papers or not? Put it into something like this and it will search across all of your documents. I absolutely love, I'm doing this today, Chef's Kiss, it's my new favorite thing. Chef's Kiss, yum, yum, yum, yum, yum. But doing that means that you're not gonna miss out on anything because you're going to use old school tactics by just reading, read, read, read, read, read, and new school tactics by using AI, AI, AI, AI. Together, they are the perfect combination, yes. And then it's all about writing it up, making sure that you actually talk about what your research question is, the methods you've used, the filtration criteria, and the exclusion and inclusion criteria, the keywords you search for, then what you've found, how they all sort of like relate together, and the outcome. What is the outcome of this literature review? Does it support your research questions? Does it give you a new insight? That is how you write this. That is the structure. It is so very sort of systematic. A systematic literature review has to be systematic, otherwise you'll just end up being completely lost in all of the papers. Oh, so many papers, so many papers. Filter them out, find the good ones, write it out. Brilliant. 
All right, if you like this video, Go check out this one where I talk about how to write an exceptional literature review with AI. It's going to be a great sort of addition to what you've learned in here. Go check it out.


A systematic literature review of deep learning-based text summarization: Techniques, input representation, training strategies, mechanisms, datasets, evaluation, and challenges




A systematic literature review of the clinical and socioeconomic burden of bronchiectasis


Background The overall burden of bronchiectasis on patients and healthcare systems has not been comprehensively described. Here, we present the findings of a systematic literature review that assessed the clinical and socioeconomic burden of bronchiectasis with subanalyses by aetiology (PROSPERO registration: CRD42023404162).

Methods Embase, MEDLINE and the Cochrane Library were searched for publications relating to bronchiectasis disease burden (December 2017–December 2022). Journal articles and congress abstracts reporting on observational studies, randomised controlled trials and registry studies were included. Editorials, narrative reviews and systematic literature reviews were included to identify primary studies. PRISMA guidelines were followed.

Results 1585 unique publications were identified, of which 587 full texts were screened and 149 were included. A further 189 citations were included from reference lists of editorials and reviews, resulting in 338 total publications. Commonly reported symptoms and complications included dyspnoea, cough, wheezing, sputum production, haemoptysis and exacerbations. Disease severity across several indices and increased mortality compared with the general population were reported. Bronchiectasis impacted quality of life across several patient-reported outcomes, with patients experiencing fatigue, anxiety and depression. Healthcare resource utilisation was considerable and substantial medical costs related to hospitalisations, treatments and emergency department and outpatient visits were accrued. Indirect costs included sick pay and lost income.

Conclusions Bronchiectasis causes significant clinical and socioeconomic burden. Disease-modifying therapies that reduce symptoms, improve quality of life and reduce both healthcare resource utilisation and overall costs are needed. Further systematic analyses of specific aetiologies and paediatric disease may provide more insight into unmet therapeutic needs.

  • Shareable abstract

Bronchiectasis imposes a significant clinical and socioeconomic burden on patients, their families and employers, and on healthcare systems. Therapies that reduce symptoms, improve quality of life and reduce resource use and overall costs are needed. https://bit.ly/4bPCHlp

  • Introduction

Bronchiectasis is a heterogeneous chronic respiratory disease clinically characterised by chronic cough, excessive sputum production and recurrent pulmonary exacerbations [ 1 ], and radiologically characterised by the abnormal widening of the bronchi [ 2 ]. Bronchiectasis is associated with several genetic, autoimmune, airway and infectious disorders [ 3 ]. Regardless of the underlying cause, the defining features of bronchiectasis are chronic airway inflammation and infection, regionally impaired mucociliary clearance, mucus hypersecretion and mucus obstruction, as well as progressive structural lung damage [ 4 , 5 ]. These features perpetuate one another in a “vicious vortex” leading to a decline in lung function, pulmonary exacerbations and associated morbidity, mortality and worsened quality of life [ 4 , 5 ]. Bronchiectasis can be further categorised into several infective and inflammatory endotypes and is associated with multiple comorbidities and underlying aetiologies [ 6 ].

Bronchiectasis has been described as an emerging global epidemic [ 7 ], with prevalence and incidence rates increasing worldwide [ 8 – 12 ]. The prevalence of bronchiectasis, as well as of the individual aetiologies, varies widely across geographic regions [ 13 ]. In Europe, the reported prevalence ranges from 39.1 (females) and 33.3 (males) cases per 100 000 inhabitants in Spain, and 68 (females) and 65 (males) cases per 100 000 inhabitants in Germany, to as high as 566 (females) and 486 (males) cases per 100 000 inhabitants in the UK [ 10 – 12 ]. In the US, the average overall prevalence was reported to be 139 cases per 100 000 [ 14 ], in Israel 234 cases per 100 000 [ 15 ], and in China 174 cases per 100 000 [ 8 ]. Studies show that bronchiectasis prevalence increases with age [ 14 ], which may increase the socioeconomic impact of bronchiectasis in countries with a disproportionately high number of older citizens. Large registry studies in patients with bronchiectasis have been published from the US (Bronchiectasis Research Registry) [ 16 ], Europe and Israel (European Multicentre Bronchiectasis Audit and Research Collaboration (EMBARC); the largest and most comprehensive report available to date) [ 17 ], India (EMBARC-India) [ 18 , 19 ], Korea (Korean Multicentre Bronchiectasis Audit and Research Collaboration) [ 20 ] and Australia (Australian Bronchiectasis Registry) [ 21 ].
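To put the per-100 000 prevalence figures above into perspective, the short sketch below (purely illustrative) converts a reported rate into an approximate absolute case count; the population figure used is an assumed round number, not a value taken from any of the cited studies.

```python
# Illustrative sketch: converting a reported prevalence per 100 000 inhabitants
# into an approximate absolute case count. The population figure below is a
# rounded placeholder, not a value taken from the review.

def estimated_cases(prevalence_per_100k: float, population: int) -> int:
    """Return the approximate number of prevalent cases implied by a rate."""
    return round(prevalence_per_100k / 100_000 * population)

# Example: a prevalence of 139 per 100 000 (the US average cited above)
# applied to an assumed population of 330 million inhabitants.
print(estimated_cases(139, 330_000_000))  # 458 700 cases under these assumptions
```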

Although there are currently no approved disease-modifying therapies for bronchiectasis [ 4 ], comprehensive clinical care recommendations for the management of patients with bronchiectasis have been published [ 22 , 23 ]. However, the burden that bronchiectasis imposes on patients and their families, as well as on healthcare systems, payers and employers, remains poorly understood. No review to date has used a systematic method to evaluate the overall disease burden of bronchiectasis. This is the first systematic literature review aimed at investigating and synthesising the clinical and socioeconomic burden of bronchiectasis. A better understanding of the overarching burden of bronchiectasis, both overall and by individual aetiologies and associated diseases, will highlight the need for new therapies and assist healthcare systems in planning care and required resources.

  • Methods

The protocol of this systematic review was registered on PROSPERO (reference number: CRD42023404162).

Search strategy

This systematic literature review was conducted according to the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines [ 24 ]. Embase, MEDLINE and the Cochrane Library were searched for studies related to the clinical and socioeconomic burden of bronchiectasis (noncystic fibrosis bronchiectasis (NCFBE) and cystic fibrosis bronchiectasis (CFBE)) using the search terms available in supplementary table S1 . Articles written in English and published over a 5-year period (December 2017–December 2022) were included.

Selection criteria

The following article types reporting on prospective and retrospective observational studies, registry studies and randomised controlled trials (only baseline data extracted) were included: journal articles, preprints, research letters, conference proceedings, conference papers, conference abstracts, meeting abstracts and meeting posters. Reviews, literature reviews, systematic reviews and meta-analyses, as well as editorials, commentaries, letters and letters to the editor, were included for the purpose of identifying primary studies. A manual search of references cited in selected articles was performed and references were only included if they were published within the 5 years prior to the primary article being published.

Screening and data extraction

A reviewer screened all titles and abstracts to identify publications for full-text review. These publications then underwent full-text screening by the same reviewer for potential inclusion. A second reviewer independently verified the results of both the title/abstract screen and the full-text screen. Any discrepancies were resolved by a third independent reviewer. Data relating to aetiology, symptoms, disease severity, exacerbations, lung function, infection, comorbidities, patient-reported outcomes (PROs), exercise capacity, mortality, impact on family and caregivers, healthcare resource utilisation (HCRU), treatment burden, medical costs, and indirect impacts and costs, as well as data relating to the patient population, study design, sample size and country/countries of origin, were extracted from the final set of publications into a standardised Excel spreadsheet by one reviewer. Studies were grouped based on the burden measure, and aggregate data (range of reported values) were summarised in table or figure format. For the economic burden section, costs extracted from studies reporting in currencies other than the euro were converted to euros based on the average exchange rate for the year in which the study was conducted.
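As a minimal sketch of the cost-normalisation step described above, the following snippet converts costs reported in other currencies to euros using an average exchange rate for the study year. The exchange-rate values shown are placeholders for illustration only, not the rates actually used in the review.

```python
# Minimal sketch of the cost-normalisation step: costs reported in other
# currencies are converted to euros using the average exchange rate for the
# year the study was conducted. The rates below are illustrative placeholders.

AVERAGE_EUR_RATES = {            # units of foreign currency per 1 EUR
    ("USD", 2013): 1.33,         # assumed illustrative value
    ("GBP", 2019): 0.88,         # assumed illustrative value
}

def to_euros(amount: float, currency: str, study_year: int) -> float:
    """Convert a reported cost to euros at the study year's average rate."""
    if currency == "EUR":
        return amount
    rate = AVERAGE_EUR_RATES[(currency, study_year)]
    return amount / rate

# Example: a cost reported in 2013 US dollars, converted under the assumed rate.
print(round(to_euros(56_499, "USD", 2013)))
```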

Data from patients with specific bronchiectasis aetiologies and in children (age limits varied from study to study and included upper age limits of 15, 18, 19 and 20 years) were reported separately, where available. As literature relating to NCFBE and CFBE is generally distinct, any data related to CFBE are reported separately in the tables and text. We conducted subanalyses of key disease burden indicators, in which we extracted data from multicentre studies or those with a sample size >1000 subjects, to try to identify estimates from the most representative datasets. These data from larger and multicentre studies are reported in square brackets in tables 1 – 3 and supplementary tables S2–S7 , where available.

Table 1. Prevalence and severity of bronchiectasis symptoms overall, in children, during exacerbations and in individual bronchiectasis aetiologies.

Table 2. Patient-reported outcome scores in patients with bronchiectasis overall and in individual bronchiectasis aetiologies.

Table 3. Healthcare resource utilisation (HCRU) in patients with bronchiectasis overall and in individual bronchiectasis aetiologies.

Given the nature of the data included in this systematic literature review (that is, a broad range of patient clinical and socioeconomic characteristics rather than the outcome(s) of an intervention), in addition to the broad range of study types included, meta-analyses to statistically combine data of similar studies were not deemed appropriate and therefore not performed.

  • Results

Summary of included studies

A total of 1834 citations were retrieved from the Embase, MEDLINE and Cochrane Library databases, of which 1585 unique citations were identified. Abstract/title screening led to the inclusion of 587 citations for full-text screening. Following full-text screening, 149 primary citations and 110 literature reviews, systematic reviews and meta-analyses as well as editorials and letters to the editor remained. From the reference lists of these 110 citations, a further 189 primary citations were identified. These articles were only included if 1) the primary articles contained data relating to the burden of bronchiectasis and 2) the primary articles were published within the 5 years prior to the original article's publication date. In total, 338 publications were considered eligible and included in this review ( supplementary figure S1 ). This included 279 journal articles, 46 congress abstracts and 13 letters to the editor or scientific/research letters. The results are summarised in the sections below. For the results from individual studies, including a description of the patient population, study design, sample size and country/countries of origin, please see the supplemental Excel file .
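The screening flow described above can be restated as simple arithmetic; the sketch below repeats the reported counts and checks that they sum to the 338 included publications.

```python
# Small sketch reproducing the screening flow reported above, purely to make
# the arithmetic explicit; the numbers are those stated in the text.

retrieved            = 1834   # citations from Embase, MEDLINE and the Cochrane Library
unique               = 1585   # after de-duplication
full_text_screened   = 587    # passed title/abstract screening
primary_included     = 149    # primary citations retained after full-text review
from_reference_lists = 189    # additional primary citations from 110 reviews/editorials

total_included = primary_included + from_reference_lists
assert total_included == 338
print(f"{total_included} publications included in the review")
```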

The most frequently reported aetiologies included post-infectious, genetic (primary ciliary dyskinesia (PCD), alpha-1 antitrypsin deficiency (AATD) and cystic fibrosis (CF)), airway diseases (COPD and asthma), allergic bronchopulmonary aspergillosis (ABPA), aspiration and reflux-related, immunodeficiency and autoimmune aetiologies ( supplementary figure S2 ). However, in up to 80.7% of adult cases and 53.3% of paediatric cases, the aetiology was not determined (referred to as “idiopathic bronchiectasis”) ( supplementary figure S2 ). When limited to larger or multicentre studies, the frequency of idiopathic bronchiectasis ranged from 11.5 to 66.0% in adults and from 16.5 to 29.4% in children. Further details and additional aetiologies can be seen in the supplemental Excel file .

Clinical burden

Symptom burden and severity

Commonly reported symptoms in patients with bronchiectasis included cough, sputum production, dyspnoea, wheezing and haemoptysis, with these symptoms more prevalent in adults compared with children ( table 1 ). Other reported symptoms included chest discomfort, pain or tightness (both generally and during an exacerbation), fever and weight loss in both adults and children, and fatigue, tiredness or asthenia, appetite loss, and sweating in adults. In children, respiratory distress, hypoxia during an exacerbation, sneezing, nasal and ear discharge, thriving poorly including poor growth and weight loss, exercise intolerance, malaise, night sweats, abdominal pain, recurrent vomiting, and diarrhoea were reported ( supplemental Excel file ). Classic bronchiectasis symptoms such as sputum production (range of patients reporting sputum production across all studies: 22.0–92.7%) and cough (range of patients reporting cough across all studies: 24.0–98.5%) were not universally reported ( table 1 ).

In a study comparing bronchiectasis (excluding CFBE) in different age groups (younger adults (18–65 years), older adults (66–75 years) and elderly adults (≥76 years) [ 63 ]), no significant differences across age groups were reported for the presence of cough (younger adults: 73.9%; older adults: 72.8%; elderly adults: 72.9%; p=0.90), sputum production (younger adults: 57.8%; older adults: 63.8%; elderly adults: 6.0%; p=0.16) or haemoptysis (younger adults: 16.5%; older adults: 19.3%; elderly adults: 16.3%; p=0.47).

Disease severity

Disease severity was reported according to several measures including the bronchiectasis severity index (BSI), the forced expiratory volume in 1 s (FEV 1 ), Age, Chronic Colonisation, Extension, Dyspnoea (FACED) score and the Exacerbations-FACED (E-FACED) score, all of which are known to be associated with future exacerbations, hospitalisations and mortality ( supplementary table S2 and the supplemental Excel file ). Up to 78.7, 41.8 and 40.8% of patients with bronchiectasis reported severe disease according to the BSI, FACED score and E-FACED score, respectively ( supplementary table S2 ). In most studies, severity scores were greater among people with bronchiectasis secondary to COPD or post-tuberculosis (TB) than idiopathic bronchiectasis ( supplementary table S2 ). No data relating to disease severity were reported for CFBE specifically.

Exacerbations

The number of exacerbations experienced by patients with bronchiectasis in the previous year, per year and during follow-up are presented in figure 1 . For further details, please see the supplemental Excel file . Two studies reported exacerbation length in patients with bronchiectasis; this ranged from 11 to 16 days (both small studies; sample sizes of 191 and 32, respectively) [ 25 , 64 ]. A study in children with NCFBE reported a median of one exacerbation in the previous year. Additionally, the same study reported that 31.1% of children with bronchiectasis experienced ≥3 exacerbations per year [ 65 ].

Figure 1. Range of bronchiectasis exacerbations in the previous year, per year and in the first and second years of follow-up. #: Two studies reported significant differences in the number of exacerbations experienced in the previous year across individual aetiologies. Study 1 [ 90 ]: patients with idiopathic bronchiectasis had significantly fewer exacerbations in the previous year compared with other aetiologies (primary ciliary dyskinesia (PCD), COPD and post-infectious) (p<0.021). Study 2 [ 33 ]: significant difference between post-tuberculosis (TB) bronchiectasis (mean: 2.8) and other aetiologies excluding idiopathic bronchiectasis (mean: 1.7) (p<0.05).

Lung function

Reduced lung function was reported across several different measures in adults and children with bronchiectasis overall, including FEV 1 (absolute values and % predicted), forced vital capacity (FVC; absolute values and % pred) and lung clearance index (adults only) ( supplementary table S3 and the supplemental Excel file ). In most studies, lung function was lowest among people with post-TB bronchiectasis and bronchiectasis secondary to COPD or PCD ( supplementary table S2 ). Additional measures of lung function are detailed in the supplemental Excel file . Lung clearance index, considered more sensitive than spirometry to early airway damage, was elevated in two studies in adults with bronchiectasis, with a range of 9.0–12.8 (normal: 6–7 or less) [ 66 , 67 ].

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups, elderly adults (≥76 years) had significantly lower FEV 1 % pred (median: 67) compared with both younger (18–65 years; median: 78) and older adults (66–75 years; median: 75) (p<0.017 for both comparisons) [ 63 ]. FVC % pred was found to be significantly lower in elderly adults (mean: 65) compared with both younger adults (median: 78) and older adults (median: 75) (p<0.017 for both comparisons) [ 63 ].

Chronic infection

Chronic infection with at least one pathogen was reported in 22.3–79.6% of patients with bronchiectasis, although each study defined chronic infection differently (number of studies: 20). When limited to larger or multicentre studies, chronic infection with at least one pathogen was reported in 10.7–54.5% of patients with bronchiectasis (number of studies: 12). In two studies in NCFBE, significant differences in the proportion of patients chronically infected with at least one pathogen were reported across aetiologies (p<0.001 for both studies) [ 68 , 69 ]. Patients with post-infectious (other than TB) bronchiectasis (34.9%) [ 68 ] and patients with PCD-related bronchiectasis (68.3%) [ 69 ] had the highest prevalence of chronic infection.

The most commonly reported bacterial and fungal pathogens are shown in supplementary table S4 . The two most common bacterial pathogens were Pseudomonas ( P .) aeruginosa and Haemophilus ( H. ) influenzae . In several studies, more patients with PCD, TB and COPD as the aetiology of their bronchiectasis reported infection with P. aeruginosa . Additionally, in one study, significantly more children with CFBE had P. aeruginosa infection compared with children with NCFBE [ 70 ]. Further details and additional pathogens are reported in the supplemental Excel file .

Diversity of the sputum microbiome was assessed in two studies. In the first study in people with bronchiectasis (people with CFBE excluded), reduced microbiome alpha diversity (defined as the relative abundance of microbial species within a sample), particularly associated with Pseudomonas or Proteobacteria dominance, was associated with greater disease severity, increased frequency and severity of exacerbations, and a higher risk of mortality [ 71 ]. In the second study (unknown whether people with CFBE were excluded), a lower Shannon–Wiener diversity index (a measure of species diversity, with lower scores indicating lower diversity) score was associated with multiple markers of disease severity, including a higher BSI score (p=0.0003) and more frequent exacerbations (p=0.008) [ 72 ].
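For readers unfamiliar with the Shannon–Wiener index referred to above, the sketch below shows how it is commonly computed from the relative abundances of taxa in a sample (using the natural logarithm, a frequent convention); the example abundance values are invented for illustration only.

```python
# Minimal sketch of the Shannon-Wiener diversity index referred to above:
# H = -sum(p_i * ln(p_i)) over the relative abundances p_i of each taxon.
# Lower H indicates lower diversity (e.g. dominance of a single pathogen).

import math

def shannon_wiener(abundances: list[float]) -> float:
    """Shannon-Wiener index from raw counts or relative abundances."""
    total = sum(abundances)
    proportions = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in proportions)

# A Pseudomonas-dominated sample (illustrative counts) versus an even community.
print(round(shannon_wiener([90, 5, 3, 2]), 2))    # low diversity
print(round(shannon_wiener([25, 25, 25, 25]), 2)) # higher diversity (ln 4, about 1.39)
```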

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups (younger adults: 18–65 years; older adults: 66–75 years; elderly adults: ≥76 years) [ 63 ], chronic infection with H. influenzae was reported in 18.3% of younger adults, 12.8% of older adults and 8.8% of elderly adults, and chronic infection with Streptococcus ( Str. ) pneumoniae was reported in 5.3% of younger adults, 2.8% of older adults and 1.3% of elderly adults. For both of the above, the prevalence was significantly higher in younger adults compared with elderly adults (p<0.017 for both comparisons). However, no significant differences across age groups were reported for P. aeruginosa , Moraxella catarrhalis or Staphylococcus ( Sta .) aureus chronic infection.

P. aeruginosa infection was significantly associated with reduced FEV 1 [ 73 ], more severe disease [ 74 ], more frequent exacerbations [ 35 , 49 , 75 , 76 ], increased hospital admissions, reduced quality of life based on St. George's Respiratory Questionnaire (SGRQ) scores, and increased 4-year mortality [ 49 , 76 ]. Additionally, in a study reporting healthcare use and costs in the US between 2007 and 2013, healthcare costs and hospitalisation costs were found to be increased in patients infected with P. aeruginosa ($56 499 and $41 972 more than patients not infected with P. aeruginosa , respectively) [ 77 ]. In the same study, HCRU was also higher in patients infected with P. aeruginosa (fivefold increase in the number of hospitalisations and 84% more emergency department (ED) visits compared with patients not infected with P. aeruginosa ) [ 77 ].

Comorbidities

The most frequently reported comorbidities included cardiovascular (including heart failure, cerebrovascular disease and hypertension), respiratory (including asthma, COPD and sinusitis), metabolic (including diabetes and dyslipidaemia), malignancy (including haematological and solid malignancies), bone and joint-related (including osteoporosis and rheumatological disease), neurological (including anxiety and depression), renal, hepatic, and gastrointestinal comorbidities ( supplementary table S5 ). No data relating to comorbidities were reported for CFBE specifically. For further details and additional comorbidities, please see the supplemental Excel file .

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups (younger adults: 18–65 years; older adults: 66–75 years; elderly adults: ≥76 years), younger adults had a significantly lower prevalence of diabetes compared with older adults, a significantly lower prevalence of stroke compared with elderly adults and a significantly lower prevalence of heart failure, solid tumours and renal failure compared with both older and elderly adults (p<0.0017 for all comparisons). Additionally, the prevalence of COPD was significantly lower in both younger and older adults compared with elderly adults (p<0.017) [ 63 ]. In studies reporting in children with bronchiectasis, the prevalence of comorbid asthma ranged from 22.2 to 25.8% [ 65 , 78 ] and the prevalence of sinusitis was reported to be 12.7% in a single study [ 79 ].

Charlson comorbidity index (CCI)

CCI scores can range from 0 to 37, with higher scores indicating a decreased estimate of 10-year survival. In this review, CCI scores ranged from 0.7 to 6.6 in studies reporting means (number of studies: 7). In one study, adults with bronchiectasis (people with CFBE excluded) who experienced ≥2 exacerbations per year were found to have significantly higher CCI scores (3.3) compared with patients who experienced fewer than two exacerbations per year (2.2) (p=0.001) [ 35 ]. In another study in adults with bronchiectasis (people with CFBE excluded), CCI scores increased significantly with increasing disease severity, with patients with mild (FACED score of 0–2), moderate (FACED score of 3–4) and severe (FACED score of 5–7) bronchiectasis reporting mean CCI scores of 3.9, 5.7 and 6.3, respectively [ 80 ]. No CCI scores were reported for CFBE specifically.
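The FACED severity bands quoted above (mild: 0–2, moderate: 3–4, severe: 5–7) can be expressed as a simple mapping. The sketch below implements only this banding, taken from the study cited; it does not attempt to reproduce the calculation of the FACED score itself.

```python
# Simple sketch of the FACED severity banding quoted in the text above
# (mild: 0-2, moderate: 3-4, severe: 5-7). Only the banding is taken from the
# review; the scoring of the individual FACED components is not shown here.

def faced_severity(score: int) -> str:
    """Map a FACED score (0-7) to the severity band used above."""
    if not 0 <= score <= 7:
        raise ValueError("FACED scores range from 0 to 7")
    if score <= 2:
        return "mild"
    if score <= 4:
        return "moderate"
    return "severe"

print(faced_severity(3))  # "moderate"
```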

Prevalence of comorbidities in patients with bronchiectasis compared with control individuals

Several studies reported a higher prevalence of cardiovascular comorbidities, such as heart failure [ 81 ], stroke [ 82 , 83 ] and hypertension [ 82 – 84 ], in patients with bronchiectasis compared with a matched general population or healthy controls. Conversely, several additional studies reported no significant differences [ 81 , 85 , 86 ]. Two large studies reported an increased prevalence of diabetes in patients with bronchiectasis compared with nonbronchiectasis control groups [ 83 , 84 ]; however, three additional smaller studies reported no significant differences [ 81 , 82 , 86 ]. The prevalence of gastro-oesophageal reflux disease was found to be significantly higher in patients with bronchiectasis compared with matched nonbronchiectasis controls in one study [ 87 ], but no significant difference was reported in a second study [ 85 ]. Both anxiety and depression were found to be significantly more prevalent in patients with bronchiectasis compared with matched healthy controls in one study [ 55 ]. Lastly, two large studies reported an increased prevalence of asthma [ 84 , 87 ] and five studies reported a significantly higher prevalence of COPD [ 81 , 82 , 84 , 85 , 87 ] in patients with bronchiectasis compared with matched nonbronchiectasis controls or the general population. A smaller study reported conflicting evidence, finding no significant difference in the prevalence of asthma between patients with bronchiectasis and matched controls [ 85 ].

Socioeconomic burden

Patient-reported outcomes

Health-related quality of life (HRQoL), fatigue, anxiety and depression were reported across several PRO measures and domains. The most frequently reported PROs are discussed in further detail in the sections below ( table 2 ). Further details and additional PROs can be seen in the supplemental Excel file .

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups (younger adults: 18–65 years; older adults: 66–75 years; elderly adults: ≥76 years), the median SGRQ total score was significantly higher in elderly adults (50.8) compared with younger adults (36.1), indicating a higher degree of limitation (p=0.017) [ 63 ].

In a study that reported Leicester Cough Questionnaire (LCQ) scores in men and women with bronchiectasis (people with CFBE excluded) separately, women had significantly lower LCQ total scores (14.9) when compared with men (17.5) (p=0.006), indicating worse quality of life [ 88 ]. Additionally, women had significantly lower scores across all three LCQ domains (p=0.014, p=0.005 and p=0.011 for physical, psychological and social domains, respectively) [ 88 ].

Exercise capacity

Exercise capacity in patients with bronchiectasis was reported using walking tests, namely the 6-minute walk test (6MWT) and the incremental shuttle walk test (ISWT) ( supplementary table S6 ). The 6MWT data from patients with bronchiectasis generally fell within the normal range for healthy people; however, the ISWT data were below the normal range for healthy people ( supplementary table S6 ). Studies also reported on daily physical activity, daily sedentary time and number of steps per day in patients with bronchiectasis, and in children specifically ( supplementary table S6 ). No data relating to exercise capacity were reported for CFBE specifically. Further details can be seen in the supplemental Excel file .

Exercise capacity in patients with bronchiectasis compared with control individuals

In one study, the ISWT distance was reported to be significantly lower in patients with NCFBE compared with healthy controls (592.6 m versus 882.9 m; difference of ∼290 m; p<0.001) [ 89 ]. Additionally, patients with bronchiectasis spent significantly less time on activities of moderate and vigorous intensity compared with healthy controls (p=0.030 and 0.044, respectively) [ 89 ]. Lastly, a study reported that patients with NCFBE had a significantly lower step count per day compared with healthy controls (p<0.001) [ 89 ].

Mortality rate during study period

Mortality ranged from 0.24 to 67.6%; however, it should be noted that the study duration differed across studies. When limited to larger or multicentre studies, the mortality rate ranged from 0.24 to 28.1%. One study reported more deaths in patients with NCFBE (9.1%; 5.9-year mean follow-up period) compared with patients without bronchiectasis (0.8%; 5.4-year mean follow-up period) [ 84 ]. In one study, significantly more patients with COPD-related bronchiectasis died (37.5%) compared with other aetiologies (19.0%) (3.4-year mean follow-up period; p<0.001). After adjusting for several factors, multivariate analysis showed that a diagnosis of COPD as the primary cause of bronchiectasis increased the risk of death 1.77-fold compared with patients with other aetiologies [ 41 ]. Similarly, in another study, COPD-associated bronchiectasis was associated with higher mortality (55%) in multivariate analysis compared with other aetiologies (rheumatic disease: 20%; post-infectious: 16%; idiopathic: 14%; ABPA: 13%; immunodeficiency: 11%) (hazard ratio 2.12, 95% CI 1.04–4.30; p=0.038; 5.2-year median follow-up period) [ 90 ].

Mortality rates by year

The 1-, 2-, 3-, 4- and 5-year mortality rates in patients with bronchiectasis (people with CFBE excluded, unless unspecified) ranged from 0.0 to 12.3%, 0.0 to 13.0%, 0.0 to 21.0%, 5.5 to 39.1% and 12.4 to 53.0%, respectively (number of studies: 9, 4, 7, 1 and 4, respectively). When limited to larger or multicentre studies, the 1-, 2-, 3- and 5-year mortality rates ranges were 0.4–7.9%, 3.9–13.0%, 3.7–21.0% and 12.4–53.0% (no 4-year mortality data from larger or multicentre studies). No data relating to mortality rates were reported for CFBE specifically.

Two studies reported mortality rate by bronchiectasis aetiology (people with CFBE excluded). In the first study, no significant difference in the 4-year mortality rate was reported across aetiologies (p=0.7; inflammatory bowel disease: 14.3%; post-TB: 13.4%; rheumatoid arthritis: 11.4%; idiopathic or post-infectious: 10.1%; ABPA: 6.1%; other aetiologies: 6.1%) [ 49 ]. In the second study, patients with post-TB bronchiectasis had a significantly higher 5-year mortality rate (30.0%) compared with patients with idiopathic bronchiectasis (18.0%) and other aetiologies (10.0%) (p<0.05 for both comparisons) [ 32 ].

In-hospital and intensive care unit mortality

In-hospital mortality ranged from 2.9 to 59.3% in patients with bronchiectasis (people with CFBE excluded, unless unspecified) hospitalised for an exacerbation or for other reasons (number of studies: 7). When limited to larger or multicentre studies, in-hospital mortality rate was reported in only one study (33.0%). One study reported mortality in bronchiectasis patients admitted to a tertiary care centre according to aetiology; in-hospital mortality was highest in patients with post-pneumonia bronchiectasis (15.8%), followed by patients with idiopathic (7.1%) and post-TB (2.6%) bronchiectasis. No deaths were reported in patients with COPD, ABPA or PCD aetiologies [ 42 ]. Intensive care unit mortality was reported in two studies and ranged from 24.6 to 36.1% [ 62 , 91 ]. No data relating to mortality rates were reported for CFBE specifically.

Impact on family and caregivers

Only two studies discussed the impact that having a child with bronchiectasis has on parents/caregivers. In the first study, parents of children with bronchiectasis (not specified whether children with CFBE were excluded) were more anxious and more depressed according to both the Hospital Anxiety and Depression Scale (HADS) and the Centre of Epidemiological Studies depression scale, compared with parents of children without any respiratory conditions (both p<0.001; sample size of 29 participants) [ 53 ]. In the second study, parents or carers of children with bronchiectasis (multicentre study with a sample size of 141 participants; children with CFBE excluded) were asked to vote for their top five greatest concerns or worries; the most common worries or concerns that were voted for by over 15% of parents were “impact on his/her adult life in the future, long-term effects, normal life” (29.8%), “ongoing declining health” (25.5%), “the cough” (24.8%), “impact on his/her life now as a child (play, development)” (24.1%), “lack of sleep/being tired” (24.1%), “concerns over aspects of antibiotic use” (22.7%), “missing school or daycare” (17.7%) and “breathing difficulties/shortness of breath” (16.3%) [ 92 ].

Healthcare resource utilisation

HCRU in terms of hospitalisations, ED visits, outpatient visits and length of stay, overall and by bronchiectasis aetiology, is reported in table 3 . No data relating to HCRU were reported for CFBE specifically.

In a study in children with bronchiectasis (children with CFBE excluded), 30.0% of children were hospitalised at least once in the previous year [ 65 ]. The median number of hospitalisations per year was 0 (interquartile range: 0–1) [ 65 ]. In another study, the mean length of hospital stay for children with bronchiectasis was 6.7 days (standard deviation: 4.8 days) [ 93 ]. In a study comparing bronchiectasis (people with CFBE excluded) in different age groups, significantly more elderly adults (≥76 years; 26.0%) were hospitalised at least once during the first year of follow-up compared with younger adults (18–65 years; 17.0%) and older adults (66–75 years; 17.0%) (p<0.017 for both comparisons) [ 63 ]. Additionally, length of stay was found to be significantly longer in male patients (mean: 17.6 days) compared with female patients (mean: 12.5 days) (p=0.03) [ 94 ].

HCRU in patients with bronchiectasis compared with control individuals

Length of stay was found to be 38% higher in patients with bronchiectasis (mean: 15.4 days; people with CFBE excluded) compared with patients with any other respiratory illness (mean: 9.6 days) (p<0.001) [ 94 ]. In a study reporting on HCRU in patients with bronchiectasis (people with CFBE excluded) over a 3-year period (Germany; 2012–2015) [ 85 ], a mean of 24.7 outpatient appointments per patient were reported; there was no significant difference in the number of outpatient appointments between patients with bronchiectasis and matched controls (patients without bronchiectasis matched by age, sex and level of comorbidities) (mean: 23.4) (p=0.12). When assessing specific outpatient appointments over the 3-year period, patients with bronchiectasis attended a mean of 9.2 general practitioner appointments, 2.9 radiology appointments, 2.5 chest physician appointments and 0.8 cardiologist appointments. Patients with bronchiectasis had significantly fewer general practitioner appointments compared with matched controls (mean: 9.8) (p=0.002); however, they had significantly more radiology appointments (mean for matched controls: 2.3) and chest physician appointments (mean for matched controls: 1.4) compared with matched controls (p<0.001 for both comparisons).

Hospital admission rates

In England, Wales and Northern Ireland, the crude hospital admission rate in 2013 was 88.4 (95% CI 74.0–105.6) per 100 000 person-years [ 91 ]. In New Zealand (2008–2013), the crude and adjusted hospital admission rates were 25.7 and 20.4 per 100 000 population, respectively [ 95 ]. Lastly, in Australia and New Zealand (2004–2008) the hospital admission rate ranged from 0.7 to 2.9 per person-year [ 96 ]. In all of the abovementioned studies, people with CFBE were excluded.

Treatment burden

In two studies, the percentage of patients with bronchiectasis receiving any respiratory medication at baseline ranged from 60.8 to 85.7% [ 97 , 98 ]. Additionally, in a study comparing healthcare costs in patients with bronchiectasis before and after confirmation of P. aeruginosa infection, mean pharmacy visits in the year preceding diagnosis were reported to be 23.2; this increased significantly by 56.5% to 36.2 in the year post-diagnosis (p<0.0001) [ 99 ]. In another study, patients with bronchiectasis were prescribed a mean of 12 medications for bronchiectasis and other comorbidities [ 100 ]. In all of the abovementioned studies, people with CFBE were excluded. The most frequently reported respiratory treatments can be seen in supplementary table S7 . These included antibiotics (including macrolides), corticosteroids, bronchodilators, mucolytics and oxygen. No treatment data were reported for CFBE specifically. Other respiratory treatments included saline, anticholinergics and leukotriene receptor antagonists ( supplemental Excel file ).

In studies reporting in children with bronchiectasis, 23.9% of children were receiving any bronchodilator at baseline [ 101 ], 9.0–21.7% of children were receiving inhaled corticosteroids (ICS) at baseline [ 101 , 102 ], 4.3% of children were receiving oral corticosteroids at baseline [ 101 ] and 12.1% of children were receiving long-term oxygen therapy [ 103 ].

Medical and nonmedical indirect impacts and costs

Medical costs for bronchiectasis included overall costs, hospitalisation costs, ED and outpatient visit costs, and treatment costs; indirect impacts and costs included sick leave and sick pay, missed work and income loss for caregivers, and missed school or childcare for children ( table 4 and the supplemental Excel file ). People with CFBE were excluded from all of the studies in table 4 below. In studies reporting in currencies other than the euro, costs were converted to euros based on the average exchange rate for the year in which the study was conducted.

Table 4. Bronchiectasis-related medical costs and indirect impacts and costs (individual studies).

  • Discussion

No review to date has systematically evaluated the overall disease burden of bronchiectasis. Here, we present the first systematic literature review that comprehensively describes the clinical and socioeconomic burden of bronchiectasis overall and across individual aetiologies and associated diseases. A total of 338 publications were included in the final analysis. Together, the results indicate that the burden of clinically significant bronchiectasis on patients and their families, as well as on healthcare systems, is substantial, highlighting the urgent need for new disease-modifying therapies for bronchiectasis.

Bronchiectasis is associated with genetic, autoimmune, airway and infectious disorders. However, in many patients with bronchiectasis, an underlying aetiology cannot be identified (idiopathic bronchiectasis) [ 1 , 3 , 4 ]. This is supported by the results of this systematic literature review, in which up to 80.7% of patients were reported to have idiopathic bronchiectasis. The results are in line with those reported in a systematic literature review of bronchiectasis aetiology conducted by Gao et al. [ 13 ] (studies from Asia, Europe, North and South America, Africa and Oceania included), in which an idiopathic aetiology was reported in approximately 45% of patients with bronchiectasis, with a range of 5–82%. The maximum of 80.7% of patients with idiopathic bronchiectasis identified by this systematic literature review is much higher than in the recent report on the disease characteristics of the EMBARC registry, where idiopathic bronchiectasis was the most common aetiology but was reported in only ∼38% of patients with bronchiectasis [ 17 ]. This highlights the importance of sample size and geographic variation (80.7% reported from a single-country study with a small sample size versus ∼38% reported from a continent-wide study with a large sample size). Nevertheless, identifying the underlying aetiology is a recommendation of bronchiectasis guidelines as this can considerably alter the clinical management and prognosis [ 23 , 110 ]. Specific therapeutic interventions may be required for specific aetiologies, such as ICS for people with asthma-related bronchiectasis, antifungal treatment for those with ABPA-associated bronchiectasis and immunoglobulin replacement therapy for those with common variable immunodeficiency-related bronchiectasis [ 23 , 111 ]. Indeed, an observational study has shown that identification of the underlying aetiology affected management in 37% of people with bronchiectasis [ 112 ]. Future studies to determine the impact of identifying the underlying aetiology on management and prognosis are needed to fully understand its importance.

Patients with bronchiectasis experienced a significant symptom burden, with dyspnoea, cough, wheezing, sputum production and haemoptysis reported most commonly. These symptoms were also reported in children with bronchiectasis, at slightly lower frequencies. Dealing with bronchiectasis symptoms is among the greatest concerns from a patient's perspective. In a study assessing the aspects of bronchiectasis that patients found most difficult to deal with, sputum, dyspnoea and cough were the first, fifth and sixth most common answers, respectively [ 113 ]. Some aetiologies were reported to have a higher prevalence of certain symptoms. For example, in single studies, patients with PCD-related bronchiectasis were found to have a significantly higher prevalence of cough and wheezing [ 39 ], patients with COPD-related bronchiectasis were found to have a significantly higher prevalence of sputum production [ 41 ], and patients with post-TB bronchiectasis were found to have a higher prevalence of haemoptysis [ 30 ] compared with other aetiologies. Together, these results highlight the need for novel treatments that reduce the symptom burden of bronchiectasis. They also highlight the importance of teaching patients to perform and adhere to regular nonpharmacological interventions, such as airway clearance using physiotherapy techniques, which have been shown to improve cough-related health status and chronic sputum production [ 110 ]. Future studies assessing when airway clearance techniques should be started, and which ones are the most effective, are a research priority [ 113 ].

The burden of exacerbations in patients with bronchiectasis was high, with up to 73.6% of patients experiencing three or more exacerbations in the previous year, up to 55.6% experiencing three or more per year and up to 32.4% experiencing three or more in the first year of follow-up. Few studies reported significant differences between aetiologies. Importantly, exacerbations are the second-most concerning aspect of bronchiectasis from the patient's perspective [ 113 ]. Patients with frequent exacerbations have more frequent hospitalisations and increased 5-year mortality [ 114 ], and exacerbations are also associated with poorer quality of life [ 114 , 115 ]. Therefore, prevention of exacerbations is of great importance in the management of bronchiectasis [ 116 ]. The exact cause of exacerbations in bronchiectasis (believed to be multifactorial) is not fully understood due to a lack of mechanistic studies [ 116 ]. Future studies into the causes and risk factors for exacerbations [ 113 ] may lead to improvements in their prevention.

Many patients with bronchiectasis, including children, experienced chronic infections with bacterial pathogens such as P. aeruginosa , H. influenzae , Sta. aureus and Str. pneumoniae , as well as non-tuberculous mycobacteria. Importantly, P. aeruginosa infection was significantly associated with more severe disease, reduced lung function and quality of life, and increased exacerbations, hospital admissions, mortality, HCRU and healthcare costs. Due to the clear and consistent association between P. aeruginosa and poor outcomes, patients with chronic P. aeruginosa colonisation should be considered to be at a higher risk of bronchiectasis-related complications [ 110 ]. Additionally, regular sputum microbiology screening should be performed in people with clinically significant bronchiectasis to detect new isolation of P. aeruginosa [ 110 ]; where this occurs, patients should be offered eradication antibiotic treatment [ 23 ]. Eradication of P. aeruginosa is not only of clinical importance, but also of economic importance due to the associated HCRU and healthcare costs. As such, a better understanding of the key factors leading to P. aeruginosa infection is a priority for future research [ 113 ].

Bronchiectasis markedly impacted HRQoL across several PROs including the SGRQ, Quality of Life–Bronchiectasis score, LCQ, COPD Assessment Test and Bronchiectasis Health Questionnaire. In children with bronchiectasis, significantly lower quality of life (according to the Paediatric Quality of Life Inventory score) compared with age-matched controls was reported [ 53 ]. The majority of studies reporting HRQoL in individual aetiologies and associated diseases either reported on a single aetiology, did not perform any statistical analyses to compare aetiologies, or reported no significant differences across aetiologies. Patients also experienced mild-to-moderate anxiety and depression according to the HADS-Anxiety, HADS-Depression and 9-question Patient Health Questionnaire scores, with very limited data reported in individual aetiologies. When compared with healthy controls, anxiety and depression were found to be significantly more prevalent in patients with bronchiectasis [ 55 ]. Additionally, exercise capacity was reduced, with patients with bronchiectasis reported to spend significantly less time on activities of moderate and vigorous intensity and have a significantly lower step count per day compared with healthy controls [ 89 ]. Improvements in anxiety, depression and exercise capacity are important priorities for people with bronchiectasis; in a study assessing the aspects of bronchiectasis that patients found most difficult to manage, "not feeling fit for daily activities", anxiety and depression were the fourth, eighth and ninth most common answers, respectively [ 113 ].

The studies relating to HCRU and costs in this review were heterogeneous in terms of methodology, time period, country and currency, making them challenging to compare. Nevertheless, this study found that HCRU was substantial, with patients reporting a maximum of 1.3 hospitalisations, 1.3 ED visits and 21.0 outpatient visits per year. Length of stay was found to be significantly longer in patients with bronchiectasis compared with patients with any other respiratory illness in one study [ 91 ]. In another study, patients with bronchiectasis reported significantly more specialist appointments (radiologist appointments and chest physician appointments) compared with matched controls [ 85 ]. Patients with bronchiectasis also experienced a significant treatment burden, with up to 36.4, 58.0 and 83.0% of patients receiving long-term inhaled antibiotics, oral antibiotics and macrolides, respectively, up to 80.4% receiving long-term ICS, and up to 61.7% and 81.4% receiving long-term long-acting muscarinic antagonists and long-acting beta agonists, respectively. Wide ranges of treatment use were reported in this study, which may reflect geographic variation in treatment patterns. Heterogeneous treatment patterns across Europe were observed in the EMBARC registry data, with generally higher medication use in the UK and Northern/Western Europe and lower medication use in Eastern Europe (inhaled antibiotics: 1.8–8.9%; macrolides: 0.9–24.4%; ICS: 37.2–58.5%; long-acting beta agonists: 42.7–52.8%; long-acting muscarinic antagonists: 26.5–29.8%) [ 17 ]. Similarly, data from the Indian bronchiectasis registry indicate that the treatment of bronchiectasis in India is also diverse [ 19 ]. Furthermore, in a comparison of the European and Indian registry data, both long-term oral and inhaled antibiotics were more commonly used in Europe compared with India [ 19 ].

Costs varied widely across studies. However, patients, payers and healthcare systems generally accrued substantial medical costs due to hospitalisations, ED visits, outpatient visits, hospital-in-the-home and treatment-related costs. Other medical costs incurred included physiotherapy and outpatient remedies (including breathing or drainage techniques), outpatient medical aids (including nebulisers and respiration therapy equipment) and the cost of attending convalescence centres. Only one study compared the medical costs in patients with bronchiectasis and matched controls (age, sex and comorbidities) and found that patients with bronchiectasis had significantly higher total direct medical expenditure, hospitalisation costs, treatment costs for certain medications and costs associated with outpatient remedies and medical aids [ 85 ]. Bronchiectasis was also associated with indirect impacts and costs, including sick leave, sick pay and income lost due to absenteeism and missed work, and lost wages for caregivers of patients with bronchiectasis. Children with bronchiectasis also reported absenteeism from school or childcare.

Our findings regarding HCRU and costs in bronchiectasis are mirrored by a recent systematic literature review by Roberts et al. [ 117 ] estimating the annual economic burden of bronchiectasis in adults and children over the 2001–2022 time period. Roberts et al. [ 117 ] found that annual total healthcare costs per adult patient ranged from €3027 to €69 817 (costs were converted from USD to € based on the average exchange rate in 2021), predominantly driven by hospitalisation costs. Likewise, we report annual costs per patient ranging from €218 to €51 033, with annual hospital costs ranging from €1215 to €27 612 (adults and children included) ( table 4 ). Further, Roberts et al. [ 117 ] report a mean annual hospitalisation rate ranging from 0.11 to 2.9, which is similar to our finding of 0.03–1.3 hospitalisations per year ( table 3 ). With regard to outpatient visits, Roberts et al. [ 117 ] report a mean annual outpatient respiratory physician attendance ranging from 0.83 to 6.8 visits, whereas we report a maximum of 21 visits per year ( table 3 ). It should be noted, however, that our value is not restricted to visits to a respiratory physician. With regard to indirect annual costs per adult patient, Roberts et al. [ 117 ] report a loss of income because of illness of €1109–€2451 (costs were converted from USD to € based on the average exchange rate in 2021), whereas we report a figure of ∼€1410 ( table 4 ). Finally, the burden on children is similarly reported by us and Roberts et al. [ 117 ], with children missing 12 days of school per year ( table 4 ).

Limitations of this review and the existing literature

Due to the nature of this systematic literature review, no formal statistical analyses or formal risk of bias assessments were performed.

Several limitations within the existing literature were identified. Firstly, the vast majority of studies reported on patients with NCFBE overall, with limited availability of literature reporting on individual aetiologies and associated diseases. Furthermore, where this literature was available, it was limited to a handful of individual aetiologies and associated diseases, and in many of these studies, no statistical analyses to compare different aetiologies and associated diseases were performed. Additionally, the methods used to determine aetiologies within individual studies may have differed. Literature on NCFBE and CFBE has traditionally been very distinct; as such, most of the studies included in this review have excluded people with CF. As the general term "CF lung disease" was not included in our search string in order to limit the number of hits, limited data on CFBE are included in this review. Bronchiectasis remains largely under-recognised and underdiagnosed, thus limiting the availability of literature. There is a particular knowledge gap with respect to paediatric NCFBE; however, initiatives such as the Children's Bronchiectasis Education Advocacy and Research Network (Child-BEAR-Net) ( www.improvebe.org ) are aiming to create multinational registries for paediatric bronchiectasis.

There were variations in the amount of literature available for the individual burdens. While there was more literature available on the clinical burden of bronchiectasis, economic data (related to both medical costs and indirect costs) and data on the impact of bronchiectasis on families and caregivers were limited. Additionally, cost comparisons across studies and populations were difficult due to differences in cost definitions, currencies and healthcare systems.

Sample sizes of the studies included in this systematic literature review varied greatly, with the majority of studies reporting on a small number of participants. Furthermore, many of the studies were single-centre studies, thus limiting the ability to make generalisations about the larger bronchiectasis population, and cross-sectional, thus limiting the ability to assess the clinical and socioeconomic burden of bronchiectasis over a patient's lifetime. Furthermore, there may be potential sex/gender bias in reporting that has not been considered in this systematic literature review.

Finally, for many of the reported outcomes, data varied greatly across studies, with wide estimates for the frequency of different aetiologies and comorbidities as well as disease characteristics such as exacerbations and healthcare costs noted. This reflects the heterogeneity of both the study designs (including sample size and inclusion and exclusion criteria) and the study populations themselves. Additionally, the use of non-standardised terms across articles posed a limitation for data synthesis. Systematic collection of standardised data across multiple centres, with standardised inclusion and exclusion criteria such as that being applied in international registries, is likely to provide more accurate estimates than those derived from small single-centre studies.

  • Conclusions

Collectively, the evidence identified and presented in this systematic literature review shows that bronchiectasis imposes a significant clinical and socioeconomic burden on patients, their families and employers, as well as on healthcare systems. Disease-modifying therapies that reduce symptoms, improve quality of life, and reduce both HCRU and overall costs are urgently needed. Further systematic analyses of the disease burden of specific bronchiectasis aetiologies and associated diseases (particularly PCD-, COPD- and post-TB-associated bronchiectasis, which appear to impose a greater burden in some aspects) and of paediatric bronchiectasis (the majority of data included in this study were obtained from adults) may provide more insight into the unmet therapeutic needs of these specific patient populations.

Questions for future research

Further research into the clinical and socioeconomic burden of bronchiectasis for individual aetiologies and associated diseases is required.

  • Supplementary material

Supplementary Material

Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.

Supplementary figures and tables ERR-0049-2024.SUPPLEMENT

Supplementary Excel file ERR-0049-2024.SUPPLEMENT

  • Acknowledgements

Laura Cottino, PhD, of Nucleus Global, provided writing, editorial support, and formatting assistance, which was contracted and funded by Boehringer Ingelheim.

Provenance: Submitted article, peer reviewed.

Conflict of interest: The authors meet criteria for authorship as recommended by the International Committee of Medical Journal Editors (ICMJE). J.D. Chalmers has received research grants from AstraZeneca, Boehringer Ingelheim, GlaxoSmithKline, Gilead Sciences, Grifols, Novartis, Insmed and Trudell, and received consultancy or speaker fees from Antabio, AstraZeneca, Boehringer Ingelheim, Chiesi, GlaxoSmithKline, Insmed, Janssen, Novartis, Pfizer, Trudell and Zambon. M.A. Mall reports research grants paid to their institution from the German Research Foundation (DFG), German Ministry for Education and Research (BMBF), German Innovation Fund, Vertex Pharmaceuticals and Boehringer Ingelheim; consultancy fees from AbbVie, Antabio, Arrowhead, Boehringer Ingelheim, Enterprise Therapeutics, Kither Biotec, Prieris, Recode, Santhera, Splisense and Vertex Pharmaceuticals; speaker fees from Vertex Pharmaceuticals; and travel support from Boehringer Ingelheim and Vertex Pharmaceuticals. M.A. Mall also reports advisory board participation for AbbVie, Antabio, Arrowhead, Boehringer Ingelheim, Enterprise Therapeutics, Kither Biotec, Pari and Vertex Pharmaceuticals and is a fellow of ERS (unpaid). P.J. McShane is an advisory board member for Boehringer Ingelheim's Airleaf trial and Insmed's Aspen trial. P.J. McShane is also a principal investigator for clinical trials with the following pharmaceutical companies: Insmed: Aspen, 416; Boehringer Ingelheim: Airleaf; Paratek: oral omadacycline; AN2 Therapeutics: epetraborole; Renovian: ARINA-1; Redhill; Spero; and Armata. K.G. Nielsen reports advisory board membership for Boehringer Ingelheim. M. Shteinberg reports having received research grants from Novartis, Trudell Pharma and GlaxoSmithKline; travel grants from Novartis, Actelion, Boehringer Ingelheim, GlaxoSmithKline and Rafa; speaker fees from AstraZeneca, Boehringer Ingelheim, GlaxoSmithKline, Insmed, Teva, Novartis, Kamada and Sanofi; and advisory fees (including steering committee membership) from GlaxoSmithKline, Boehringer Ingelheim, Kamada, Syncrony Medical, Zambon and Vertex Pharmaceuticals. M. Shteinberg also reports data and safety monitoring board participation for Bonus Therapeutics, Israel and is an ERS Task Force member on bronchiectasis guideline development. S.D. Sullivan has participated in advisory boards for Boehringer Ingelheim and has research grants from Pfizer, Bayer and GlaxoSmithKline. S.H. Chotirmall is on advisory boards for CSL Behring, Boehringer Ingelheim and Pneumagen Ltd, served on a data and safety monitoring board for Inovio Pharmaceuticals Inc., and has received personal fees from AstraZeneca and Chiesi Farmaceutici.

Support statement: This systematic literature review was funded by Boehringer Ingelheim International GmbH. The authors did not receive payment related to the development of the manuscript. Boehringer Ingelheim was given the opportunity to review the manuscript for medical and scientific accuracy as well as intellectual property considerations. Funding information for this article has been deposited with the Crossref Funder Registry .

  • Received March 8, 2024.
  • Accepted June 4, 2024.
  • Copyright ©The authors 2024

This version is distributed under the terms of the Creative Commons Attribution Licence 4.0.


European Respiratory Review: 33 (173)

  • Table of Contents
  • Index by author

Thank you for your interest in spreading the word on European Respiratory Society .

NOTE: We only request your email address so that the person you are recommending the page to knows that you wanted them to see it, and that it is not junk mail. We do not capture any email address.

Citation Manager Formats

  • EndNote (tagged)
  • EndNote 8 (xml)
  • RefWorks Tagged
  • Ref Manager

del.icio.us logo

  • CF and non-CF bronchiectasis
  • Tweet Widget
  • Facebook Like
  • Google Plus One

More in this TOC Section

  • PM 2.5 and microbial pathogenesis in the respiratory tract
  • Identifying limitations to exercise with incremental CPET

Related Articles

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of the article published by MDPI, including figures and tables. For articles published under an open access Creative Common CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess .

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Original Submission Date Received: .

  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

healthcare-logo

Article Menu

a systematic literature review strategy

  • Subscribe SciFeed
  • Google Scholar
  • on Google Scholar
  • Table of Contents

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

JSmol Viewer

The State-of-the-Art of Mycobacterium chimaera Infections and the Causal Link with Health Settings: A Systematic Review


  • 1. Introduction
  • 2. Materials and Methods
  • 4. Discussion
  • 4.1. Mycobacterium chimaera's Characteristics and Ecosystem
  • 4.2. Heater-Cooler Units, Medical Devices, Water, and Air-Conditioned Implants
  • 4.3. Incubation Period and Symptoms Presentation
  • 4.4. Presence in the Lung System
  • 4.5. Modality of Transmission
  • 4.6. Detection
  • 4.7. Disinfection
  • 4.8. Causal Link Assessment
  • 5. Limitations
  • 6. Conclusions
  • Supplementary Materials
  • Author Contributions
  • Institutional Review Board Statement
  • Informed Consent Statement
  • Data Availability Statement
  • Conflicts of Interest
  • Abbreviations

  • MAC: Mycobacterium avium complex
  • NTM: non-tuberculous mycobacteria
  • M. chimaera: Mycobacterium chimaera
  • HCU: heater-cooler unit
  • OPPP: opportunistic premise plumbing pathogens
  • ECMO: extracorporeal membrane oxygenation
  • HAI: healthcare-associated infection
  • Vendramin, I.; Peghin, M.; Tascini, C.; Livi, U. Longest Incubation Period of Mycobacterium chimaera Infection after Cardiac Surgery. Eur. J. Cardio-Thoracic Surg. 2021 , 59 , 506–508. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Biswas, M.; Rahaman, S.; Biswas, T.K.; Haque, Z.; Ibrahim, B. Association of Sex, Age, and Comorbidities with Mortality in COVID-19 Patients: A Systematic Review and Meta-Analysis. Intervirology 2020 , 64 , 36–47. [ Google Scholar ] [ CrossRef ]
  • Treglia, M.; Pallocci, M.; Passalacqua, P.; Sabatelli, G.; De Luca, L.; Zanovello, C.; Messineo, A.; Quintavalle, G.; Cisterna, A.M.; Marsella, L.T. Medico-Legal Aspects of Hospital-Acquired Infections: 5-Years of Judgements of the Civil Court of Rome. Healthcare 2022 , 10 , 1336. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bolcato, V.; Tronconi, L.P.; Odone, A.; Blandi, L. Healthcare-Acquired Sars-Cov-2 Infection: A Viable Legal Category? Int. J. Risk Saf. Med. 2023 , 34 , 129–134. [ Google Scholar ] [ CrossRef ]
  • Tattoli, L.; Dell’erba, A.; Ferorelli, D.; Gasbarro, A.; Solarino, B. Sepsis and Nosocomial Infections: The Role of Medico-Legal Experts in Italy. Antibiotics 2019 , 8 , 199. [ Google Scholar ] [ CrossRef ]
  • Barranco, R.; Caristo, I.; Spigno, F.; Ponzano, M.; Trevisan, A.; Signori, A.; Di Biagio, A.; Ventura, F. Management of the Medico-Legal Dispute of Healthcare-Related SARS-CoV-2 Infections: Evaluation Criteria and Case Study in a Large University Hospital in Northwest Italy from 2020 to 2021. Int. J. Environ. Res. Public Health 2022 , 19 , 16764. [ Google Scholar ] [ CrossRef ]
  • Goldenberg, S.D.; Volpé, H.; French, G.L. Clinical Negligence, Litigation and Healthcare-Associated Infections. J. Hosp. Infect. 2012 , 81 , 156–162. [ Google Scholar ] [ CrossRef ]
  • Rizzo, N. La Causalità Civile ; Jus Civile; Giappichelli Editore: Torino, Italy, 2022. [ Google Scholar ]
  • Riccardi, N.; Monticelli, J.; Antonello, R.M.; Luzzati, R.; Gabrielli, M.; Ferrarese, M.; Codecasa, L.; Di Bella, S.; Giacobbe, D.R. Mycobacterium chimaera Infections: An Update. J. Infect. Chemother. 2020 , 26 , 199–205. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Natanti, A.; Palpacelli, M.; Valsecchi, M.; Tagliabracci, A.; Pesaresi, M. Mycobacterium chimaera : A Report of 2 New Cases and Literature Review. Int. J. Leg. Med. 2021 , 135 , 2667–2679. [ Google Scholar ] [ CrossRef ]
  • Wetzstein, N.; Kohl, T.A.; Diricks, M.; Mas-Peiro, S.; Holubec, T.; Kessel, J.; Graf, C.; Koch, B.; Herrmann, E.; Vehreschild, M.J.G.T.; et al. Clinical Characteristics and Outcome of Mycobacterium chimaera Infections after Cardiac Surgery: Systematic Review and Meta-Analysis of 180 Heater-Cooler Unit-Associated Cases. Clin. Microbiol. Infect. 2023 , 29 , 1008–1014. [ Google Scholar ] [ CrossRef ]
  • Desai, A.N.; Hurtado, R.M. Infections and Outbreaks of Nontuberculous Mycobacteria in Hospital Settings. Curr. Treat. Options Infect. Dis. 2018 , 10 , 169–181. [ Google Scholar ] [ CrossRef ]
  • van Ingen, J.; Kohl, T.A.; Kranzer, K.; Hasse, B.; Keller, P.M.; Katarzyna Szafrańska, A.; Hillemann, D.; Chand, M.; Schreiber, P.W.; Sommerstein, R.; et al. Global Outbreak of Severe Mycobacterium chimaera Disease after Cardiac Surgery: A Molecular Epidemiological Study. Lancet Infect. Dis. 2017 , 17 , 1033–1041. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bisognin, F.; Messina, F.; Butera, O.; Nisii, C.; Mazzarelli, A.; Cristino, S.; Pascale, M.R.; Lombardi, G.; Cannas, A.; Dal Monte, P. Investigating the Origin of Mycobacterium chimaera Contamination in Heater-Cooler Units: Integrated Analysis with Fourier Transform Infrared Spectroscopy and Whole-Genome Sequencing. Microbiol. Spectr. 2022 , 10 , e0289322. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Pinzauti, D.; De Giorgi, S.; Fox, V.; Lazzeri, E.; Messina, G.; Santoro, F.; Iannelli, F.; Ricci, S.; Pozzi, G. Complete Genome Sequences of Mycobacterium chimaera Strains 850 and 852, Isolated from Heater-Cooler Unit Water. Microbiol. Resour. Announc. 2022 , 11 , e0102121. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hasan, N.A.; Warren, R.L.; Elaine Epperson, L.; Malecha, A.; Alexander, D.C.; Turenne, C.Y.; MacMillan, D.; Birol, I.; Pleasance, S.; Coope, R.; et al. Complete Genome Sequence of Mycobacterium chimaera SJ42, a Nonoutbreak Strain from an Immunocompromised Patient with Pulmonary Disease. Genome Announc. 2017 , 5 , e00963-17. [ Google Scholar ] [ CrossRef ]
  • Wallace, R.J.; Iakhiaeva, E.; Williams, M.D.; Brown-Elliott, B.A.; Vasireddy, S.; Vasireddy, R.; Lande, L.; Peterson, D.D.; Sawicki, J.; Kwait, R.; et al. Absence of Mycobacterium Intracellulare and Presence of Mycobacterium chimaera in Household Water and Biofilm Samples of Patients in the United States with Mycobacterium avium Complex Respiratory Disease. J. Clin. Microbiol. 2013 , 51 , 1747–1752. [ Google Scholar ] [ CrossRef ]
  • Falkinham, J.O.; Hilborn, E.D.; Arduino, M.J.; Pruden, A.; Edwards, M.A. Epidemiology and Ecology of Opportunistic Premise Plumbing Pathogens: Legionella pneumophila , Mycobacterium avium , and Pseudomonas aeruginosa . Environ. Health Perspect. 2015 , 123 , 749–758. [ Google Scholar ] [ CrossRef ]
  • European Centre for Disease Prevention and Control. EU Protocol for Testing of M. chimaera Infections Potentially Associated with Heater-Cooler Units Environmental Microbiology Investigations ; Technical Document; European Centre for Disease Prevention and Control: Solna, Sweden, 2015. [ Google Scholar ]
  • Ministero della Salute. Raccomandazioni per Il Controllo Dell’infezione da Mycobacterium chimaera in Italia ; Ministero della Salute: Roma, Italy, 2019.
  • Bolcato, M.; Rodriguez, D.; Aprile, A. Risk Management in the New Frontier of Professional Liability for Nosocomial Infection: Review of the Literature on Mycobacterium chimaera . Int. J. Environ. Res. Public Health 2020 , 17 , 7328. [ Google Scholar ] [ CrossRef ]
  • Achermann, Y.; Rössle, M.; Hoffmann, M.; Deggim, V.; Kuster, S.; Zimmermann, D.R.; Bloemberg, G.; Hombach, M.; Hasse, B. Prosthetic Valve Endocarditis and Bloodstream Infection Due to Mycobacterium chimaera . J. Clin. Microbiol. 2013 , 51 , 1769–1773. [ Google Scholar ] [ CrossRef ]
  • Zabost, A.T.; Szturmowicz, M.; Brzezińska, S.A.; Klatt, M.D.; Augustynowicz-Kopeć, E.M. Mycobacterium chimaera as an Underestimated Cause of NTM Lung Diseases in Patients Hospitalized in Pulmonary Wards. Pol. J. Microbiol. 2021 , 70 , 315–320. [ Google Scholar ] [ CrossRef ]
  • Truden, S.; Žolnir-Dovč, M.; Sodja, E.; Starčič Erjavec, M. Nationwide Analysis of Mycobacterium chimaera and Mycobacterium intracellulare Isolates: Frequency, Clinical Importance, and Molecular and Phenotypic Resistance Profiles. Infect. Genet. Evol. 2020 , 82 , 104311. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021 , 372 , 71. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bills, N.D.; Hinrichs, S.H.; Aden, T.A.; Wickert, R.S.; Iwen, P.C. Molecular Identification of Mycobacterium chimaera as a Cause of Infection in a Patient with Chronic Obstructive Pulmonary Disease. Diagn. Microbiol. Infect. Dis. 2009 , 63 , 292–295. [ Google Scholar ] [ CrossRef ]
  • Cohen-Bacrie, S.; David, M.; Stremler, N.; Dubus, J.-C.; Rolain, J.-M.; Drancourt, M. Mycobacterium chimaera Pulmonary Infection Complicating Cystic Fibrosis: A Case Report. J. Med. Case Rep. 2011 , 5 , 473. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Alhanna, J.; Purucker, M.; Steppert, C.; Grigull-Daborn, A.; Schiffel, G.; Gruber, H.; Borgmann, S. Mycobacterium chimaera Causes Tuberculosis-like Infection in a Male Patient with Anorexia Nervosa. Int. J. Eat. Disord. 2012 , 45 , 450–452. [ Google Scholar ] [ CrossRef ]
  • Gunaydin, M.; Yanik, K.; Eroglu, C.; Sanic, A.; Ceyhan, I.; Erturan, Z.; Durmaz, R. Distribution of Nontuberculous Mycobacteria Strains. Ann. Clin. Microbiol. Antimicrob. 2013 , 12 , 33. [ Google Scholar ] [ CrossRef ]
  • Boyle, D.P.; Zembower, T.R.; Reddy, S.; Qi, C. Comparison of Clinical Features, Virulence, and Relapse among Mycobacterium avium Complex Species. Am. J. Respir. Crit. Care Med. 2015 , 191 , 1310–1317. [ Google Scholar ] [ CrossRef ]
  • Mwikuma, G.; Kwenda, G.; Hang’ombe, B.M.; Simulundu, E.; Kaile, T.; Nzala, S.; Siziya, S.; Suzuki, Y. Molecular Identification of Non-Tuberculous Mycobacteria Isolated from Clinical Specimens in Zambia. Ann. Clin. Microbiol. Antimicrob. 2015 , 14 , 1. [ Google Scholar ] [ CrossRef ]
  • Moon, S.M.; Kim, S.Y.; Jhun, B.W.; Lee, H.; Park, H.Y.; Jeon, K.; Huh, H.J.; Ki, C.S.; Lee, N.Y.; Shin, S.J.; et al. Clinical Characteristics and Treatment Outcomes of Pulmonary Disease Caused by Mycobacterium chimaera . Diagn. Microbiol. Infect. Dis. 2016 , 86 , 382–384. [ Google Scholar ] [ CrossRef ]
  • Moutsoglou, D.M.; Merritt, F.; Cumbler, E. Disseminated Mycobacterium chimaera Presenting as Vertebral Osteomyelitis. Case Rep. Infect. Dis. 2017 , 2017 , 9893743. [ Google Scholar ] [ CrossRef ]
  • Bursle, E.; Playford, E.G.; Coulter, C.; Griffin, P. First Australian Case of Disseminated Mycobacterium chimaera Infection Post-Cardiothoracic Surgery. Infect. Dis. Health 2017 , 22 , 1–5. [ Google Scholar ] [ CrossRef ]
  • Kim, S.-Y.; Shin, S.H.; Moon, S.M.; Yang, B.; Kim, H.; Kwon, O.J.; Huh, H.J.; Ki, C.-S.; Lee, N.Y.; Shin, S.J.; et al. Distribution and Clinical Significance of Mycobacterium avium Complex Species Isolated from Respiratory Specimens. Diagn. Microbiol. Infect. Dis. 2017 , 88 , 125–137. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Chand, M.; Lamagni, T.; Kranzer, K.; Hedge, J.; Moore, G.; Parks, S.; Collins, S.; Del Ojo Elias, C.; Ahmed, N.; Brown, T.; et al. Insidious Risk of Severe Mycobacterium chimaera Infection in Cardiac Surgery Patients. Clin. Infect. Dis. 2017 , 64 , 335–342. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Truden, S.; Žolnir-Dovč, M.; Sodja, E.; Starčič Erjavec, M. Retrospective Analysis of Slovenian Mycobacterium avium Complex and Mycobacterium abscessus Complex Isolates and Molecular Resistance Profile. Russ. J. Infect. Immun. 2018 , 8 , 447–451. [ Google Scholar ] [ CrossRef ]
  • Larcher, R.; Lounnas, M.; Dumont, Y.; Michon, A.L.; Bonzon, L.; Chiron, R.; Carriere, C.; Klouche, K.; Godreuil, S. Mycobacterium chimaera Pulmonary Disease in Cystic Fibrosis Patients, France, 2010–2017. Emerg. Infect. Dis. 2019 , 25 , 611–613. [ Google Scholar ] [ CrossRef ]
  • Shafizadeh, N.; Hale, G.; Bhatnagar, J.; Alshak, N.S.; Nomura, J. Mycobacterium chimaera Hepatitis: A New Disease Entity. Am. J. Surg. Pathol. 2019 , 43 , 244–250. [ Google Scholar ] [ CrossRef ]
  • Rosero, C.I.; Shams, W.E. Mycobacterium chimaera Infection Masquerading as a Lung Mass in a Healthcare Worker. IDCases 2019 , 15 , e00526. [ Google Scholar ] [ CrossRef ]
  • Watanabe, R.; Seino, H.; Taniuchi, S.; Igusa, R. Mycobacterium chimaera -Induced Tenosynovitis in a Patient with Rheumatoid Arthritis. BMJ Case Rep. 2020 , 13 , e233868. [ Google Scholar ] [ CrossRef ]
  • Chen, L.C.; Huang, H.N.; Yu, C.J.; Chien, J.Y.; Hsueh, P.R. Clinical Features and Treatment Outcomes of Mycobacterium chimaera Lung Disease and Antimicrobial Susceptibility of the Mycobacterial Isolates. J. Infect. 2020 , 80 , 437–443. [ Google Scholar ] [ CrossRef ]
  • Maalouly, C.; Devresse, A.; Martin, A.; Rodriguez-Villalobos, H.; Kanaan, N.; Belkhir, L. Coinfection of Mycobacterium malmoense and Mycobacterium chimaera in a Kidney Transplant Recipient: A Case Report and Review of the Literature. Transpl. Infect. Dis. 2020 , 22 , e13241. [ Google Scholar ] [ CrossRef ]
  • de Melo Carvalho, R.; Nunes, A.L.; Sa, R.; Ramos, I.; Valente, C.; Saraiva da Cunha, J. Mycobacterium chimaera Disseminated Infection. J. Med. Cases 2020 , 11 , 35–36. [ Google Scholar ] [ CrossRef ]
  • Sharma, K.; Sharma, M.; Modi, M.; Joshi, H.; Goyal, M.; Sharma, A.; Ray, P.; Rowlinson, M.C. Mycobacterium chimaera and Chronic Meningitis. QJM Int. J. Med. 2020 , 113 , 563–564. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kim, M.J.; Kim, K.M.; Shin, J.I.; Ha, J.H.; Lee, D.H.; Choi, J.G.; Park, J.S.; Byun, J.H.; Yoo, J.W.; Eum, S.; et al. Identification of Nontuberculous Mycobacteria in Patients with Pulmonary Diseases in Gyeongnam, Korea, Using Multiplex PCR and Multigene Sequence-Based Analysis. Can. J. Infect. Dis. Med. Microbiol. 2021 , 2021 , 8844306. [ Google Scholar ] [ CrossRef ]
  • Kavvalou, A.; Stehling, F.; Tschiedel, E.; Kehrmann, J.; Walkenfort, B.; Hasenberg, M.; Olivier, M.; Steindor, M. Biofilm Infection of a Central Venous Port-Catheter Caused by Mycobacterium avium Complex in an Immunocompetent Child with Cystic Fibrosis. BMC Infect. Dis. 2022 , 22 , 321. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Robinson, B.; Chaudhri, M.; Miskoff, J.A. A Case of Cavitary Mycobacterium chimaera . Cureus 2022 , 14 , e26984. [ Google Scholar ] [ CrossRef ]
  • Ahmad, M.; Yousaf, A.; Khan, H.M.W.; Munir, A.; Chandran, A. Mycobacterium chimaera Lung Infection and Empyema in a Patient without Cardiopulmonary Bypass. Bayl. Univ. Med. Cent. Proc. 2022 , 35 , 817–819. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • George, M.; Afra, T.P.; Santhosh, P.; Nandakumar, G.; Balagopalan, D.; Sreedharan, S. Ulcerating Nodules on the Face Due to Mycobacterium chimaera in a Patient with Diabetes. Clin. Exp. Dermatol. 2022 , 47 , 587–589. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Lin, Y.F.; Lee, T.F.; Wu, U.I.; Huang, C.F.; Cheng, A.; Lin, K.Y.; Hung, C.C. Disseminated Mycobacterium chimaera Infection in a Patient with Adult-Onset Immunodeficiency Syndrome: Case Report. BMC Infect. Dis. 2022 , 22 , 665. [ Google Scholar ] [ CrossRef ]
  • Łyżwa, E.; Siemion-Szcześniak, I.; Sobiecka, M.; Lewandowska, K.; Zimna, K.; Bartosiewicz, M.; Jakubowska, L.; Augustynowicz-Kopeć, E.; Tomkowski, W. An Unfavorable Outcome of M. Chimaera Infection in Patient with Silicosis. Diagnostics 2022 , 12 , 1826. [ Google Scholar ] [ CrossRef ]
  • McLaughlin, C.M.; Schade, M.; Cochran, E.; Taylor, K.F. A Case Report of a Novel Atypical Mycobacterial Infection: Mycobacterium chimaera Hand Tenosynovitis. JBJS Case Connect. 2022 , 12 , e22. [ Google Scholar ] [ CrossRef ]
  • Gross, J.E.; Teneback, C.C.; Sweet, J.G.; Caceres, S.M.; Poch, K.R.; Hasan, N.A.; Jia, F.; Epperson, L.E.; Lipner, E.M.; Vang, C.K.; et al. Molecular Epidemiologic Investigation of Mycobacterium intracellulare Subspecies Chimaera Lung Infections at an Adult Cystic Fibrosis Program. Ann. Am. Thorac. Soc. 2023 , 20 , 677–686. [ Google Scholar ] [ CrossRef ]
  • Azzarà, C.; Lombardi, A.; Gramegna, A.; Ori, M.; Gori, A.; Blasi, F.; Bandera, A. Non-Tuberculous Mycobacteria Lung Disease Due to Mycobacterium chimaera in a 67-Year-Old Man Treated with Immune Checkpoint Inhibitors for Lung Adenocarcinoma: Infection Due to Dysregulated Immunity? BMC Infect. Dis. 2023 , 23 , 573. [ Google Scholar ] [ CrossRef ]
  • Pradhan, A.; Martinez, E.; Sintchenko, V.; Post, J.; Overton, K. Case of Mycobacterium chimaera Vertebral Osteomyelitis Diagnosed 7 Years after Cardiac Surgery. Intern. Med. J. 2023 , 53 , 150–151. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Garcia-Prieto, F.; Rodríguez Perojo, A.; Río Ramírez, M.T. Endobronchial Fibroanthracosis Associated with Mycobacterium chimaera Infection: An Exceptional Case. Open Respir. Arch. 2024 , 6 , 100309. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Paul, S.; MacNair, A.; Lostarakos, V.; Capstick, R. Non-Tuberculous Mycobacterial Pulmonary Infection Presenting in a Patient with Unilateral Pulmonary Artery Agenesis. BMJ Case Rep. 2024 , 17 , e259125. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bittner, M.J.; Preheim, L.C. Other Slow-Growing Nontuberculous Mycobacteria. Microbiol. Spectr. 2016 , 4 , 767–776. [ Google Scholar ] [ CrossRef ]
  • Tortoli, E.; Rindi, L.; Garcia, M.J.; Chiaradonna, P.; Dei, R.; Garzelli, C.; Kroppenstedt, R.M.; Lari, N.; Mattei, R.; Mariottini, A.; et al. Proposal to Elevate the Genetic Variant MAC-A Included in the Mycobacterium avium Complex, to Species Rank as Mycobacterium chimaera sp. nov. Int. J. Syst. Evol. Microbiol. 2004 , 54 , 1277–1285. [ Google Scholar ] [ CrossRef ]
  • Turankar, R.P.; Singh, V.; Gupta, H.; Pathak, V.K.; Ahuja, M.; Singh, I.; Lavania, M.; Dinda, A.K.; Sengupta, U. Association of Non-Tuberculous Mycobacteria with Mycobacterium leprae in Environment of Leprosy Endemic Regions in India. Infect. Genet. Evol. 2019 , 72 , 191–198. [ Google Scholar ] [ CrossRef ]
  • Makovcova, J.; Slany, M.; Babak, V.; Slana, I.; Kralik, P. The Water Environment as a Source of Potentially Pathogenic Mycobacteria. J. Water Health 2014 , 12 , 254–263. [ Google Scholar ] [ CrossRef ]
  • Falkinham, J.O. Ecology of Nontuberculous Mycobacteria-Where Do Human Infections Come From? Semin. Respir. Crit. Care Med. 2013 , 34 , 95–102. [ Google Scholar ] [ CrossRef ]
  • Norton, G.J.; Williams, M.; Falkinham, J.O., III; Honda, J.R. Physical Measures to Reduce Exposure to Tap Water-Associated Nontuberculous Mycobacteria. Front. Public Health 2020 , 8 , 190. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Durnez, L.; Eddyani, M.; Mgode, G.F.; Katakweba, A.; Katholi, C.R.; Machang’u, R.R.; Kazwala, R.R.; Portaels, F.; Leirs, H. First Detection of Mycobacteria in African Rodents and Insectivores, Using Stratified Pool Screening. Appl. Environ. Microbiol. 2008 , 74 , 768. [ Google Scholar ] [ CrossRef ]
  • Sax, H.; Bloemberg, G.; Hasse, B.; Sommerstein, R.; Kohler, P.; Achermann, Y.; Rössle, M.; Falk, V.; Kuster, S.P.; Böttger, E.C.; et al. Prolonged Outbreak of Mycobacterium chimaera Infection after Open-Chest Heart Surgery. Clin. Infect. Dis. 2015 , 61 , 67–75. [ Google Scholar ] [ CrossRef ]
  • Falkinham, J.O.; Williams, M.D. Desiccation-Tolerance of Mycobacterium avium , Mycobacterium intracellulare , Mycobacterium chimaera , Mycobacterium abscessus and Mycobacterium chelonae . Pathogens 2022 , 11 , 463. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Gebert, M.J.; Delgado-Baquerizo, M.; Oliverio, A.M.; Webster, T.M.; Nichols, L.M.; Honda, J.R.; Chan, E.D.; Adjemian, J.; Dunn, R.R.; Fierer, N. Ecological Analyses of Mycobacteria in Showerhead Biofilms and Their Relevance to Human Health. mBio 2018 , 9 , e01614–e01618. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Cao, Y.; Yuan, S.; Pang, L.; Xie, J.; Gao, Y.; Zhang, J.; Zhao, Z.; Yao, S. Study on Microbial Diversity of Washing Machines. Biodegradation 2024 , 1–13. [ Google Scholar ] [ CrossRef ]
  • Liu, H.; Jiao, P.; Guan, L.; Wang, C.; Zhang, X.X.; Ma, L. Functional Traits and Health Implications of the Global Household Drinking-Water Microbiome Retrieved Using an Integrative Genome-Centric Approach. Water Res. 2024 , 250 , 121094. [ Google Scholar ] [ CrossRef ]
  • Choi, J.Y.; Sim, B.R.; Park, Y.; Yong, S.H.; Shin, S.J.; Kang, Y.A. Identification of Nontuberculous Mycobacteria Isolated from Household Showerheads of Patients with Nontuberculous Mycobacteria. Sci. Rep. 2022 , 12 , 8648. [ Google Scholar ] [ CrossRef ]
  • Shen, Y.; Haig, S.J.; Prussin, A.J.; Lipuma, J.J.; Marr, L.C.; Raskin, L. Shower Water Contributes Viable Nontuberculous Mycobacteria to Indoor Air. PNAS Nexus 2022 , 1 , pgac145. [ Google Scholar ] [ CrossRef ]
  • Struelens, M.J.; Plachouras, D. Mycobacterium chimaera Infections Associated with Heater-Cooler Units (HCU): Closing Another Loophole in Patient Safety. Eurosurveillance 2016 , 21 , 30397. [ Google Scholar ] [ CrossRef ]
  • Trudzinski, F.C.; Schlotthauer, U.; Kamp, A.; Hennemann, K.; Muellenbach, R.M.; Reischl, U.; Gärtner, B.; Wilkens, H.; Bals, R.; Herrmann, M.; et al. Clinical Implications of Mycobacterium chimaera Detection in Thermoregulatory Devices Used for Extracorporeal Membrane Oxygenation (ECMO), Germany, 2015 to 2016. Eurosurveillance 2016 , 21 , 30398. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Schnetzinger, M.; Heger, F.; Indra, A.; Kimberger, O. Bacterial Contamination of Water Used as Thermal Transfer Fluid in Fluid-Warming Devices. J. Hosp. Infect. 2023 , 141 , 49–54. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Barker, T.A.; Dandekar, U.; Fraser, N.; Dawkin, L.; Sweeney, P.; Heron, F.; Simmons, J.; Parmar, J. Minimising the Risk of Mycobacterium chimaera Infection during Cardiopulmonary Bypass by the Removal of Heater-Cooler Units from the Operating Room. Perfusion 2018 , 33 , 264–269. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Barnes, S.; Twomey, C.; Carrico, R.; Murphy, C.; Warye, K. OR Air Quality: Is It Time to Consider Adjunctive Air Cleaning Technology? AORN J. 2018 , 108 , 503–515. [ Google Scholar ] [ CrossRef ]
  • Walker, J.T.; Lamagni, T.; Chand, M. Evidence That Mycobacterium chimaera Aerosols Penetrate Laminar Airflow and Result in Infections at the Surgical Field. Lancet Infect. Dis. 2017 , 17 , 1019. [ Google Scholar ] [ CrossRef ]
  • Schlotthauer, U.; Hennemann, K.; Gärtner, B.C.; Schäfers, H.-J.; Becker, S.L. Microbiological Surveillance of Heater-Cooler Units Used in Cardiothoracic Surgery for Detection of Mycobacterium chimaera . Thorac. Cardiovasc. Surg. 2022 , 72 , 59–62. [ Google Scholar ] [ CrossRef ]
  • Gross, J.E.; Caceres, S.; Poch, K.; Epperson, L.E.; Hasan, N.A.; Jia, F.; de Moura, V.C.N.; Strand, M.; Lipner, E.M.; Honda, J.R.; et al. Prospective Healthcare-Associated Links in Transmission of Nontuberculous Mycobacteria among People with Cystic Fibrosis (PHALT NTM) Study: Rationale and Study Design. PLoS ONE 2023 , 18 , e0291910. [ Google Scholar ] [ CrossRef ]
  • Nakamura, S.; Azuma, M.; Sato, M.; Fujiwara, N.; Nishino, S.; Wada, T.; Yoshida, S. Pseudo-Outbreak of Mycobacterium chimaera through Aerators of Hand-Washing Machines at a Hematopoietic Stem Cell Transplantation Center. Infect. Control Hosp. Epidemiol. 2019 , 40 , 1433–1435. [ Google Scholar ] [ CrossRef ]
  • Kanamori, H.; Weber, D.J.; Rutala, W.A. Healthcare-Associated Mycobacterium chimaera Transmission and Infection Prevention Challenges: Role of Heater-Cooler Units as a Water Source in Cardiac Surgery. Clin. Infect. Dis. 2017 , 64 , 343–346. [ Google Scholar ] [ CrossRef ]
  • Rao, M.; Silveira, F.P. Non-Tuberculous Mycobacterial Infections in Thoracic Transplant Candidates and Recipients. Curr. Infect. Dis. Rep. 2018 , 20 , 14. [ Google Scholar ] [ CrossRef ]
  • Walker, J.; Moore, G.; Collins, S.; Parks, S.; Garvey, M.I.; Lamagni, T.; Smith, G.; Dawkin, L.; Goldenberg, S.; Chand, M. Microbiological Problems and Biofilms Associated with Mycobacterium chimaera in Heater–Cooler Units Used for Cardiopulmonary Bypass. J. Hosp. Infect. 2017 , 96 , 209–220. [ Google Scholar ] [ CrossRef ]
  • Born, F.; Wieser, A.; Oberbach, A.; Oberbach, A.; Ellgass, R.; Peterss, S.; Kur, F.; Grabein, B.; Hagl, C. Five Years without Mycobacterium chimaera . Thorac. Cardiovasc. Surg. 2020 , 68 , S1–S72. [ Google Scholar ] [ CrossRef ]
  • Scriven, J.E.; Scobie, A.; Verlander, N.Q.; Houston, A.; Collyns, T.; Cajic, V.; Kon, O.M.; Mitchell, T.; Rahama, O.; Robinson, A.; et al. Mycobacterium chimaera Infection Following Cardiac Surgery in the United Kingdom: Clinical Features and Outcome of the First 30 Cases. Clin. Microbiol. Infect. 2018 , 24 , 1164–1170. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Olatidoye, O.A.; Samat, S.H.; Yin, K.; Bates, M.J. Pulmonary Valve Infective Endocarditis Caused by Mycobacterium abscessus . J. Cardiothorac. Surg. 2023 , 18 , 221. [ Google Scholar ] [ CrossRef ]
  • Ganatra, S.; Sharma, A.; D’Agostino, R.; Gage, T.; Kinnunen, P. Mycobacterium chimaera Mimicking Sarcoidosis. Methodist Debakey Cardiovasc. J. 2018 , 14 , 301–302. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Buchanan, R.; Agarwal, A.; Mathai, E.; Cherian, B. Mycobacterium chimaera : A Novel Pathogen with Potential Risk to Cardiac Surgical Patients. Natl. Med. J. India 2020 , 33 , 284–287. [ Google Scholar ] [ CrossRef ]
  • McHugh, J.; Saleh, O.A. Updates in Culture-Negative Endocarditis. Pathogens 2023 , 12 , 1027. [ Google Scholar ] [ CrossRef ]
  • Delgado, V.; Ajmone Marsan, N.; De Waha, S.; Bonaros, N.; Brida, M.; Burri, H.; Caselli, S.; Doenst, T.; Ederhy, S.; Erba, P.A.; et al. 2023 ESC Guidelines for the Management of Endocarditis. Eur. Heart J. 2023 , 44 , 3948–4042. [ Google Scholar ] [ CrossRef ]
  • Kohler, P.; Kuster, S.P.; Bloemberg, G.; Schulthess, B.; Frank, M.; Tanner, F.C.; Rössle, M.; Böni, C.; Falk, V.; Wilhelm, M.J.; et al. Healthcare-Associated Prosthetic Heart Valve, Aortic Vascular Graft, and Disseminated Mycobacterium chimaera Infections Subsequent to Open Heart Surgery. Eur. Heart J. 2015 , 36 , 2745–2753. [ Google Scholar ] [ CrossRef ]
  • Wyrostkiewicz, D.; Opoka, L.; Filipczak, D.; Jankowska, E.; Skorupa, W.; Augustynowicz-Kopeć, E.; Szturmowicz, M. Nontuberculous Mycobacterial Lung Disease in the Patients with Cystic Fibrosis—A Challenging Diagnostic Problem. Diagnostics 2022 , 12 , 1514. [ Google Scholar ] [ CrossRef ]
  • Virdi, R.; Lowe, M.E.; Norton, G.J.; Dawrs, S.N.; Hasan, N.A.; Epperson, L.E.; Glickman, C.M.; Chan, E.D.; Strong, M.; Crooks, J.L.; et al. Lower Recovery of Nontuberculous Mycobacteria from Outdoor Hawai’i Environmental Water Biofilms Compared to Indoor Samples. Microorganisms 2021 , 9 , 224. [ Google Scholar ] [ CrossRef ]
  • Schweickert, B.; Goldenberg, O.; Richter, E.; Göbel, U.B.; Petrich, A.; Buchholz, P.; Moter, A. Occurrence and Clinical Relevance of Mycobacterium chimaera sp. nov., Germany. Emerg. Infect. Dis. 2008 , 14 , 1443–1446. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Boyle, D.P.; Zembower, T.R.; Qi, C. Evaluation of Vitek MS for Rapid Classification of Clinical Isolates Belonging to Mycobacterium avium Complex. Diagn. Microbiol. Infect. Dis. 2015 , 81 , 41–43. [ Google Scholar ] [ CrossRef ]
  • Sommerstein, R.; Rüegg, C.; Kohler, P.; Bloemberg, G.; Kuster, S.P.; Sax, H. Transmission of Mycobacterium chimaera from Heater-Cooler Units during Cardiac Surgery despite an Ultraclean Air Ventilation System. Emerg. Infect. Dis. 2016 , 22 , 1008–1013. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Balsam, L.B.; Louie, E.; Hill, F.; Levine, J.; Phillips, M.S. Mycobacterium chimaera Left Ventricular Assist Device Infections. J. Card. Surg. 2017 , 32 , 402–404. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Sanchez-Nadales, A.; Diaz-Sierra, A.; Mocadie, M.; Asher, C.; Gordon, S.; Xu, B. Advanced Cardiovascular Imaging for the Diagnosis of Mycobacterium chimaera Prosthetic Valve Infective Endocarditis After Open-Heart Surgery: A Contemporary Systematic Review. Curr. Probl. Cardiol. 2022 , 47 , 101392. [ Google Scholar ] [ CrossRef ]
  • Cannas, A.; Campanale, A.; Minella, D.; Messina, F.; Butera, O.; Nisii, C.; Mazzarelli, A.; Fontana, C.; Lispi, L.; Maraglino, F.; et al. Epidemiological and Molecular Investigation of the Heater–Cooler Unit (HCU)-Related Outbreak of Invasive Mycobacterium chimaera Infection Occurred in Italy. Microorganisms 2023 , 11 , 2251. [ Google Scholar ] [ CrossRef ]
  • Schreiber, P.W.; Kohl, T.A.; Kuster, S.P.; Niemann, S.; Sax, H. The Global Outbreak of Mycobacterium chimaera Infections in Cardiac Surgery—A Systematic Review of Whole-Genome Sequencing Studies and Joint Analysis. Clin. Microbiol. Infect. 2021 , 27 , 1613–1620. [ Google Scholar ] [ CrossRef ]
  • Rubinstein, M.; Grossman, R.; Nissan, I.; Schwaber, M.J.; Carmeli, Y.; Kaidar-Shwartz, H.; Dveyrin, Z.; Rorman, E. Mycobacterium intracellulare Subsp. Chimaera from Cardio Surgery Heating-Cooling Units and from Clinical Samples in Israel Are Genetically Unrelated. Pathogens 2021 , 10 , 1392. [ Google Scholar ] [ CrossRef ]
  • Mercaldo, R.A.; Marshall, J.E.; Prevots, D.R.; Lipner, E.M.; French, J.P. Detecting Clusters of High Nontuberculous Mycobacteria Infection Risk for Persons with Cystic Fibrosis—An Analysis of U.S. Counties. Tuberculosis 2023 , 138 , 102296. [ Google Scholar ] [ CrossRef ]
  • Asadi, T.; Mullin, K.; Roselli, E.; Johnston, D.; Tan, C.D.; Rodriguez, E.R.; Gordon, S. Disseminated Mycobacterium chimaera Infection Associated with Heater-Cooler Units after Aortic Valve Surgery without Endocarditis. J. Thorac. Cardiovasc. Surg. 2018 , 155 , 2369–2374. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Clemente, T.; Spagnuolo, V.; Bottanelli, M.; Ripa, M.; Del Forno, B.; Busnardo, E.; Di Lucca, G.; Castagna, A.; Danise, A. Disseminated Mycobacterium chimaera Infection Favoring the Development of Kaposi’s Sarcoma: A Case Report. Ann. Clin. Microbiol. Antimicrob. 2022 , 21 , 57. [ Google Scholar ] [ CrossRef ]
  • Schaeffer, T.; Kuster, S.; Koechlin, L.; Khanna, N.; Eckstein, F.S.; Reuthebuch, O. Long-Term Follow-Up after Mycobacterium chimaera Infection Following Cardiac Surgery: Single-Center Experience. J. Clin. Med. 2023 , 12 , 948. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Trauth, J.; Matt, U.; Kohl, T.A.; Niemann, S.; Herold, S. Blind Spot in Endocarditis Guidelines: Mycobacterium chimaera Prosthetic Valve Endocarditis after Cardiac Surgery—A Case Series. Eur. Heart J. Case Rep. 2023 , 7 , ytad400. [ Google Scholar ] [ CrossRef ]
  • Sanavio, M.; Anna, A.; Bolcato, M. Mycobacterium chimaera : Clinical and Medico-Legal Considerations Starting from a Case of Sudden Acoustic Damage. Leg. Med. 2020 , 47 , 101747. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • van Ingen, J. Microbiological Diagnosis of Nontuberculous Mycobacterial Pulmonary Disease. Clin. Chest Med. 2015 , 36 , 43–54. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Wang, H.; Bédard, E.; Prévost, M.; Camper, A.K.; Hill, V.R.; Pruden, A. Methodological Approaches for Monitoring Opportunistic Pathogens in Premise Plumbing: A Review. Water Res. 2017 , 117 , 68–86. [ Google Scholar ] [ CrossRef ]
  • Hasse, B.; Hannan, M.M.; Keller, P.M.; Maurer, F.P.; Sommerstein, R.; Mertz, D.; Wagner, D.; Fernández-Hidalgo, N.; Nomura, J.; Manfrin, V.; et al. International Society of Cardiovascular Infectious Diseases Guidelines for the Diagnosis, Treatment and Prevention of Disseminated Mycobacterium chimaera Infection Following Cardiac Surgery with Cardiopulmonary Bypass. J. Hosp. Infect. 2020 , 104 , 214–235. [ Google Scholar ] [ CrossRef ]
  • Schreiber, P.W.; Köhler, N.; Cervera, R.; Hasse, B.; Sax, H.; Keller, P.M. Detection Limit of Mycobacterium chimaera in Water Samples for Monitoring Medical Device Safety: Insights from a Pilot Experimental Series. J. Hosp. Infect. 2018 , 99 , 284–289. [ Google Scholar ] [ CrossRef ]
  • Daley, C.L.; Iaccarino, J.M.; Lange, C.; Cambau, E.; Wallace, R.J.; Andrejak, C.; Böttger, E.C.; Brozek, J.; Griffith, D.E.; Guglielmetti, L.; et al. Treatment of Nontuberculous Mycobacterial Pulmonary Disease: An Official ATS/ERS/ESCMID/IDSA Clinical Practice Guideline. Clin. Infect. Dis. 2020 , 71 , e1–e36. [ Google Scholar ] [ CrossRef ]
  • Lecorche, E.; Haenn, S.; Mougari, F.; Kumanski, S.; Veziris, N.; Benmansour, H.; Raskine, L.; Moulin, L.; Cambau, E.; Aubry, A.; et al. Comparison of Methods Available for Identification of Mycobacterium chimaera . Clin. Microbiol. Infect. 2018 , 24 , 409–413. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Lyamin, A.V.; Ereshchenko, A.A.; Gusyakova, O.A.; Yanchenko, A.V.; Kozlov, A.V.; Khaliulin, A.V. Comparison of Laboratory Methods for Identifying Members of the Family Mycobacteriaceae. Int. J. Mycobacteriol 2023 , 12 , 129–134. [ Google Scholar ] [ CrossRef ]
  • Togawa, A.; Chikamatsu, K.; Takaki, A.; Matsumoto, Y.; Yoshimura, M.; Tsuchiya, S.; Nakamura, S.; Mitarai, S. Multiple Mutations of Mycobacterium intracellulare Subsp. Chimaera Causing False-Negative Reaction to the Transcription-Reverse Transcription Concerted Method for Pathogen Detection. Int. J. Infect. Dis. 2023 , 133 , 14–17. [ Google Scholar ] [ CrossRef ]
  • Kuehl, R.; Banderet, F.; Egli, A.; Keller, P.M.; Frei, R.; Döbele, T.; Eckstein, F.; Widmer, A.F. Different Types of Heater-Cooler Units and Their Risk of Transmission of Mycobacterium chimaera during Open-Heart Surgery: Clues from Device Design. Infect. Control Hosp. Epidemiol. 2018 , 39 , 834–840. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Quintás Viqueira, A.; Pérez Romero, C.; Toro Rueda, C.; Sánchez Calles, A.M.; Blázquez González, J.A.; Alejandre Leyva, M. Mycobacterium chimaera in Heater-Cooler Devices: An Experience in a Tertiary Hospital in Spain. New Microbes New Infect. 2021 , 39 , 100757. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Falkinham, J.O., III. Disinfection and Cleaning of Heater-Cooler Units: Suspension- and Biofilm-Killing. J. Hosp. Infect. 2020 , 105 , 552–557. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hammer-Dedet, F.; Dupont, C.; Evrevin, M.; Jumas-Bilak, E.; Romano-Bertrand, S. Improved Detection of Non-Tuberculous Mycobacteria in Hospital Water Samples. Infect. Dis. Now 2021 , 51 , 488–491. [ Google Scholar ] [ CrossRef ]
  • Romano-Bertrand, S.; Evrevin, M.; Dupont, C.; Frapier, J.M.; Sinquet, J.C.; Bousquet, E.; Albat, B.; Jumas-Bilak, E. Persistent Contamination of Heater-Cooler Units for Extracorporeal Circulation Cured by Chlorhexidine-Alcohol in Water Tanks. J. Hosp. Infect. 2018 , 99 , 290–294. [ Google Scholar ] [ CrossRef ]
  • Colangelo, N.; Giambuzzi, I.; Moro, M.; Pasqualini, N.; Aina, A.; De Simone, F.; Blasio, A.; Alfieri, O.; Castiglioni, A.; De Bonis, M. Mycobacterium chimaera in Heater–Cooler Units: New Technical Approach for Treatment, Cleaning and Disinfection Protocol. Perfusion 2019 , 34 , 272–276. [ Google Scholar ] [ CrossRef ]
  • Ditommaso, S.; Giacomuzzi, M.; Memoli, G.; Garlasco, J.; Curtoni, A.; Iannaccone, M.; Zotti, C.M. Chemical Susceptibility Testing of Non-Tuberculous Mycobacterium strains and Other Aquatic Bacteria: Results of a Study for the Development of a More Sensitive and Simple Method for the Detection of NTM in Environmental Samples. J. Microbiol. Methods 2022 , 193 , 106405. [ Google Scholar ] [ CrossRef ]
  • Shrimpton, N.Y.R. Evaluation of Disinfection Processes for Water Heater Devices Used for Extracorporeal Life Support. Perfusion 2019 , 34 , 428–432. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Sarink, M.J.; van Cappellen, W.A.; Tielens, A.G.M.; van Dijk, A.; Bogers, A.J.J.C.; de Steenwinkel, J.E.M.; Vos, M.C.; Severin, J.A.; van Hellemond, J.J. Vermamoeba Vermiformis Resides in Water-Based Heater–Cooler Units and Can Enhance Mycobacterium chimaera Survival after Chlorine Exposure. J. Hosp. Infect. 2023 , 132 , 73–77. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bengtsson, D.; Westerberg, M.; Nielsen, S.; Ridell, M.; Jönsson, B. Mycobacterium chimaera in Heater-Cooler Units Used during Cardiac Surgery–Growth and Decontamination. Infect. Dis. 2018 , 50 , 736–742. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Weitkemper, H.H.; Spilker, A.; Knobl, H.J.; Körfer, R. The Heater-Cooler Unit-A Conceivable Source of Infection. J. Extra Corpor. Technol. 2016 , 48 , 62–66. [ Google Scholar ] [ CrossRef ]
  • Foltan, M.; Nikisch, A.; Dembianny, J.; Miano, A.L.; Heinze, J.; Klar, D.; Göbölös, L.; Lehle, K.; Schmid, C. A Solution for Global Hygienic Challenges Regarding the Application of Heater-Cooler Systems in Cardiac Surgery. Perfusion 2023 , 38 , 28–36. [ Google Scholar ] [ CrossRef ]
  • Pradal, I.; Esteban, J.; Mediero, A.; García-Coca, M.; Aguilera-Correa, J.J. Contact Effect of a Methylobacterium sp. Extract on Biofilm of a Mycobacterium chimaera Strain Isolated from a 3T Heater-Cooler System. Antibiotics 2020 , 9 , 474. [ Google Scholar ] [ CrossRef ]
  • Masaka, E.; Reed, S.; Davidson, M.; Oosthuizen, J. Opportunistic Premise Plumbing Pathogens. A Potential Health Risk in Water Mist Systems Used as a Cooling Intervention. Pathogens 2021 , 10 , 462. [ Google Scholar ] [ CrossRef ]
  • Treglia, M.; Pallocci, M.; Ricciardi Tenore, G.; Castellani, P.; Pizzuti, F.; Bianco, G.; Passalacqua, P.; De Luca, L.; Zanovello, C.; Mazzuca, D.; et al. Legionella and Air Transport: A Study of Environmental Contamination. Int. J. Environ. Res. Public Health 2022 , 19 , 8069. [ Google Scholar ] [ CrossRef ]
  • Glassmeyer, S.T.; Burns, E.E.; Focazio, M.J.; Furlong, E.T.; Gribble, M.O.; Jahne, M.A.; Keely, S.P.; Kennicutt, A.R.; Kolpin, D.W.; Medlock Kakaley, E.K.; et al. Water, Water Everywhere, but Every Drop Unique: Challenges in the Science to Understand the Role of Contaminants of Emerging Concern in the Management of Drinking Water Supplies. Geohealth 2023 , 7 , e2022GH000716. [ Google Scholar ] [ CrossRef ]
  • Ortiz-Martínez, Y. Mycobacterium chimaera : An under-Diagnosed Pathogen in Developing Countries? J. Hosp. Infect. 2017 , 97 , 125–126. [ Google Scholar ] [ CrossRef ]
  • Becker, J.B.; Moisés, V.A.; Guerra-Martín, M.D.; Barbosa, D.A. Epidemiological Differences, Clinical Aspects, and Short-Term Prognosis of Patients with Healthcare-Associated and Community-Acquired Infective Endocarditis. Infect. Prev. Pract. 2024 , 6 , 100343. [ Google Scholar ] [ CrossRef ]
  • Ferrara, S.D.; Baccino, E.; Bajanowski, T.; Boscolo-Berto, R.; Castellano, M.; De Angel, R.; Pauliukevičius, A.; Ricci, P.; Vanezis, P.; Vieira, D.N.; et al. Malpractice and Medical Liability. European Guidelines on Methods of Ascertainment and Criteria of Evaluation. Int. J. Leg. Med. 2013 , 127 , 545–557. [ Google Scholar ] [ CrossRef ]


| Author, Year | N. of Patients | Surgery | Mean Time of Presentation If Previous Surgery | Setting (Country) | Organ and/or Tissue Involved |
| --- | --- | --- | --- | --- | --- |
| Bills et al., 2009 | 1 | None | NA | Not healthcare (USA) | Lung, nodules in chronic obstructive pulmonary disease |
| Cohen-Bacrie et al., 2011 | 1 | None | NA | Possible frequent healthcare contact (Réunion Island, FR) | Lung infections in cystic fibrosis |
| Alhanna et al., 2012 | 1 | None | NA | Not healthcare (Germany) | Lung infection |
| Gunaydin et al., 2013 | 5 (of 90) | None | NA | Possible healthcare contact (Turkey) | Lung (reassessment of sputum specimens) |
| Boyle et al., 2015 | 125 (of 448) | None | NA | Possible healthcare contact (USA) | Lung (reassessment of sputum specimens) |
| Mwikuma et al., 2015 | 1 (of 54) | None | NA | Not healthcare (Zambia) | Lung (reassessment of sputum specimens) |
| Moon et al., 2016 | 11 | None | NA | Not healthcare (South Korea) | Lung infection (reassessment of sputum specimens) |
| Moutsoglou et al., 2017 | 1 | None | NA | Not healthcare (USA) | Disseminated with spinal osteomyelitis and discitis |
| Bursle et al., 2017 | 1 | Tricuspid valve repair and mitral annuloplasty | 13 months | Underwent surgery (Australia) | Disseminated |
| Kim et al., 2017 | 8 (of 91) | None | NA | Possible healthcare contact (Korea) | Lung (reassessment of sputum specimens) |
| Chand et al., 2017 * | 4 | Valvular cardiac surgery | 1.15 (0.25–5.1) years | Underwent surgery (UK) | 1 osteomyelitis and 3 disseminated |
| Truden et al., 2018 | 49 (of 102) | None | NA | Possible healthcare contact (Slovenia) | Lung (reassessment of sputum specimens) |
| Larcher et al., 2019 | 4 | None | NA | Possible frequent healthcare contact (France) | Lung (reassessment of sputum specimens in cystic fibrosis) |
| Shafizadeh et al., 2019 * | 5 | Valvular cardiac surgery | 20.6 (14–29) months | Underwent surgery (USA) | Disseminated with liver infection |
| Rosero and Shams, 2019 | 1 | None, but operating room nurse 10 years ago | >10 years | Possible frequent healthcare contact (USA) | Lung infection |
| Watanabe et al., 2020 | 1 | None | NA | Not healthcare (Japan) | Tendons, hand tenosynovitis |
| Chen et al., 2020 | 28 | None | NA | Not healthcare (Taiwan) | Lung infection (reassessment of sputum specimens) |
| Maalouly et al., 2020 | 1 | Kidney transplantation | One week | Underwent surgery (Belgium) | Kidney, urinary tract infection in a kidney transplant recipient with concomitant Mycobacterium malmoense lung infection and fibroanthracosis |
| de Melo Carvalho et al., 2020 | 1 | None | NA | Possible healthcare contact (Portugal) | Disseminated in B-cell lymphoma |
| Sharma et al., 2020 | 2 | None | NA | Not healthcare (India) | Meninges, meningitis |
| Zabost et al., 2021 | 88 (of 200) | None | NA | Possible healthcare contact (Poland) | Lung infection (reassessment of sputum specimens) |
| Kim et al., 2021 | 4 (of 320) | None | NA | Possible healthcare contact (Korea) | Lung infection (reassessment of sputum specimens) |
| Kavvalou et al., 2022 | 1 | None | NA | Possible healthcare contact (Germany) | Central venous catheter infection in cystic fibrosis |
| Robinson et al., 2022 | 1 | None | NA | Not healthcare (USA) | Lung infection in drug abuser |
| Ahmad et al., 2022 | 1 | None | NA | Not healthcare (USA) | Lung infection in sarcoidosis |
| George et al., 2022 | 1 | None | NA | Not healthcare (India) | Skin, periapical abscess with chin ulcer |
| Lin et al., 2022 | 1 | None | NA | Possible frequent healthcare contact (Taiwan) | Disseminated in adult-onset immunodeficiency syndrome |
| Łyżwa et al., 2022 | 1 | None | NA | Not healthcare (Poland) | Lung infection in silicosis |
| McLaughlin et al., 2022 | 1 | Coronary artery bypass grafting | 1 year | Underwent surgery (USA) | Tendons, hand tenosynovitis in ipsilateral elbow wound in fisherman |
| Gross et al., 2023 | 23 | None | NA | Healthcare (USA) | Lung infections in cystic fibrosis (genomic analysis for cluster correlation to hospital outbreaks) |
| Azzarà et al., 2023 | 1 | None | NA | Possible healthcare contact (Italy) | Lung infection in lung adenocarcinoma treated with immune checkpoint inhibitors |
| Pradhan et al., 2023 | 1 | Bioprosthetic mitral valve replacement | 7 years | Underwent surgery (Australia) | Spinal osteomyelitis and discitis |
| Garcia-Prieto et al., 2024 | 1 | None | NA | Not healthcare (Spain) | Lung infection in fibroanthracosis |
| Paul et al., 2024 | 1 | None | NA | Possible healthcare contact (UK) | Lung infection in unilateral pulmonary artery agenesis on the right side |

Bolcato, V.; Bassetti, M.; Basile, G.; Bianco Prevot, L.; Speziale, G.; Tremoli, E.; Maffessanti, F.; Tronconi, L.P. The State-of-the-Art of Mycobacterium chimaera Infections and the Causal Link with Health Settings: A Systematic Review. Healthcare 2024, 12, 1788. https://doi.org/10.3390/healthcare12171788


Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA


Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times, when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometric increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses reaches publication today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists are required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Guidance for development of evidence syntheses

 Cochrane (formerly Cochrane Collaboration)
 JBI (formerly Joanna Briggs Institute)
 National Institute for Health and Care Excellence (NICE)—United Kingdom
 Scottish Intercollegiate Guidelines Network (SIGN) —Scotland
 Agency for Healthcare Research and Quality (AHRQ)—United States

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods rather than simply copying previous citations or substandard work [ 38 , 39 ]. Similar cautions may potentially extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19, [ 2 , 42 ] but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, clarification of the extent to which each of these factors contributes remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions about a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1 ). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Types of traditional systematic reviews

Review type | Topic assessed | Elements of research question (mnemonic)
Intervention | Benefits and harms of interventions used in healthcare. | Population, Intervention, Comparator, Outcome (PICO)
Diagnostic test accuracy | How well a diagnostic test performs in diagnosing and detecting a particular disease. | Population, Index test(s), and Target condition
Qualitative (Cochrane) | Questions are designed to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences. | Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation (SPICE); Sample, Phenomenon of Interest, Design, Evaluation, Research type (SPIDER); Perspective, Setting, Phenomena of interest/Problem, Environment, Comparison (optional), Time/timing, Findings (PerSPEcTiF)
Qualitative (JBI) | Questions inform meaningfulness and appropriateness of care and the impact of illness through documentation of stakeholder experiences, preferences, and priorities. | Population, the Phenomena of Interest, and the Context (PICo)
Prognostic | Probable course or future outcome(s) of people with a health problem. | Population, Intervention (model), Comparator, Outcomes, Timing, Setting (PICOTS)
Etiology and risk | The relationship (association) between certain factors (e.g., genetic, environmental) and the development of a disease or condition or other health outcome. | Population or groups at risk, Exposure(s), associated Outcome(s) (disease, symptom, or health condition of interest), plus the context/location or the time period and the length of time when relevant (PEO)
Measurement properties | What is the most suitable instrument to measure a construct of interest in a specific study population? | Population, Instrument, Construct, Outcomes
Prevalence and incidence | The frequency, distribution and determinants of specific factors, health states or conditions in a defined population: eg, how common is a particular disease or condition in a specific group of individuals? | Factor, disease, symptom or health Condition of interest, the epidemiological indicator used to measure its frequency (prevalence, incidence), the Population or groups at risk, as well as the Context/location and time period where relevant (CoCoPop)

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Evidence syntheses published by Cochrane and JBI

Cochrane a (total = 8900):
 Intervention: 8572 (96.3%)
 Diagnostic: 176 (1.9%)
 Overview: 64 (0.7%)
 Methodology: 41 (0.45%)
 Qualitative: 17 (0.19%)
 Prognostic: 11 (0.12%)
 Rapid: 11 (0.12%)
 Prototype c: 8 (0.08%)

JBI b (total = 707):
 Effectiveness: 435 (61.5%)
 Diagnostic Test Accuracy: 9 (1.3%)
 Umbrella: 4 (0.6%)
 Mixed Methods: 2 (0.3%)
 Qualitative: 159 (22.5%)
 Prevalence and Incidence: 6 (0.8%)
 Etiology and Risk: 7 (1.0%)
 Measurement Properties: 3 (0.4%)
 Economic: 6 (0.6%)
 Text and Opinion: 1 (0.14%)
 Scoping: 43 (6.0%)
 Comprehensive d: 32 (4.5%)

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.

Fig. 1 Distinguishing types of research evidence

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of interventions (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from cohort studies misclassified as case series can potentially increase confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, thus obfuscating the ability to make an overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Tools specifying standards for systematic reviews with and without meta-analysis

Tools for evaluation of reporting
 Quality of Reporting of Meta-analyses (QUOROM) Statement: Moher 1999
 Meta-analyses Of Observational Studies in Epidemiology (MOOSE): Stroup 2000
 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA): Moher 2009
 PRISMA 2020 a: Page 2021

Tools for evaluation of conduct (methodological quality and risk of bias)
 Overview Quality Assessment Questionnaire (OQAQ): Oxman and Guyatt 1991
 Systematic Review Critical Appraisal Sheet: Centre for Evidence-based Medicine 2005
 A Measurement Tool to Assess Systematic Reviews (AMSTAR): Shea 2007
 AMSTAR-2 a: Shea 2017
 Risk of Bias in Systematic Reviews (ROBIS) a: Whiting 2016

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1  but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake and impact and limitations are also discussed.

Evaluation of conduct

Development.

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section; this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic, as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].

Comparison of AMSTAR-2 and ROBIS

Accompanying guidance: AMSTAR-2: extensive; ROBIS: extensive
Review types covered: AMSTAR-2: intervention; ROBIS: intervention, diagnostic, etiology, prognostic a
Domains: AMSTAR-2: 7 critical, 9 non-critical items; ROBIS: 4 domains
Total number of items: AMSTAR-2: 16; ROBIS: 29
Response options:
 AMSTAR-2: items 1, 3, 5, 6, 10, 13, 14, 16 rated yes or no; items 2, 4, 7, 8, 9 b rated yes, partial yes, or no; items 11 b, 12, 15 rated yes, no, or no meta-analysis conducted
 ROBIS: 24 assessment items rated yes, probably yes, probably no, no, or no information; 5 items regarding level of concern rated low, high, or unclear
Construct of overall rating: AMSTAR-2: confidence based on weaknesses in critical domains; ROBIS: level of concern for risk of bias
Rating categories: AMSTAR-2: high, moderate, low, critically low; ROBIS: low, high, unclear

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines,” “ROBIS AND clinical practice guidelines” 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools imposed by them. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect the use of AMSTAR-2 to be associated with improved methodological rigor, and the use of ROBIS with lower RoB, in recent systematic reviews compared with those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience are also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

PRISMA extensions

PRISMA for systematic reviews with a focus on health equity: PRISMA-E (2012)
Reporting systematic reviews in journal and conference abstracts: PRISMA for Abstracts (2015; 2020) a
PRISMA for systematic review protocols: PRISMA-P (2015)
PRISMA for Network Meta-Analyses: PRISMA-NMA (2015)
PRISMA for Individual Participant Data: PRISMA-IPD (2015)
PRISMA for reviews including harms outcomes: PRISMA-Harms (2016)
PRISMA for diagnostic test accuracy: PRISMA-DTA (2018)
PRISMA for scoping reviews: PRISMA-ScR (2018)
PRISMA for acupuncture: PRISMA-A (2019)
PRISMA for reporting literature searches: PRISMA-S (2021)

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review as opposed to merely completing a checklist when submitting to a journal; at that point, the review is finished, with good or bad methodological choices. Moreover, PRISMA checklists indicate how completely an element of review conduct was reported; they do not evaluate the caliber of conduct or performance of a review. Thus, review authors and readers should not assume that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

Review component | AMSTAR-2 item | ROBIS item | Expectations | Rationale
Methods for study selection | #5 | #2.5 | All three components (study selection, data extraction, RoB assessment) must be done in duplicate, and methods fully described. | Helps to mitigate CoI and bias; also may improve accuracy.
Methods for data extraction | #6 | #3.1 | (as above) | (as above)
Methods for RoB assessment | NA | #3.5 | (as above) | (as above)
Study description | #8 | #3.2 | Research design features, components of research question (eg, PICO), setting, funding sources. | Allows readers to understand the individual studies in detail.
Sources of funding | #10 | NA | Identified for all included studies. | Can reveal CoI or bias.
Publication bias | #15* | #4.5 | Explored, diagrammed, and discussed. | Publication and other selective reporting biases are major threats to the validity of systematic reviews.
Author CoI | #16 | NA | Disclosed, with management strategies described. | If CoI is identified, management strategies must be described to ensure confidence in the review.

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1 ). These prompt authors to consider the specialized methods involved in developing different types of systematic reviews; however, while inclusion of the suggested elements may make a review compliant with a particular methodology, it does not necessarily make the research question appropriate. Table 4.2  lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop runaway scope that allows them to stray from predefined choices relating to key comparisons and outcomes.

Research question development

Acronym | Meaning
FINER a | feasible, interesting, novel, ethical, and relevant
SMART b | specific, measurable, attainable, relevant, timely
TOPICS+M c | time, outcomes, population, intervention, context, study design, plus (effect) moderators

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach; 4th edn. Lippincott Williams & Wilkins; 2007. p. 14–22

b Doran, GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35-6.

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

Systematic reviews with published prospective protocols have been reported to better attain AMSTAR standards [ 134 ]. However, completeness of reporting does not seem to differ between reviews with a protocol and those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Options for protocol registration of evidence syntheses

Journals that consider systematic review protocols for publication a
 BMJ Open
 BioMed Central
 JMIR Research Protocols
 World Journal of Meta-analysis

Registries and repositories for protocol registration
 Cochrane b
 JBI c
 PROSPERO
 Research Registry (Registry of Systematic Reviews/Meta-Analyses)
 International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY)
 Center for Open Science
 Protocols.io
 Figshare
 Open Science Framework
 Zenodo

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized studies may not necessarily be more real-world oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses nor provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and a search of trial registries may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in the gray literature and trial registries [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Some research topics may warrant even less of a delay; in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of its RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB 2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis, and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Common methods for quantitative synthesis

Meta-analysis
 Pairwise meta-analysis (aggregate data, or individual participant data c)
  Synthesis: weighted average of effect estimates; pairwise comparisons of effect estimates with CIs
  Results reported: overall effect estimate, CI, P value; evaluation of heterogeneity
  Presentation: forest plot with summary statistic for average effect estimate b
 Network meta-analysis
  Synthesis: variable (aggregate data or IPD); the interventions are compared directly and indirectly
  Results reported: comparisons of relative effects between any pair of interventions; effect estimates for intervention pairings; summary relative effects for pair-wise comparisons with evaluations of inconsistency and heterogeneity; treatment rankings (ie, probability that an intervention is among the best options)
  Presentation: network diagram or graph, tabular presentations; forest plot, other methods; rankogram plot

Synthesis without meta-analysis e
 Summarizing effect estimates from separate studies (without combination that would provide an average effect estimate)
  Results reported: range and distribution of observed effects, such as median, interquartile range, range
  Presentation: box-and-whisker plot, bubble plot; forest plot (without summary effect estimate)
 Combining P values
  Results reported: combined P value, number of studies
  Presentation: albatross plot (study sample size against P values per outcome)
 Vote counting by direction of effect (eg, favors intervention over the comparator)
  Results reported: proportion of studies with an effect in the direction of interest, CI, P value
  Presentation: harvest plot, effect direction plot

CI confidence interval (or credible interval, if analysis is done in Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. When applied appropriately, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and effect estimates. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4 for details).
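
As an illustration of the aggregate data approach, the sketch below implements minimal fixed-effect inverse-variance pooling together with Cochran's Q and the I² statistic for heterogeneity. The study values are invented and the function is a teaching aid only; actual reviews should rely on established software and statistical guidance rather than hand-rolled code.

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study-level effect estimates.

    estimates  : effect estimates on an additive scale (eg, log risk ratios)
    std_errors : their standard errors
    Returns the pooled estimate, its 95% CI, Cochran's Q, and I^2 (%).
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # Heterogeneity: Cochran's Q and the derived I^2 statistic
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, q, i2

# Hypothetical log risk ratios and standard errors from four studies
log_rr = [-0.35, -0.10, -0.25, 0.05]
se = [0.15, 0.20, 0.10, 0.25]
pooled, ci, q, i2 = fixed_effect_meta(log_rr, se)
print(f"Pooled log RR {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}); "
      f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```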

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].
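
The elementary building block that NMA generalizes is the adjusted indirect comparison: if interventions A and C have each been compared with a common comparator B, an estimate of A versus C can be derived from the two direct comparisons. The sketch below uses the Bucher method with hypothetical numbers; a full NMA jointly models all direct and indirect evidence across the network and assesses inconsistency between the two sources, which this single-loop calculation does not.

```python
import math

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison of A vs C via common comparator B.

    d_ab, d_cb : direct effect estimates (eg, log odds ratios) for A vs B and C vs B
    Returns the indirect A vs C estimate, its standard error, and a 95% CI.
    """
    d_ac = d_ab - d_cb                          # indirect effect estimate
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)  # variances of independent estimates add
    ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)
    return d_ac, se_ac, ci

# Hypothetical log odds ratios from two pairwise meta-analyses
d_ac, se_ac, ci = bucher_indirect(d_ab=-0.40, se_ab=0.12, d_cb=-0.15, se_cb=0.18)
print(f"Indirect A vs C: {d_ac:.2f} (SE {se_ac:.2f}; 95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```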

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentations of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].
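
One of the simplest of these formal methods is summarizing effect estimates from separate studies (Table 4.4) by reporting the distribution of observed effects rather than an average. The fragment below, using invented study values, computes the median, interquartile range, and range of study-level risk ratios that could accompany a box-and-whisker or bubble plot.

```python
import statistics

# Hypothetical risk ratios reported by six included studies
risk_ratios = [0.72, 0.85, 0.90, 1.02, 0.68, 0.95]

rr_sorted = sorted(risk_ratios)
median = statistics.median(rr_sorted)
q1, _, q3 = statistics.quantiles(rr_sorted, n=4)  # lower quartile, median, upper quartile
print(f"Median RR {median:.2f}; IQR {q1:.2f} to {q3:.2f}; "
      f"range {rr_sorted[0]:.2f} to {rr_sorted[-1]:.2f}")
```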

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
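
A minimal sketch of acceptable vote counting by direction of effect follows, with hypothetical counts. It reports the proportion of studies favoring the intervention together with a Wilson confidence interval and an exact sign-test P value, which is the kind of formal summary intended here, in contrast to counting “significant” studies.

```python
import math
from statistics import NormalDist

def vote_count_direction(n_favor, n_total, conf=0.95):
    """Vote counting by direction of effect.

    n_favor : studies whose effect favors the intervention
    n_total : studies with an interpretable direction of effect
    Returns the proportion, a Wilson score CI, and a two-sided sign-test P value.
    """
    p_hat = n_favor / n_total
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # Wilson score interval for a binomial proportion
    denom = 1 + z ** 2 / n_total
    centre = (p_hat + z ** 2 / (2 * n_total)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n_total
                         + z ** 2 / (4 * n_total ** 2)) / denom
    # Exact two-sided sign test against a null of 50:50 directions
    lower = sum(math.comb(n_total, k) for k in range(0, n_favor + 1)) / 2 ** n_total
    upper = sum(math.comb(n_total, k) for k in range(n_favor, n_total + 1)) / 2 ** n_total
    p_value = min(1.0, 2 * min(lower, upper))
    return p_hat, (centre - half, centre + half), p_value

# Hypothetical: 9 of 12 studies show an effect favoring the intervention
prop, ci, p = vote_count_direction(9, 12)
print(f"{prop:.2f} of studies favor the intervention "
      f"(95% CI {ci[0]:.2f} to {ci[1]:.2f}); sign test P = {p:.3f}")
```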

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Report of an overall certainty of evidence assessment in a systematic review is an important new reporting standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability to different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine it, first introduced GRADE in 2004 [ 188 ]. Currently, more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, use of “certainty” of a body of evidence is preferred over the term “quality.” [ 191 ] Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

GRADE criteria for rating certainty of evidence

Criteria for rating down the certainty of evidence a

- Risk of bias
- Imprecision
- Inconsistency
- Indirectness
- Publication bias

Criteria for rating up the certainty of evidence b

- Large magnitude of effect
- Dose–response gradient
- All residual confounding would decrease magnitude of effect (in situations with an effect)

a Applies to randomized studies

b Applies to non-randomized studies

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

GRADE certainty ratings and their interpretation symbols a

⊕⊕⊕⊕ High: We are very confident that the true effect lies close to that of the estimate of the effect
⊕⊕⊕ Moderate: We are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different
⊕⊕ Low: Our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect
⊕ Very low: We have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.
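
For readers who find it helpful to see the workflow laid out procedurally, the sketch below caricatures the rating of a single outcome: start at a level implied by study design, rate down for the domains and up for the criteria in Table 5.1, and record an explicit rationale. This is a didactic simplification under our own naming; real GRADE judgments are made on a continuum and cannot be reduced to arithmetic.

```python
from dataclasses import dataclass, field

LEVELS = ["very low", "low", "moderate", "high"]

@dataclass
class OutcomeAssessment:
    """Didactic sketch of a GRADE-style certainty rating for one outcome."""
    study_design: str                               # "RCT" or "NRSI"
    rate_down: dict = field(default_factory=dict)   # domain -> -1 or -2
    rate_up: dict = field(default_factory=dict)     # criterion -> +1 or +2
    rationale: str = ""

    def certainty(self) -> str:
        start = 3 if self.study_design == "RCT" else 1   # start high vs low
        score = start + sum(self.rate_up.values()) + sum(self.rate_down.values())
        return LEVELS[max(0, min(score, 3))]

# Hypothetical example: RCT evidence rated down for risk of bias and imprecision
pain_outcome = OutcomeAssessment(
    study_design="RCT",
    rate_down={"risk of bias": -1, "imprecision": -1},
    rationale="Unblinded outcome assessment; CI includes both benefit and harm",
)
print(pain_outcome.certainty())  # -> "low"
```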

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric applications of certainty of evidence in the GRADE framework (rating certainty for each outcome in a systematic review, and assessing the evidence across outcomes when formulating recommendations) are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Criteria for using GRADE in a systematic review a

1. The certainty in the evidence (also known as quality of evidence or confidence in the estimates) should be defined consistently with the definitions used by the GRADE Working Group.
2. Explicit consideration should be given to each of the GRADE domains for assessing the certainty in the evidence (although different terminology may be used).
3. The overall certainty in the evidence should be assessed for each important outcome using four or three categories (such as high, moderate, low and/or very low) and definitions for each category that are consistent with the definitions used by the GRADE Working Group.
4. Evidence summaries … should be used as the basis for judgments about the certainty in the evidence.

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included on Table 6.3 , which is introduced in the next section.

Concise Guide to best practices for evidence syntheses, version 1.0 a

The guide covers nine types of evidence syntheses: intervention, diagnostic test accuracy, prognostic, qualitative, prevalence and incidence, etiology and risk, measurement properties, umbrella (overview of reviews), and scoping reviews.

Conduct guidance: methodological guidance for conducting these types of reviews is available from Cochrane and/or JBI, depending on the type of review.

Reporting guidelines

- Protocol: PRISMA-P (all types of reviews)
- Completed review: PRISMA 2020 (intervention, prognostic, prevalence and incidence, etiology and risk, and measurement properties reviews); PRISMA-DTA (diagnostic test accuracy); eMERGe and ENTREQ (qualitative); PRIOR (umbrella reviews); PRISMA-ScR (scoping reviews)
- Synthesis without meta-analysis: SWiM (intervention, prognostic, prevalence and incidence, etiology and risk, and measurement properties reviews); PRISMA-DTA (diagnostic test accuracy); eMERGe and ENTREQ (qualitative); PRIOR (umbrella reviews)

Critical appraisal (risk of bias) of included studies

- Intervention reviews: Cochrane RoB 2 for RCTs; ROBINS-I for NRSI; design-specific tools for other primary research
- Diagnostic test accuracy reviews: QUADAS-2
- Prognostic reviews: QUIPS (factor reviews); PROBAST (model reviews)
- Qualitative reviews: CASP Qualitative Checklist; JBI Critical Appraisal Checklist for Qualitative Research
- Prevalence and incidence reviews: JBI checklist for studies reporting prevalence data
- Etiology and risk reviews: ROBINS-I for NRSI; design-specific tools for other primary research
- Measurement properties reviews: COSMIN RoB Checklist
- Umbrella reviews: AMSTAR-2 or ROBIS (for the included systematic reviews)
- Scoping reviews: not required

Certainty of evidence

- Intervention reviews: GRADE
- Diagnostic test accuracy reviews: GRADE adaptation
- Prognostic reviews: GRADE adaptation
- Qualitative reviews: CERQual; ConQual (meta-aggregative reviews)
- Prevalence and incidence reviews: GRADE adaptation
- Etiology and risk reviews: GRADE adaptation for risk factors
- Measurement properties reviews: GRADE (as applied in the COSMIN manual)
- Umbrella reviews: GRADE (for intervention reviews); risk factor criteria where applicable
- Scoping reviews: not applicable

AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM Synthesis Without Meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. A vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted is thus supported. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Terms relevant to the reporting of health care–related evidence syntheses a

Systematic review: A review that uses explicit, systematic methods to collate and synthesize findings of studies that address a clearly formulated question.
Statistical synthesis: The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect.
Meta-analysis of effect estimates: A statistical technique used to synthesize results when study effect estimates and their variances are available, yielding a quantitative summary of results.
Outcome: An event or measurement collected for participants in a study (such as quality of life, mortality).
Result: The combination of a point estimate (such as a mean difference, risk ratio or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome.
Report: A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information.
Record: The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.
Study: An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses.

a Reproduced from Page and colleagues [ 93 ]

Terminology suggestions for health care–related evidence syntheses

Preferred terms, with potentially problematic alternatives in parentheses:

- Evidence synthesis with meta-analysis, or systematic review with meta-analysis (meta-analysis)
- Overview or umbrella review (systematic review of systematic reviews; review of reviews; meta-review)
- Randomized (experimental)
- Non-randomized (observational)
- Single case experimental design (single-subject research; N-of-1 design)
- Case report or case series (descriptive study)
- Methodological quality (quality)
- Certainty of evidence (quality of evidence; grade of evidence; level of evidence; strength of evidence)
- Qualitative systematic review (qualitative synthesis)
- Synthesis of qualitative data (qualitative synthesis)
- Synthesis without meta-analysis (narrative synthesis; narrative summary; qualitative synthesis; descriptive synthesis; descriptive summary)

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence being summarized is itself sparse, weak, and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the shift in paradigm to living systematic review and NMA platforms [ 232 , 233 ], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and, more importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously conducted and reported.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

The authors thank Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

Funding

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Declarations

The authors declare no competing interests.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews , and the Journal of Pediatric Rehabilitation Medicine .

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
