
How to write the methods section of a systematic review


Covidence breaks down how to write a methods section

The methods section of your systematic review describes what you did, how you did it, and why. Readers need this information to interpret the results and conclusions of the review. Often, a lot of information needs to be distilled into just a few paragraphs. This can be a challenging task, but good preparation and the right tools will help you to set off in the right direction 🗺️🧭.

Systematic reviews are so-called because they are conducted in a way that is rigorous and replicable. So it’s important that these methods are reported in a way that is thorough, clear, and easy to navigate for the reader – whether that’s a patient, a healthcare worker, or a researcher. 

Like most things in a systematic review, the methods should be planned upfront and ideally described in detail in a project plan or protocol. Reviews of healthcare interventions follow the PRISMA guidelines for the minimum set of items to report in the methods section. But what else should be included? It’s a good idea to consider what readers will want to know about the review methods and whether the journal you’re planning to submit the work to has expectations on the reporting of methods. Finding out in advance will help you to plan what to include.


Describe what happened

While the research plan sets out what you intend to do, the methods section is a write-up of what actually happened. It’s not a simple case of rewriting the plan in the past tense – you will also need to discuss and justify deviations from the plan and describe the handling of issues that were unforeseen at the time the plan was written. For this reason, it is useful to make detailed notes before, during, and after the review is completed. Relying on memory alone risks losing valuable information, and trawling through emails when the deadline is looming can be frustrating and time-consuming!

Keep it brief

The methods section should be succinct but include all the noteworthy information. This can be a difficult balance to achieve. A useful strategy is to aim for a brief description that signposts the reader to a separate section or sections of supporting information. This could include datasets, a flowchart to show what happened to the excluded studies, a collection of search strategies, and tables containing detailed information about the studies. This separation keeps the review short and simple while enabling the reader to drill down to the detail as needed. And if the methods follow a well-known or standard process, it might suffice to say so and give a reference, rather than describe the process at length.

Follow a structure

A clear structure provides focus. Use of descriptive headings keeps the writing on track and helps the reader get to key information quickly. What should the structure of the methods section look like? As always, a lot depends on the type of review but it will certainly contain information relating to the following areas:

  • Selection criteria ⭕
  • Search 🕵🏾‍♀️
  • Data collection and analysis 👩‍💻
  • Study quality and risk of bias ⚖️

Let’s look at each of these in turn.

1. Selection criteria ⭕

The criteria for including and excluding studies are listed here. This includes detail about the types of studies, the types of participants, the types of interventions, and the types of outcomes and how they were measured.

2. Search 🕵🏾‍♀️

Comprehensive reporting of the search is important because this means it can be evaluated and replicated. The search strategies are included in the review, along with details of the databases searched. It’s also important to list any restrictions on the search (for example, language), describe how resources other than electronic databases were searched (for example, non-indexed journals), and give the date that the searches were run. The PRISMA-S extension provides guidance on reporting literature searches.


Systematic reviewer pro-tip:

Copy and paste the search strategy to avoid introducing typos.

3. Data collection and analysis 👩‍💻

This section describes:

  • how studies were selected for inclusion in the review
  • how study data were extracted from the study reports
  • how study data were combined for analysis and synthesis

To describe how studies were selected for inclusion, review teams outline the screening process. Covidence uses reviewers’ decision data to automatically populate a PRISMA flow diagram for this purpose. Covidence can also calculate Cohen’s kappa to enable review teams to report the level of agreement among individual reviewers during screening.
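Cohen’s kappa itself is straightforward to compute if you ever need to report it outside a review tool: it compares the observed agreement between two reviewers with the agreement expected by chance from their individual decision rates. A minimal sketch (the function name and the example decisions are illustrative, not part of any Covidence API):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two reviewers' decisions on the same set of studies."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each reviewer's marginal proportions
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two reviewers screening four studies; they agree on three of the four
a = ["include", "include", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.5
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; values around 0.5, as here, suggest moderate agreement worth discussing in the methods.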

To describe how study data were extracted from the study reports, reviewers outline the form that was used, any pilot-testing that was done, and the items that were extracted from the included studies. An important piece of information to include here is the process used to resolve conflict among the reviewers. Covidence’s data extraction tool saves reviewers’ comments and notes in the system as they work. This keeps the information in one place for easy retrieval ⚡.

To describe how study data were combined for analysis and synthesis, reviewers outline the type of synthesis (narrative or quantitative, for example), the methods for grouping data, the challenges that came up, and how these were dealt with. If the review includes a meta-analysis, it will detail how this was performed and how the treatment effects were measured.
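As a concrete illustration of the quantitative side, the most common pooling approach weights each study’s effect estimate by the inverse of its variance, so more precise studies count for more. A minimal fixed-effect sketch (the function name and the numbers are hypothetical; real reviews typically use dedicated software such as RevMan or the R package metafor, and often a random-effects model):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooled estimate with a 95% confidence interval."""
    weights = [1.0 / v for v in variances]  # weight = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))      # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two hypothetical studies with mean-difference estimates 0.2 and 0.4,
# each with variance 0.04
estimate, ci = fixed_effect_pool([0.2, 0.4], [0.04, 0.04])
```

With equal variances the pooled estimate is simply the mean (0.3 here); unequal variances pull it towards the more precise study. A random-effects model would additionally incorporate between-study heterogeneity into the weights.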

4. Study quality and risk of bias ⚖️

Because the results of systematic reviews can be affected by many types of bias, reviewers make every effort to minimise it and to show the reader that the methods they used were appropriate. This section describes the methods used to assess study quality and an assessment of the risk of bias across a range of domains. 

Steps to assess the risk of bias in studies include looking at how study participants were assigned to treatment groups and whether patients and/or study assessors were blinded to the treatment given. Reviewers also report their assessment of the risk of bias due to missing outcome data, whether that is due to participant drop-out or non-reporting of the outcomes by the study authors.

Covidence’s default template for assessing study quality is Cochrane’s risk of bias tool, but it is also possible to start from scratch and build a tool with a set of custom domains if you prefer.

Careful planning, clear writing, and a structured approach are key to a good methods section. A methodologist will be able to refer review teams to examples of good methods reporting in the literature. Covidence helps reviewers to screen references, extract data and complete risk of bias tables quickly and efficiently. Sign up for a free trial today!


Laura Mellor. Portsmouth, UK



Cochrane Training

Chapter 1: Starting a review

Toby J Lasserson, James Thomas, Julian PT Higgins

Key Points:

  • Systematic reviews address a need for health decision makers to be able to access high quality, relevant, accessible and up-to-date information.
  • Systematic reviews aim to minimize bias through the use of pre-specified research questions and methods that are documented in protocols, and by basing their findings on reliable research.
  • Systematic reviews should be conducted by a team that includes domain expertise and methodological expertise, and whose members are free of potential conflicts of interest.
  • People who might make – or be affected by – decisions around the use of interventions should be involved in important decisions about the review.
  • Good data management, project management and quality assurance mechanisms are essential for the completion of a successful systematic review.

Cite this chapter as: Lasserson TJ, Thomas J, Higgins JPT. Chapter 1: Starting a review. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

1.1 Why do a systematic review?

Systematic reviews were developed out of a need to ensure that decisions affecting people’s lives can be informed by an up-to-date and complete understanding of the relevant research evidence. With the volume of research literature growing at an ever-increasing rate, it is impossible for individual decision makers to assess this vast quantity of primary research to enable them to make the most appropriate healthcare decisions that do more good than harm. By systematically assessing this primary research, systematic reviews aim to provide an up-to-date summary of the state of research knowledge on an intervention, diagnostic test, prognostic factor or other health or healthcare topic. Systematic reviews address the main problem with ad hoc searching and selection of research, namely that of bias. Just as primary research studies use methods to avoid bias, so should summaries and syntheses of that research.

A systematic review attempts to collate all the empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman et al 1992, Oxman and Guyatt 1993). Systematic review methodology, pioneered and developed by Cochrane, sets out a highly structured, transparent and reproducible methodology (Chandler and Hopewell 2013). This involves: the a priori specification of a research question; clarity on the scope of the review and which studies are eligible for inclusion; making every effort to find all relevant research and to ensure that issues of bias in included studies are accounted for; and analysing the included studies in order to draw conclusions based on all the identified research in an impartial and objective way.

This Handbook is about systematic reviews on the effects of interventions, and specifically about methods used by Cochrane to undertake them. Cochrane Reviews use primary research to generate new knowledge about the effects of an intervention (or interventions) used in clinical, public health or policy settings. They aim to provide users with a balanced summary of the potential benefits and harms of interventions and give an indication of how certain they can be of the findings. They can also compare the effectiveness of different interventions with one another and so help users to choose the most appropriate intervention in particular situations. The primary purpose of Cochrane Reviews is therefore to inform people making decisions about health or health care.

Systematic reviews are important for other reasons. New research should be designed or commissioned only if it does not unnecessarily duplicate existing research (Chalmers et al 2014). Therefore, a systematic review should typically be undertaken before embarking on new primary research. Such a review will identify current and ongoing studies, as well as indicate where specific gaps in knowledge exist, or evidence is lacking; for example, where existing studies have not used outcomes that are important to users of research (Macleod et al 2014). A systematic review may also reveal limitations in the conduct of previous studies that might be addressed in the new study or studies.

Systematic reviews are important, often rewarding and, at times, exciting research projects. They offer the opportunity for authors to make authoritative statements about the extent of human knowledge in important areas and to identify priorities for further research. They sometimes cover issues high on the political agenda and receive attention from the media. Conducting research with these impacts is not without its challenges, however, and completing a high-quality systematic review is often demanding and time-consuming. In this chapter we introduce some of the key considerations for potential review authors who are about to start a systematic review.

1.2 What is the review question?

Getting the research question right is critical for the success of a systematic review. Review authors should ensure that the review addresses an important question to those who are expected to use and act upon its conclusions.

We discuss the formulation of questions in detail in Chapter 2. For a question about the effects of an intervention, the PICO approach is usually used, which is an acronym for Population, Intervention, Comparison(s) and Outcome. Reviews may have additional questions, for example about how interventions were implemented, economic issues, equity issues or patient experience.

To ensure that the review addresses a relevant question in a way that benefits users, it is important to ensure wide input. In most cases, question formulation should therefore be informed by people with various relevant – but potentially different – perspectives (see Chapter 2, Section 2.4).

1.3 Who should do a systematic review?

Systematic reviews should be undertaken by a team. Indeed, Cochrane will not publish a review that is proposed to be undertaken by a single person. Working as a team not only spreads the effort, but ensures that tasks such as the selection of studies for eligibility, data extraction and rating the certainty of the evidence will be performed by at least two people independently, minimizing the likelihood of errors. First-time review authors are encouraged to work with others who are experienced in the process of systematic reviews and to attend relevant training.

Review teams must include expertise in the topic area under review. Topic expertise should not be overly narrow, to ensure that all relevant perspectives are considered. Perspectives from different disciplines can help to avoid assumptions or terminology stemming from an over-reliance on a single discipline. Review teams should also include expertise in systematic review methodology, including statistical expertise.

Arguments have been made that methodological expertise is sufficient to perform a review, and that content expertise should be avoided because of the risk of preconceptions about the effects of interventions (Gøtzsche and Ioannidis 2012). However, it is important that both topic and methodological expertise are present to ensure a good mix of skills, knowledge and objectivity, because topic expertise provides important insight into the implementation of the intervention(s), the nature of the condition being treated or prevented, the relationships between outcomes measured, and other factors that may have an impact on decision making.

A Cochrane Review should represent an independent assessment of the evidence, and avoiding financial and non-financial conflicts of interest often requires careful management. It is important to consider whether there are any relevant interests that may constitute a conflict of interest. There are situations where employment, holding of patents and other financial support should prevent people joining an author team. Funding of Cochrane Reviews by commercial organizations with an interest in the outcome of the review is not permitted. To ensure that any issues are identified early in the process, authors planning Cochrane Reviews should consult the Conflict of Interest Policy. Authors should make complete declarations of interest before registration of the review, refresh these annually thereafter until publication, and update them just prior to publication of the protocol and the review. For authors of review updates, this must be done at the time of the decision to update the review, annually thereafter until publication, and just prior to publication. Authors should also update their declarations of interest at any point when their circumstances change.

1.3.1 Involving consumers and other stakeholders

Because the priorities of decision makers and consumers may be different from those of researchers, it is important that review authors consider carefully what questions are important to these different stakeholders. Systematic reviews are more likely to be relevant to a broad range of end users if they are informed by the involvement of people with a range of experiences, in terms of both the topic and the methodology (Thomas et al 2004, Rees and Oliver 2017). Engaging consumers and other stakeholders, such as policy makers, research funders and healthcare professionals, increases relevance, promotes mutual learning and improved uptake, and decreases research waste.

Mapping out all potential stakeholders specific to the review question is a helpful first step to considering who might be invited to be involved in a review. Stakeholders typically include: patients and consumers; consumer advocates; policy makers and other public officials; guideline developers; professional organizations; researchers; funders of health services and research; healthcare practitioners, and, on occasion, journalists and other media professionals. Balancing seniority, credibility within the given field, and diversity should be considered. Review authors should also take account of the needs of resource-poor countries and regions in the review process (see Chapter 16) and invite appropriate input on the scope of the review and the questions it will address.

It is established good practice to ensure that consumers are involved and engaged in health research, including systematic reviews. Cochrane uses the term ‘consumers’ to refer to a wide range of people, including patients or people with personal experience of a healthcare condition, carers and family members, representatives of patients and carers, service users and members of the public. In 2017, a Statement of Principles for consumer involvement in Cochrane was agreed. This seeks to change the culture of research practice to one where both consumers and other stakeholders are joint partners in research from planning, conduct, and reporting to dissemination. Systematic reviews that have had consumer involvement should be more directly applicable to decision makers than those that have not (see online Chapter II).

1.3.2 Working with consumers and other stakeholders

Methods for working with consumers and other stakeholders include surveys, workshops, focus groups and involvement in advisory groups. Decisions about what methods to use will typically be based on resource availability, but review teams should be aware of the merits and limitations of such methods. Authors will need to decide who to involve and how to provide adequate support for their involvement. This can include financial reimbursement, the provision of training, and stating clearly expectations of involvement, possibly in the form of terms of reference.

While a small number of consumers or other stakeholders may be part of the review team and become co-authors of the subsequent review, it is sometimes important to bring in a wider range of perspectives and to recognize that not everyone has the capacity or interest in becoming an author. Advisory groups offer a convenient approach to involving consumers and other relevant stakeholders, especially for topics in which opinions differ. Important points to ensure successful involvement include the following.

  • The review team should co-ordinate the input of the advisory group to inform key review decisions.
  • The advisory group’s input should continue throughout the systematic review process to ensure relevance of the review to end users is maintained.
  • Advisory group membership should reflect the breadth of the review question, and consideration should be given to involving vulnerable and marginalized people (Steel 2004) to ensure that conclusions on the value of the interventions are well-informed and applicable to all groups in society (see Chapter 16).

Templates such as terms of reference, job descriptions, or person specifications for an advisory group help to ensure clarity about the task(s) required and are available from INVOLVE. The website also gives further information on setting up and organizing advisory groups. See also the Cochrane training website for further resources to support consumer involvement.

1.4 The importance of reliability

Systematic reviews aim to be an accurate representation of the current state of knowledge about a given issue. As understanding improves, the review can be updated. Nevertheless, it is important that the review itself is accurate at the time of publication. There are two main reasons for this imperative for accuracy. First, health decisions that affect people’s lives are increasingly taken based on systematic review findings. Current knowledge may be imperfect, but decisions will be better informed when taken in the light of the best of current knowledge. Second, systematic reviews form a critical component of legal and regulatory frameworks; for example, drug licensing or insurance coverage. Here, systematic reviews also need to hold up as auditable processes for legal examination. As systematic reviews need both to be correct and to be seen to be correct, detailed evidence-based methods have been developed to guide review authors as to the most appropriate procedures to follow, and what information to include in their reports to aid auditability.

1.4.1 Expectations for the conduct and reporting of Cochrane Reviews

Cochrane has developed methodological expectations for the conduct, reporting and updating of systematic reviews of interventions (MECIR) and their plain language summaries (Plain Language Expectations for Authors of Cochrane Summaries; PLEACS). Developed collaboratively by methodologists and Cochrane editors, they are intended to describe the desirable attributes of a Cochrane Review. The expectations are not all relevant at the same stage of review conduct, so care should be taken to identify those that apply at specific points during the planning, conduct, reporting and updating of the review.

Each expectation has a title, a rationale and an elaboration. For the purposes of publication of a review with Cochrane, each has the status of either ‘mandatory’ or ‘highly desirable’. Items described as mandatory are expected to be applied, and if they are not then an appropriate justification should be provided; failure to implement such items may be used as a basis for deciding not to publish a review in the Cochrane Database of Systematic Reviews (CDSR). Items described as highly desirable should generally be implemented, but there are reasonable exceptions and justifications are not required.

All MECIR expectations for the conduct of a review are presented in the relevant chapters of this Handbook. Expectations for reporting of completed reviews (including PLEACS) are described in online Chapter III. The recommendations provided in the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement have been incorporated into the Cochrane reporting expectations, ensuring compliance with the PRISMA recommendations and summarizing attributes of reporting that should allow a full assessment of the methods and findings of the review (Moher et al 2009).

1.5 Protocol development

Preparing a systematic review is complex and involves many judgements. To minimize the potential for bias in the review process, these judgements should be made as far as possible in ways that do not depend on the findings of the studies included in the review. Review authors’ prior knowledge of the evidence may, for example, influence the definition of a systematic review question, the choice of criteria for study eligibility, or the pre-specification of intervention comparisons and outcomes to analyse. It is important that the methods to be used should be established and documented in advance (see MECIR Box 1.5.a, MECIR Box 1.5.b and MECIR Box 1.5.c).

Publication of a protocol for a review that is written without knowledge of the available studies reduces the impact of review authors’ biases, promotes transparency of methods and processes, reduces the potential for duplication, allows peer review of the planned methods before they have been completed, and offers an opportunity for the review team to plan resources and logistics for undertaking the review itself. All chapters in the Handbook should be consulted when drafting the protocol. Since systematic reviews are by their nature retrospective, an element of knowledge of the evidence is often inevitable. This is one reason why non-content experts such as methodologists should be part of the review team (see Section 1.3). Two exceptions to the retrospective nature of a systematic review are a meta-analysis of a prospectively planned series of trials and some living systematic reviews, as described in Chapter 22.

The review question should determine the methods used in the review, and not vice versa. The question may concern a relatively straightforward comparison of one treatment with another; or it may necessitate plans to compare different treatments as part of a network meta-analysis, or assess differential effects of an intervention in different populations or delivered in different ways.

The protocol sets out the context in which the review is being conducted. It presents an opportunity to develop ideas that are foundational for the review. This concerns, most explicitly, definition of the eligibility criteria such as the study participants and the choice of comparators and outcomes. The eligibility criteria may also be defined following the development of a logic model (or an articulation of the aspects of an extant logic model that the review is addressing) to explain how the intervention might work (see Chapter 2, Section 2.5.1).

MECIR Box 1.5.a Relevant expectations for conduct of intervention reviews

A key purpose of the protocol is to make plans to minimize bias in the eventual findings of the review. Reliable synthesis of available evidence requires a planned, systematic approach. Threats to the validity of systematic reviews can come from the studies they include or the process by which reviews are conducted. Biases within the studies can arise from the method by which participants are allocated to the intervention groups, awareness of intervention group assignment, and the collection, analysis and reporting of data. Methods for examining these issues should be specified in the protocol. Review processes can generate bias through a failure to identify an unbiased (and preferably complete) set of studies, and poor quality assurance throughout the review. The availability of research may be influenced by the nature of the results (i.e. reporting bias). To reduce the impact of this form of bias, searching may need to include unpublished sources of evidence (Dwan et al 2013) (MECIR Box 1.5.b).

MECIR Box 1.5.b Relevant expectations for the conduct of intervention reviews

Developing a protocol for a systematic review has benefits beyond reducing bias. Investing effort in designing a systematic review will make the process more manageable and help to inform key priorities for the review. Defining the question, referring to it throughout, and using appropriate methods to address the question focuses the analysis and reporting, ensuring the review is most likely to inform treatment decisions for funders, policy makers, healthcare professionals and consumers. Details of the planned analyses, including investigations of variability across studies, should be specified in the protocol, along with methods for interpreting the results through the systematic consideration of factors that affect confidence in estimates of intervention effect (MECIR Box 1.5.c).

MECIR Box 1.5.c Relevant expectations for conduct of intervention reviews

While the intention should be that a review will adhere to the published protocol, changes in a review protocol are sometimes necessary. This is also the case for a protocol for a randomized trial, which must sometimes be changed to adapt to unanticipated circumstances such as problems with participant recruitment, data collection or event rates. While every effort should be made to adhere to a predetermined protocol, this is not always possible or appropriate. It is important, however, that changes in the protocol should not be made based on how they affect the outcome of the research study, whether it is a randomized trial or a systematic review. Post hoc decisions made when the impact on the results of the research is known, such as excluding selected studies from a systematic review, or changing the statistical analysis, are highly susceptible to bias and should therefore be avoided unless there are reasonable grounds for doing so.

Enabling access to a protocol through publication (all Cochrane Protocols are published in the CDSR) and registration on the PROSPERO register of systematic reviews reduces duplication of effort and research waste, and promotes accountability. Changes to the methods outlined in the protocol should be transparently declared.

This Handbook provides details of the systematic review methods developed or selected by Cochrane. They are intended to address the need for rigour, comprehensiveness and transparency in preparing a Cochrane systematic review. All relevant chapters – including those describing procedures to be followed in the later stages of the review – should be consulted during the preparation of the protocol. A more specific description of the structure of Cochrane Protocols is provided in online Chapter II.

1.6 Data management and quality assurance

Systematic reviews should be replicable, and retaining a record of the inclusion decisions, data collection, transformations or adjustment of data will help to establish a secure and retrievable audit trail. They can be operationally complex projects, often involving large research teams operating in different sites across the world. Good data management processes are essential to ensure that data are not inadvertently lost, facilitating the identification and correction of errors and supporting future efforts to update and maintain the review. Transparent reporting of review decisions enables readers to assess the reliability of the review for themselves.

Review management software, such as Covidence and EPPI-Reviewer, can be used to assist data management and maintain consistent and standardized records of decisions made throughout the review. These tools offer a central repository for review data that can be accessed remotely throughout the world by members of the review team. They record independent assessment of studies for inclusion, risk of bias and extraction of data, enabling checks to be made later in the process if needed. Research has shown that even experienced reviewers make mistakes and disagree with one another on risk-of-bias assessments, so it is particularly important to maintain quality assurance here, despite its cost in terms of author time. As more sophisticated information technology tools begin to be deployed in reviews (see Chapter 4, Section 4.6.6.2 and Chapter 22, Section 22.2.4), it is increasingly apparent that all review data – including the initial decisions about study eligibility – have value beyond the scope of the individual review. For example, review updates can be made more efficient through (semi-) automation when data from the original review are available for machine learning.

1.7 Chapter information

Authors: Toby J Lasserson, James Thomas, Julian PT Higgins

Acknowledgements: This chapter builds on earlier versions of the Handbook . We would like to thank Ruth Foxlee, Richard Morley, Soumyadeep Bhaumik, Mona Nasser, Dan Fox and Sally Crowe for their contributions to Section 1.3 .

Funding: JT is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. JPTH is a member of the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol. JPTH received funding from National Institute for Health Research Senior Investigator award NF-SI-0617-10145. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.



Systematic Reviews: Step 8: Write the Review

Created by health science librarians.


About Step 8: Write the Review



In Step 8, you will write an article or a paper about your systematic review.  It will likely have five sections: introduction, methods, results, discussion, and conclusion.  You will: 

  • Review the reporting standards you will use, such as PRISMA. 
  • Gather your completed data tables and PRISMA chart. 
  • Write the Introduction to the topic and your study, Methods of your research, Results of your research, and Discussion of your results.
  • Write an Abstract describing your study and a Conclusion summarizing your paper. 
  • Cite the studies included in your systematic review and any other articles you may have used in your paper. 
  • If you wish to publish your work, choose a target journal for your article.

The PRISMA Checklist will help you report the details of your systematic review. Your paper will also include a PRISMA chart that is an image of your research process. 

The sections below show how each item applies to Step 8: Write the Review.

Reporting your review with PRISMA

To write your review, you will need the data from your PRISMA flow diagram .  Review the PRISMA checklist to see which items you should report in your methods section.

Managing your review with Covidence

When you screen in Covidence, it will record the numbers you need for your PRISMA flow diagram from duplicate removal through inclusion of studies.  You may need to add additional information, such as the number of references from each database, citations you find through grey literature or other searching methods, or the number of studies found in your previous work if you are updating a systematic review.
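As an illustration of the tallies a PRISMA 2020 flow diagram needs, here is a minimal Python sketch. All of the sources and counts below are made up for the example; in practice, a tool like Covidence records these numbers for you during screening.

```python
# Minimal sketch (hypothetical numbers) of the counts a PRISMA 2020
# flow diagram reports for the "databases and registers" pathway.

# Records identified per source -- illustrative values only.
identified = {"PubMed": 412, "Embase": 530, "CINAHL": 187}
duplicates_removed = 164

records_screened = sum(identified.values()) - duplicates_removed
records_excluded_at_screening = 801          # excluded on title/abstract
reports_assessed_full_text = records_screened - records_excluded_at_screening

# Reasons for exclusion at full text must be itemized in the diagram.
reports_excluded_full_text = {"wrong population": 22,
                              "wrong study design": 17,
                              "no outcome of interest": 9}
studies_included = reports_assessed_full_text - sum(reports_excluded_full_text.values())

print(records_screened)            # 965
print(reports_assessed_full_text)  # 164
print(studies_included)            # 116
```

Keeping the arithmetic explicit like this makes it easy to spot when the boxes of the diagram do not add up, a common error flagged during peer review.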

How a librarian can help with Step 8

A librarian can advise you on the process of organizing and writing up your systematic review, including: 

  • Applying the PRISMA reporting templates and the level of detail to include for each element
  • How to report a systematic review search strategy and your review methodology in the completed review
  • How to use prior published reviews to guide you in organizing your manuscript 

Reporting standards & guidelines

Be sure to reference reporting standards when writing your review. This helps ensure that you communicate essential components of your methods, results, and conclusions. There are a number of tools that can be used to ensure compliance with reporting guidelines. A few review-writing resources are listed below.

  • Cochrane Handbook - Chapter 15: Interpreting results and drawing conclusions
  • JBI Manual for Evidence Synthesis - Chapter 12.3 The systematic review
  • PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) The aim of the PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses.

Tools for writing your review

  • RevMan (Cochrane Training)
  • Methods Wizard (Systematic Review Accelerator) The Methods Wizard is part of the Systematic Review Accelerator created by Bond University and the Institute for Evidence-Based Healthcare.
  • UNC HSL Systematic Review Manuscript Template Systematic review manuscript template(.doc) adapted from the PRISMA 2020 checklist. This document provides authors with template for writing about their systematic review. Each table contains a PRISMA checklist item that should be written about in that section, the matching PRISMA Item number, and a box where authors can indicate if an item has been completed. Once text has been added, delete any remaining instructions and the PRISMA checklist tables from the end of each section.
  • The PRISMA 2020 statement: an updated guideline for reporting systematic reviews The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies.
  • PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews This document is intended to enhance the use, understanding and dissemination of the PRISMA 2020 Statement. Through examples and explanations, the meaning and rationale for each checklist item are presented.

The PRISMA checklist

The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) is a 27-item checklist used to improve transparency in systematic reviews. These items cover all aspects of the manuscript, including title, abstract, introduction, methods, results, discussion, and funding. The PRISMA checklist can be downloaded in PDF or Word files.

  • PRISMA 2020 Checklists Download the 2020 PRISMA Checklists in Word or PDF formats or download the expanded checklist (PDF).

The PRISMA flow diagram

The PRISMA Flow Diagram visually depicts the flow of studies through each phase of the review process and can be downloaded as a Word file.

  • PRISMA 2020 Flow Diagrams The flow diagram depicts the flow of information through the different phases of a systematic review. It maps out the number of records identified, included and excluded, and the reasons for exclusions. Different templates are available depending on the type of review (new or updated) and sources used to identify studies.

Documenting grey literature and/or hand searches

If you have also searched additional sources, such as professional organization websites or cited and citing references, document your grey literature search using the appropriate template: version 1 (PRISMA 2020 flow diagram for new systematic reviews, including searches of databases, registers, and other sources) or version 2 (PRISMA 2020 flow diagram for updated systematic reviews, including searches of databases, registers, and other sources).

Complete the boxes documenting your database searches ("Identification of studies via databases and registers") according to the PRISMA flow diagram instructions. Then complete the boxes documenting your grey literature and/or hand searches on the right side of the template ("Identification of studies via other methods") using the steps below.

Need help completing the PRISMA flow diagram?

There are different PRISMA flow diagram templates for new and updated reviews, as well as different templates for reviews with and without grey literature searches. Be sure you download the correct template to match your review methods, then follow the steps below for each portion of the diagram you have available.

View the step-by-step explanation of the PRISMA flow diagram

Step 1: Preparation. Download the appropriate template: version 1 (PRISMA 2020 flow diagram for new systematic reviews, searches of databases and registers only) or version 2 (PRISMA 2020 flow diagram for updated systematic reviews, searches of databases and registers only).

View the step-by-step explanation of the grey literature & hand searching portion of the PRISMA flow diagram

Step 1: Preparation. Download the appropriate template: version 1 (PRISMA 2020 flow diagram for new systematic reviews, searches of databases, registers, and other sources) or version 2 (PRISMA 2020 flow diagram for updated systematic reviews, searches of databases, registers, and other sources).

View the step-by-step explanation of the review update portion of the PRISMA flow diagram

Step 1: Preparation. Download the appropriate template: version 2 (PRISMA 2020 flow diagram for updated systematic reviews, searches of databases and registers only) or version 2 (PRISMA 2020 flow diagram for updated systematic reviews, searches of databases, registers, and other sources).

For more information about updating your systematic review, see the box Updating Your Review? on the Step 3: Conduct Literature Searches page of the guide.

Sections of a Scientific Manuscript

Scientific articles often follow the IMRaD format: Introduction, Methods, Results, and Discussion.  You will also need a title and an abstract to summarize your research.

You can read more about scientific writing through the library guides below.

  • Structure of Scholarly Articles & Peer Review
      • Explains the standard parts of a medical research article
      • Compares scholarly journals, professional trade journals, and magazines
      • Explains peer review and how to find peer reviewed articles and journals
  • Writing in the Health Sciences (For Students and Instructors)
  • Citing & Writing Tools & Guides Includes links to guides for popular citation managers such as EndNote, Sciwheel, Zotero; copyright basics; APA & AMA Style guides; Plagiarism & Citing Sources; Citing & Writing: How to Write Scientific Papers

Sections of a Systematic Review Manuscript

Systematic reviews follow the same structure as original research articles, but you will need to report on your search instead of on details like the participants or sampling. Sections of your manuscript are shown as bold headings in the PRISMA checklist.

Refer to the PRISMA checklist for more information.

Consider including a Plain Language Summary (PLS) when you publish your systematic review. Like an abstract, a PLS gives an overview of your study, but is specifically written and formatted to be easy for non-experts to understand. 

Tips for writing a PLS:

  • Use clear headings e.g. "why did we do this study?"; "what did we do?"; "what did we find?"
  • Use active voice, e.g. "we searched for articles in 5 databases" instead of "5 databases were searched"
  • Consider need-to-know vs. nice-to-know: what is most important for readers to understand about your study? Be sure to provide the most important points without misrepresenting your study or misleading the reader. 
  • Keep it short: Many journals recommend keeping your plain language summary less than 250 words. 
  • Check journal guidelines: Your journal may have specific guidelines about the format of your plain language summary and when you can publish it. Look at journal guidelines before submitting your article. 

Learn more about Plain Language Summaries: 

  • Rosenberg, A., Baróniková, S., & Feighery, L. (2021). Open Pharma recommendations for plain language summaries of peer-reviewed medical journal publications. Current Medical Research and Opinion, 37(11), 2015–2016.  https://doi.org/10.1080/03007995.2021.1971185
  • Lobban, D., Gardner, J., & Matheis, R. (2021). Plain language summaries of publications of company-sponsored medical research: what key questions do we need to address? Current Medical Research and Opinion, 1–12. https://doi.org/10.1080/03007995.2021.1997221
  • Cochrane Community. (2022, March 21). Updated template and guidance for writing Plain Language Summaries in Cochrane Reviews now available. https://community.cochrane.org/news/updated-template-and-guidance-writing-plain-language-summaries-cochrane-reviews-now-available
  • You can also look at our Health Literacy LibGuide:  https://guides.lib.unc.edu/healthliteracy 

How to Approach Writing a Background Section

What Makes a Good Discussion Section

Writing Up Risk of Bias

Developing Your Implications for Research Section

  • Last Updated: May 16, 2024 3:24 PM
  • URL: https://guides.lib.unc.edu/systematic-reviews


Systematic Review


Forms and templates


  • PICO Template
  • Inclusion/Exclusion Criteria
  • Database Search Log
  • Review Matrix
  • Cochrane Tool for Assessing Risk of Bias in Included Studies

   • PRISMA Flow Diagram  - Record the numbers of retrieved references and included/excluded studies. You can use the Create Flow Diagram tool to automate the process.

   •  PRISMA Checklist - Checklist of items to include when reporting a systematic review or meta-analysis

PRISMA 2020 and PRISMA-S: Common Questions on Tracking Records and the Flow Diagram

  • PROSPERO Template
  • Manuscript Template
  • Steps of SR (text)
  • Steps of SR (visual)
  • Steps of SR (PIECES)

Adapted from  A Guide to Conducting Systematic Reviews: Steps in a Systematic Review by Cornell University Library

Source: Cochrane Consumers and Communications  (infographics are free to use and licensed under Creative Commons )

Check the following visual resources titled " What Are Systematic Reviews?"

  • Video  with closed captions available
  • Animated Storyboard
  • Last Updated: May 8, 2024 1:44 PM
  • URL: https://lib.guides.umd.edu/SR

Systematic Reviews and Meta Analysis


Systematic review Q & A

What is a systematic review?

A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting, and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis, and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single intervention or a small set of related interventions, exposures, or outcomes simplifies the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed-upon set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague, and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review. This framework allows iterative searching over a reduced number of data sources and does not require assessing individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets), but it may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Counting creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. In addition, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.
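The rule of thumb above (100-200 records per screener-hour, two independent screeners) can be turned into a quick workload estimate. The `screening_hours` helper and the 5,000-record example below are hypothetical illustrations, not a standard planning tool.

```python
# Rough person-hour estimate for title/abstract screening, assuming
# each screener gets through roughly 100-200 records per hour and
# two screeners review every record independently (illustrative only).

def screening_hours(n_records, records_per_hour=(100, 200), n_screeners=2):
    """Return (pessimistic, optimistic) total person-hours of screening."""
    low_rate, high_rate = records_per_hour
    return (n_screeners * n_records / low_rate,
            n_screeners * n_records / high_rate)

worst, best = screening_hours(5000)
print(f"{best:.0f}-{worst:.0f} person-hours")  # 50-100 person-hours
```

Even a modest 5,000-record results set therefore implies 50-100 person-hours for the first screening pass alone, before full-text review, extraction, or analysis.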

How can I know if my topic has already been reviewed?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
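The two query patterns above can also be generated programmatically when you need to run the same check across many topics. The `scope_query` function below is a hypothetical helper, not part of any PubMed client library.

```python
# Hypothetical helper that appends the PubMed filters shown above to a
# quoted topic phrase: systematic[sb] selects the (noisy) systematic
# review subset, while systematic[ti] requires the word in the title.

def scope_query(topic, title_only=False):
    """Build a PubMed query limiting a topic to systematic reviews."""
    tag = "ti" if title_only else "sb"
    return f'"{topic}" AND systematic[{tag}]'

print(scope_query("neoadjuvant chemotherapy"))
# "neoadjuvant chemotherapy" AND systematic[sb]
print(scope_query("neoadjuvant chemotherapy", title_only=True))
# "neoadjuvant chemotherapy" AND systematic[ti]
```

Remember the caveat above: the title-word version will miss Cochrane reviews, so check the Cochrane Database of Systematic Reviews separately.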

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO , a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library .

  • Last Updated: Feb 26, 2024 3:17 PM
  • URL: https://guides.library.harvard.edu/meta-analysis


Systematic Review | Definition, Examples & Guide

Published on 15 June 2022 by Shaun Turney . Revised on 17 October 2022.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesise all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, one team of reviewers answered the question 'What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?'

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs meta-analysis
  • Systematic review vs literature review
  • Systematic review vs scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce research bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesise the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesising all available evidence and evaluating the quality of the evidence. Synthesising means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesise the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesise results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
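To illustrate what combining results to estimate an effect size means in practice, here is a minimal fixed-effect, inverse-variance pooling sketch. The study estimates are made up for the example, and real reviews use dedicated software such as RevMan rather than hand-rolled code.

```python
import math

# Fixed-effect inverse-variance meta-analysis: each study contributes
# its effect estimate weighted by 1/variance (illustrative numbers only).
studies = [
    # (effect estimate, standard error)
    (0.42, 0.21),
    (0.35, 0.15),
    (0.58, 0.30),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled effect {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Note how the most precise study (smallest standard error) dominates the pooled estimate, which is exactly the behaviour inverse-variance weighting is designed to produce.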

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarise and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimise bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimise research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinised by others.
  • They’re thorough : they summarise all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT.

  • Type of study design(s)

In the eczema example, the question components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomised controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
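The rearrangement of components into a question is mechanical enough to sketch in code. The `pico` dictionary below is a hypothetical illustration that fills the "effectiveness of I versus C for O in P" template with the eczema example's components.

```python
# Hypothetical sketch: fill the PICO question template
# "What is the effectiveness of I versus C for O in P?"
# with the eczema example's components.

pico = {
    "P": "patients with eczema",
    "I": "probiotics",
    "C": "no treatment, a placebo, or a non-probiotic treatment",
    "O": "reducing eczema symptoms and improving quality of life",
}

question = (f"What is the effectiveness of {pico['I']} versus {pico['C']} "
            f"for {pico['O']} in {pico['P']}?")
print(question)
```

Writing the components down separately like this, before assembling the question, also makes them directly reusable as the selection criteria in your protocol.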

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesise the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Grey literature: Grey literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of grey literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of grey literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
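The database-search advice above — multiple synonyms per concept, combined with Boolean operators — can be sketched as a small query builder. The synonym lists below are invented for illustration:

```python
def boolean_query(concepts):
    """Combine synonym lists into a Boolean search string:
    synonyms joined with OR, concept groups joined with AND.
    Multi-word terms are quoted as phrases."""
    groups = ["(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"
              for terms in concepts]
    return " AND ".join(groups)

# Illustrative synonym groups for the probiotics-and-eczema question.
query = boolean_query([
    ["probiotic", "probiotics", "lactobacillus"],
    ["eczema", "atopic dermatitis"],
])
print(query)
# (probiotic OR probiotics OR lactobacillus) AND (eczema OR "atopic dermatitis")
```

Building the query from concept groups also makes it easy to adapt the same search to databases with different syntax.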

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Grey literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
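Inter-rater reliability between the two screeners is commonly quantified with Cohen’s kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with invented include/exclude decisions:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude decisions."""
    n = len(rater_a)
    # Observed proportion of decisions on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal rates.
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Invented screening decisions for eight articles.
a = ["in", "in", "out", "out", "in", "out", "in", "out"]
b = ["in", "in", "out", "in", "in", "out", "in", "out"]
print(round(cohens_kappa(a, b), 2))  # 0.75
```

A kappa near 1 indicates strong agreement; values much below ~0.6 suggest the criteria need to be clarified before screening continues.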

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarise what you did using a PRISMA flow diagram.
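The bookkeeping behind a PRISMA flow diagram is just consistent arithmetic over the screening stages. A sketch with invented numbers:

```python
# Record-keeping for a PRISMA flow diagram (all numbers are invented).
identified = 450                      # records from databases and other sources
duplicates = 60                       # removed before screening
screened = identified - duplicates    # titles/abstracts screened

excluded_on_abstract = 310
full_text_assessed = screened - excluded_on_abstract

# Full-text exclusions, each with a recorded reason.
excluded_full_text = {"wrong study design": 40,
                      "wrong population": 20,
                      "no outcome data": 8}
included = full_text_assessed - sum(excluded_full_text.values())

print(screened, full_text_assessed, included)  # 390 80 12
```

Tracking the counts and exclusion reasons as you go means the diagram can be filled in directly at the end, instead of being reconstructed from memory.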

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgement of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.
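In its simplest form, an extraction form is a table with one row per study. A sketch as a CSV, with field names that are purely illustrative, mirroring the two kinds of information listed above:

```python
import csv
import io

# Hypothetical extraction form: a CSV header plus one invented example row.
fields = ["study_id", "year", "design", "sample_size", "context",
          "findings", "conclusions", "risk_of_bias"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fields)
writer.writeheader()
writer.writerow({"study_id": "Smith2019", "year": 2019, "design": "RCT",
                 "sample_size": 120, "context": "outpatient clinic",
                 "findings": "symptom score -1.2 (95% CI -2.0 to -0.4)",
                 "conclusions": "modest benefit", "risk_of_bias": "low"})

print(buffer.getvalue().splitlines()[0])  # the header row
```

Fixing the field names up front keeps the two independent extractors’ sheets directly comparable when disagreements are checked.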

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomised into the control and treatment groups.

Step 6: Synthesise the data

Synthesising the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesising the data:

  • Narrative (qualitative): Summarise the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarise and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
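At its core, a meta-analysis pools study estimates weighted by their precision. A minimal inverse-variance (fixed-effect) sketch with invented effect sizes — a real review would use dedicated software, and usually a random-effects model when studies differ:

```python
import math

def pooled_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study estimates
    (standard meta-analysis arithmetic; the data here are invented)."""
    weights = [1 / se**2 for se in std_errors]          # precision weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Three invented studies reporting a change in symptom score.
est, ci = pooled_fixed_effect([-1.0, -0.6, -1.4], [0.4, 0.5, 0.6])
print(round(est, 3), tuple(round(x, 3) for x in ci))
```

The more precise a study (smaller standard error), the more it pulls the pooled estimate towards its own result.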

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analysed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a dissertation, thesis, research paper, or proposal.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.


Turney, S. (2022, October 17). Systematic Review | Definition, Examples & Guide. Scribbr. Retrieved 3 June 2024, from https://www.scribbr.co.uk/research-methods/systematic-reviews/


  • Open access
  • Published: 01 August 2019

A step by step guide for conducting a systematic review and meta-analysis with simulation data

  • Gehad Mohamed Tawfik 1 , 2 ,
  • Kadek Agus Surya Dila 2 , 3 ,
  • Muawia Yousif Fadlelmola Mohamed 2 , 4 ,
  • Dao Ngoc Hien Tam 2 , 5 ,
  • Nguyen Dang Kien 2 , 6 ,
  • Ali Mahmoud Ahmed 2 , 7 &
  • Nguyen Tien Huy 8 , 9 , 10  

Tropical Medicine and Health volume  47 , Article number:  46 ( 2019 ) Cite this article


The abundance of studies relating to tropical medicine and health has increased strikingly over the last few decades. In this field, a well-conducted systematic review and meta-analysis (SR/MA) is considered a feasible solution for keeping clinicians abreast of current evidence-based medicine. An understanding of the SR/MA steps is therefore of paramount importance, but the process is not easy: there are obstacles that can confront the researcher. To address these, this methodology study provides a step-by-step approach, aimed mainly at beginners and junior researchers in tropical medicine and other health care fields, to properly conducting a SR/MA. All the steps described here reflect our combined experience and expertise, together with well-known and accepted international guidance.

We suggest that all steps of the SR/MA be carried out independently by 2–3 reviewers, with disagreements resolved by discussion, to ensure data quality and accuracy.

The SR/MA steps include development of the research question, forming criteria, designing the search strategy, searching databases, protocol registration, title and abstract screening, full-text screening, manual searching, data extraction, quality assessment, data checking, statistical analysis, double data checking, and manuscript writing.

Introduction

The number of studies published in the biomedical literature, especially in tropical medicine and health, has increased strikingly over the last few decades. This abundance of literature makes clinical medicine increasingly complex, and knowledge from multiple studies is often needed to inform a particular clinical decision. However, the available studies are often heterogeneous in their design, operational quality, and subjects under study, and may handle the research question in different ways, which adds to the complexity of synthesizing evidence and conclusions [ 1 ].

Systematic reviews and meta-analyses (SR/MAs) provide a high level of evidence, as represented by the evidence-based pyramid. Therefore, a well-conducted SR/MA is considered a feasible solution for keeping health clinicians abreast of contemporary evidence-based medicine.

Unlike a systematic review, an unsystematic narrative review tends to be descriptive: the authors often select articles based on their own point of view, which leads to poor quality. A systematic review, by contrast, is a review that uses a systematic method to summarize evidence on a question according to a detailed and comprehensive plan of study. Despite the growing number of guidelines for conducting a systematic review effectively, the basic steps are to frame the question; identify relevant work, which consists of developing criteria and searching for articles; appraise the quality of the included studies; summarize the evidence; and interpret the results [ 2 , 3 ]. However, these apparently simple steps are not easy to carry out in practice, and a researcher can run into many difficulties for which no detailed guidance exists.

Conducting a SR/MA in tropical medicine and health can be difficult, especially for young researchers, so an understanding of its essential steps is crucial. To address the obstacles a researcher may face, we provide a flow diagram (Fig. 1 ) that illustrates the stages of SR/MA studies in a detailed, step-by-step manner. This methodology study aims to give a step-by-step approach, mainly for beginners and junior researchers in tropical medicine and other health care fields, to conducting a SR/MA properly and succinctly; all the steps described here reflect our combined experience and expertise, together with well-known and accepted international guidance.

figure 1

Detailed flow diagram guideline for systematic review and meta-analysis steps. Note : Star icon refers to “2–3 reviewers screen independently”

Methods and results

Detailed steps for conducting any systematic review and meta-analysis.

We searched the methods reported in published SR/MAs in tropical medicine and other healthcare fields, alongside published guidelines such as the Cochrane guidelines [ 4 ], to identify the lowest-bias method for each step of SR/MA conduct. We also drew on the guidelines that we apply in our own SR/MA studies. We combined these methods to produce a detailed flow diagram showing how the SR/MA steps are conducted.

Any SR/MA must follow the widely accepted Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement (PRISMA checklist 2009) (Additional file 5 : Table S1) [ 5 ].

We illustrate our methods with an explanatory simulation example on the topic of “evaluating the safety of Ebola vaccine”; Ebola is a rare tropical disease, but a fatal one. All the methods explained follow internationally accepted standards, combined with our compiled experience in conducting SRs, which we believe gives them some validity. This is a SR under conduct by researchers working together in a research group. The Ebola outbreak that took place in Africa in 2013–2016 resulted in significant mortality and morbidity, and since there are many published and ongoing trials assessing the safety of Ebola vaccines, we thought this would be a good opportunity to tackle this much-debated issue. Moreover, a new fatal outbreak has been under way in the Democratic Republic of the Congo since August 2018; according to the World Health Organization, it has infected more than 1000 people, and 629 people have been killed so far. It is therefore considered the second worst Ebola outbreak, after the 2014 outbreak in West Africa, which infected more than 26,000 people and killed about 11,300 over its course.

Research question and objectives

Like other study designs, the research question of SR/MA should be feasible, interesting, novel, ethical, and relevant. Therefore, a clear, logical, and well-defined research question should be formulated. Usually, two common tools are used: PICO or SPIDER. PICO (Population, Intervention, Comparison, Outcome) is used mostly in quantitative evidence synthesis. Authors demonstrated that PICO holds more sensitivity than the more specific SPIDER approach [ 6 ]. SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) was proposed as a method for qualitative and mixed methods search.

We recommend a combined approach, using either or both of the SPIDER and PICO tools to build a comprehensive search, depending on time and resource limitations. Applied to our assumed research topic, which is qualitative in nature, the SPIDER approach would be the more appropriate.

PICO is usually used for systematic reviews and meta-analyses of clinical trials. For observational studies (without an intervention or comparator), as in many tropical and epidemiological questions, it is often enough to use only P (patient) and O (outcome) to formulate a research question. We must clearly indicate the population (P), then the intervention (I) or exposure. Next, the intervention is compared (C) with other interventions, e.g., placebo. Finally, we clarify the relevant outcomes (O).

To facilitate comprehension, we choose Ebola virus disease (EVD) as an example. Vaccines for EVD are currently under development, in phase I, II, and III clinical trials; we want to know whether such a vaccine is safe and induces sufficient immunogenicity in the subjects.

An example of a PICO-based research question for this SR/MA is: What are the safety and immunogenicity of the Ebola vaccine in humans? (P: healthy human subjects; I: vaccination; C: placebo; O: safety or adverse effects)

Preliminary research and idea validation

We recommend a preliminary search to identify relevant articles, ensure the validity of the proposed idea, avoid duplicating previously addressed questions, and confirm that enough articles exist for the analysis. Moreover, themes should focus on relevant and important health-care issues, consider global needs and values, reflect the current science, and be consistent with the adopted review methods. Gaining familiarity with, and a deep understanding of, the study field through relevant videos and discussions is of paramount importance for better retrieval of results. If we skip this step, the study may have to be abandoned whenever we discover that a similar one has already been published, which means wasting time on a problem that has long since been addressed.

To do this, we can start with a simple search in PubMed or Google Scholar using the search terms Ebola AND vaccine. While doing this step, we identified a systematic review and meta-analysis of determinant factors influencing the antibody response to Ebola vaccination in non-human primates and humans [ 7 ], a relevant paper to read for deeper insight and to identify gaps for better formulation of our research question or purpose. We can still conduct a systematic review and meta-analysis of the Ebola vaccine because we evaluate a different outcome (safety) and a different population (humans only).

Inclusion and exclusion criteria

Eligibility criteria are based on the PICO approach, study design, and date. Exclusion criteria mostly cover articles that are unrelated, duplicated, or unavailable as full texts, or abstract-only papers. These exclusions should be stated in advance to protect the researcher from bias. The inclusion criteria cover articles with the target patients, the investigated interventions, or the comparison between two studied interventions; in short, articles that contain information answering our research question. Most importantly, the information should be clear and sufficient, whether positive or negative, to answer the question.

For the topic we have chosen, the inclusion criteria can be: (1) any clinical trial evaluating the safety of Ebola vaccine and (2) no restriction regarding country, patient age, race, gender, publication language, or date. The exclusion criteria are: (1) studies of Ebola vaccine in non-human subjects or in vitro studies; (2) studies whose data cannot be reliably extracted, or with duplicate or overlapping data; (3) abstract-only papers, such as preceding papers, conference abstracts, editorials, author responses, theses, and books; (4) articles without full text available; and (5) case reports, case series, and systematic review studies. The PRISMA flow diagram template used in SR/MA studies can be found in Fig. 2 .

figure 2

PRISMA flow diagram of studies’ screening and selection

Search strategy

A standard search strategy is built in PubMed and then modified for each specific database to get the most relevant results. The basic search strategy is built from the research question formulation (i.e., PICO or PICOS). Search strategies are constructed to include free-text terms (e.g., in the title and abstract) and any appropriate subject indexing (e.g., MeSH) expected to retrieve eligible studies, with the help of an expert in the review topic or an information specialist. Additionally, we advise against including terms for the outcomes: outcomes are often not mentioned explicitly in articles, so including them might prevent the database from retrieving eligible studies.

The search term is improved by doing a trial search and looking for other relevant terms within each concept in the retrieved papers. To search for clinical trials, we can use these descriptors in PubMed: “clinical trial”[Publication Type] OR “clinical trials as topic”[MeSH terms] OR “clinical trial”[All Fields]. After some rounds of trialling and refining the search term, we formulated the final search term for PubMed as follows: (ebola OR ebola virus OR ebola virus disease OR EVD) AND (vaccine OR vaccination OR vaccinated OR immunization) AND (“clinical trial”[Publication Type] OR “clinical trials as topic”[MeSH Terms] OR “clinical trial”[All Fields]). Because studies on this topic are limited, we did not include outcome terms (safety and immunogenicity) in the search term, in order to capture more studies.
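The final PubMed term above is three OR-groups joined with AND; rebuilding it programmatically from its concepts makes the later per-database customization easier. A sketch (the concept lists are taken from the term above):

```python
# Reconstructing the final PubMed search term from its three concepts:
# condition, intervention, and a study-type filter.
condition = ["ebola", "ebola virus", "ebola virus disease", "EVD"]
intervention = ["vaccine", "vaccination", "vaccinated", "immunization"]
trial_filter = ('"clinical trial"[Publication Type] OR '
                '"clinical trials as topic"[MeSH Terms] OR '
                '"clinical trial"[All Fields]')

query = "({}) AND ({}) AND ({})".format(
    " OR ".join(condition), " OR ".join(intervention), trial_filter)
print(query)
```

Keeping each concept as a separate list means a database that lacks MeSH support, for example, only needs the third group swapped out.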

Search databases, import all results into a library, and export to an Excel sheet

According to the AMSTAR guidelines, at least two databases must be searched in a SR/MA [ 8 ], but the more databases you search, the greater and the more accurate and comprehensive the yield. The choice of databases depends mostly on the review question; in a study of clinical trials, you will rely mostly on Cochrane, mRCTs, or the International Clinical Trials Registry Platform (ICTRP). Here, we propose 12 databases (PubMed, Scopus, Web of Science, EMBASE, GHL, VHL, Cochrane, Google Scholar, ClinicalTrials.gov, mRCTs, POPLINE, and SIGLE), which together cover almost all published articles in tropical medicine and other health-related fields. Among these, POPLINE focuses on reproductive health, so researchers should choose the databases relevant to their research topic. Some databases do not support Boolean operators or quotation marks, and some have their own particular search syntax, so the initial search terms must be modified for each database to get appropriate results; manipulation guides for each online database search are presented in Additional file 5 : Table S2, and the detailed search strategy for each database is found in Additional file 5 : Table S3. The search term created in PubMed needs customization to the specific characteristics of each database. An example of a Google Scholar advanced search for our topic is as follows:

With all of the words: ebola virus

With at least one of the words: vaccine vaccination vaccinated immunization

Where my words occur: in the title of the article

With all of the words: EVD

Finally, all records are collected into one EndNote library in order to delete duplicates, and then exported to an Excel sheet. Using the duplicate-removal function with two options is mandatory: all references that have (1) the same title and author and were published in the same year, or (2) the same title and author and were published in the same journal, are deleted. The references remaining after this step should be exported to an Excel file with the essential information for screening, such as the authors’ names, publication year, journal, DOI, URL link, and abstract.
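The two duplicate-removal rules above can be sketched as a small function; the reference records are invented for illustration:

```python
def deduplicate(records):
    """Drop duplicate references using the two rules in the text:
    same title + author + year, or same title + author + journal."""
    seen, kept = set(), []
    for r in records:
        key_year = (r["title"].lower(), r["author"].lower(), r["year"])
        key_journal = (r["title"].lower(), r["author"].lower(), r["journal"].lower())
        if key_year in seen or key_journal in seen:
            continue  # duplicate of an already-kept record
        seen.update({key_year, key_journal})
        kept.append(r)
    return kept

# Invented records: the second duplicates the first (same title/author/year).
records = [
    {"title": "Ebola vaccine safety", "author": "Doe", "year": 2018, "journal": "Lancet"},
    {"title": "Ebola vaccine safety", "author": "Doe", "year": 2018, "journal": "BMJ"},
    {"title": "Immunogenicity trial", "author": "Lee", "year": 2017, "journal": "NEJM"},
]
print(len(deduplicate(records)))  # 2
```

In practice EndNote (or similar software) does this, but the logic is the same: match on conservative key combinations so that genuinely distinct papers are never merged.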

Protocol writing and registration

Protocol registration at an early stage guarantees transparency in the research process and protects against duplication. It also serves as documented proof of the team’s plan of action, research question, eligibility criteria, intervention/exposure, quality assessment, and pre-analysis plan. We recommend that researchers send the protocol to the principal investigator (PI) for revision, then upload it to a registry site. Many registry sites are available for SR/MAs, such as those run by the Cochrane and Campbell collaborations; we recommend registering the protocol with PROSPERO, as it is the easiest. The layout of a protocol template according to PROSPERO can be found in Additional file 5 : File S1.

Title and abstract screening

Decisions to select retrieved articles for further assessment are based on the eligibility criteria, to minimize the chance of including non-relevant articles. According to the Cochrane guidance, two reviewers are required for this step, but for beginners and junior researchers this can be tiring. Based on our experience, we therefore propose that at least three reviewers work independently to reduce the chance of error, particularly in teams with a large number of authors, to add scrutiny and ensure proper conduct. The quality with three reviewers is usually better than with two: two reviewers alone may simply hold different opinions and be unable to decide, so the third opinion is crucial. Examples of systematic reviews that we conducted following this strategy (by different groups of researchers within our research group) and published successfully, featuring ideas relevant to tropical medicine and disease, are given in [ 9 , 10 , 11 ].

In this step, duplicates are removed manually whenever the reviewers find them. When there is doubt about a decision on an article, the team should be inclusive rather than exclusive until the team leader or PI makes a decision after discussion and consensus. All excluded records should be given exclusion reasons.

Full text downloading and screening

Many search engines provide free links to full-text articles. When a full text cannot be found, we can search research websites such as ResearchGate, which offers the option of requesting the full text directly from the authors, explore the archives of the relevant journals, or ask the PI to purchase the article if available. As before, 2–3 reviewers work independently to decide which full texts to include according to the eligibility criteria, reporting the reasons for exclusion; any disagreement is resolved by discussion.

Manual search

To reduce bias, one has to exhaust all possibilities by performing explicit hand-searching to retrieve reports that may have been missed by the first search [ 12 ]. We apply five methods of manual searching: searching the reference lists of included studies and reviews, contacting authors and experts, and following the related-articles and cited-by links in PubMed and Google Scholar.

We describe here three consecutive methods to increase and refine the yield of manual searching: first, searching the reference lists of included articles; second, citation tracking, in which the reviewers track all the articles that cite each included article, which may involve electronic database searching; and third, similarly, following all “related to” or “similar” articles. Each of these methods can be performed by 2–3 independent reviewers, and every potentially relevant article must undergo the same scrutiny against the inclusion criteria as the records yielded by the electronic databases, i.e., title/abstract and then full-text screening.

We propose independent reviewing: each team member is assigned a “tag” and a distinct method, and all results are compiled at the end so that differences can be compared and discussed, maximizing retrieval and minimizing bias. As before, the number of newly included articles must be stated before they are added to the overall included records.

Data extraction and quality assessment

This step entails collecting data from the included full texts in a structured extraction Excel sheet, which is pilot-tested beforehand on a few random studies. We recommend extracting both adjusted and non-adjusted data, because this preserves the most fully adjusted estimates of confounding factors for later pooling in the analysis [ 13 ]. The extraction should be performed by 2–3 independent reviewers. The sheet is usually divided into study and patient characteristics, outcomes, and the quality assessment (QA) tool.

Data presented in graphs should be extracted with software tools such as WebPlotDigitizer [ 14 ]. Most of the equations that can be used during extraction, prior to analysis, to estimate the standard deviation (SD) from other variables can be found in Additional file 5 : File S2, with their references: Hozo et al. [ 15 ], Xiang et al. [ 16 ], and Rijkom et al. [ 17 ]. A variety of QA tools are available, depending on the study design: the RoB 2 Cochrane tool for randomized controlled trials [ 18 ], presented in Additional file 1 : Figure S1 and Additional file 2 : Figure S2 (from a previously published article [ 19 ]); the NIH tool for observational and cross-sectional studies [ 20 ]; the ROBINS-I tool for non-randomized trials [ 21 ]; the QUADAS-2 tool for diagnostic studies; the QUIPS tool for prognostic studies; the CARE tool for case reports; and ToxRtool for in vivo and in vitro studies. We recommend that 2–3 reviewers independently assess the quality of the studies and add it to the data extraction form before inclusion in the analysis, to reduce the risk of bias. With the NIH tool for observational studies (cohort and cross-sectional), as in this Ebola case, reviewers rate each of the 14 items as a dichotomous variable: yes, no, or not applicable (NA). An overall score is calculated by summing the item scores, with yes scoring one and no or NA scoring zero. Each paper is then classified by its score as a poorly, fairly, or well conducted study, where 0–5 is considered poor, 6–9 fair, and 10–14 good.
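The NIH scoring rule just described (yes = 1, no/NA = 0, then classify by total) can be sketched directly:

```python
def nih_quality(ratings):
    """Score the 14 NIH tool items: 'yes' = 1, 'no' or 'na' = 0,
    then classify the study as poor (0-5), fair (6-9), or good (10-14)."""
    score = sum(1 for r in ratings if r == "yes")
    if score <= 5:
        label = "poor"
    elif score <= 9:
        label = "fair"
    else:
        label = "good"
    return score, label

# Invented ratings for one study's 14 items.
ratings = ["yes"] * 8 + ["no"] * 4 + ["na"] * 2
print(nih_quality(ratings))  # (8, 'fair')
```

Recording the per-item ratings, not just the total, lets a second reviewer check exactly where the assessments disagree.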

In the Ebola case example above, the authors can extract the following information: authors’ names, country of patients, year of publication, study design (case report, cohort study, clinical trial, or RCT), sample size, time point after Ebola infection, follow-up interval after vaccination, efficacy, safety, adverse effects after vaccination, and the QA sheet (Additional file 6 : Data S1).

Data checking

Because human error and bias are to be expected, we recommend a data-checking step, in which every included article is compared against its counterpart in the extraction sheet, using evidence photos, to detect mistakes in the data. We advise assigning the articles to 2–3 independent reviewers, ideally not those who extracted them. When resources are limited, each reviewer is assigned articles other than the ones they extracted in the previous stage.

Statistical analysis

Investigators use different methods for combining and summarizing the findings of the included studies. Before analysis there is an important step called data cleaning, in which the analyst organizes the extraction sheet into a form that can be read by the analytical software. The analysis is of two types, qualitative and quantitative. Qualitative analysis mostly describes the data in SR studies, while quantitative analysis consists of two main types: meta-analysis (MA) and network meta-analysis (NMA). Subgroup, sensitivity, and cumulative analyses and meta-regression are appropriate for testing whether the results are consistent, investigating the effect of particular confounders on the outcome, and finding the best predictors. Publication bias should be assessed to investigate the presence of missing studies, which can affect the summary.

To illustrate a basic meta-analysis, we provide imaginary data for the research question on Ebola vaccine safety (in terms of adverse events 14 days after injection) and immunogenicity (rise in geometric mean titer of Ebola virus antibodies 6 months after injection). Assume that, after searching and data extraction, we decided to analyse the safety and immunogenicity of Ebola vaccine “A”. Other Ebola vaccines were not meta-analyzed because of the limited number of studies (they are instead included in the narrative review). The imaginary data for the vaccine safety meta-analysis can be accessed in Additional file 7 : Data S2. The meta-analysis can be done with free software, such as RevMan [ 22 ] or the R package meta [ 23 ]. In this example, we use the R package meta; its tutorial can be accessed through the “General Package for Meta-Analysis” tutorial PDF [ 23 ]. The R code and guidance for the meta-analysis are given in Additional file 5 : File S3.

For the analysis, we assume that the studies are heterogeneous in nature and therefore choose a random-effects model. We analyzed the safety of Ebola vaccine A. The data table shows several adverse events occurring after intramuscular injection of vaccine A. Suppose we include six studies that fulfill our inclusion criteria. We can then run a random-effects meta-analysis for each adverse event extracted from the studies, for example arthralgia, using the R meta package.
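For readers who want to see the arithmetic behind such a pooled estimate, the random-effects calculation can be sketched in Python. This is a minimal DerSimonian–Laird implementation; the 2×2 counts below are invented for illustration and are not the article's Additional file data (the article itself uses the R meta package for this step).

```python
import math

# Invented 2x2 counts for six hypothetical studies (not the article's data):
# (events_vaccine, n_vaccine, events_placebo, n_placebo)
studies = {
    "A": (12, 100, 11, 100),
    "B": (30, 250, 28, 250),
    "C": (9, 80, 10, 80),
    "D": (15, 120, 13, 120),
    "E": (22, 200, 20, 200),
    "F": (7, 60, 8, 60),
}

def pooled_or_random(data):
    """DerSimonian-Laird random-effects pooled odds ratio, 95% CI, and I^2."""
    y, v = [], []  # per-study log-OR and its variance
    for a, n1, c, n2 in data.values():
        b, d = n1 - a, n2 - c
        y.append(math.log((a * d) / (b * c)))
        v.append(1/a + 1/b + 1/c + 1/d)
    w = [1 / vi for vi in v]                                 # inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    df = len(y) - 1
    c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_)                           # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                     # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q * 100) if q > 0 else 0.0      # heterogeneity (%)
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se), i2

or_, lo, hi, i2 = pooled_or_random(studies)
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```

In the actual analysis, the meta package's metabin function produces this kind of estimate together with the forest plot.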

From the results shown in Additional file 3: Figure S3, the odds ratio (OR) for arthralgia is 1.06 (95% CI 0.79–1.42), p = 0.71. This indicates no association between intramuscular injection of Ebola vaccine A and arthralgia: the OR is close to one, and the p value is non-significant (> 0.05).

In a meta-analysis, the results can also be visualized in a forest plot; Fig. 3 shows an example from the simulated analysis.

Figure 3. Random-effects model forest plot for the comparison of vaccine A versus placebo

The forest plot shows the six studies (A to F) with their respective ORs (95% CI). Each green box represents the effect size (here, the OR) of one study; the bigger the box, the greater the study's weight (i.e., the larger its sample size). The blue diamond represents the pooled OR of the six studies. The diamond crosses the vertical line OR = 1 and is roughly balanced on both sides, indicating a non-significant association. This is confirmed by the 95% confidence interval, which includes one, and by the p value > 0.05.

For heterogeneity, I² = 0%, meaning no heterogeneity was detected; the studies are relatively homogeneous (which is rare in real reviews). To evaluate publication bias for the meta-analysis of arthralgia, we can use the metabias function from the R meta package (Additional file 4: Figure S4), together with visualization in a funnel plot. The results are shown in Fig. 4. The p value associated with this test is 0.74, indicating symmetry of the funnel plot, which can be confirmed by inspecting the plot itself.
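In its linear-regression form, this asymmetry test is Egger's test: regress the standardized effect on precision and check whether the intercept differs from zero. A hand-rolled sketch, using invented per-study log-ORs and standard errors (not the article's data):

```python
# Invented per-study log-ORs and their standard errors (illustration only)
y = [0.12, -0.05, 0.20, 0.01, 0.08, -0.10]
se = [0.30, 0.18, 0.35, 0.22, 0.15, 0.40]

# Egger's regression: standardized effect (y/se) against precision (1/se);
# an intercept far from zero suggests funnel-plot asymmetry.
x = [1 / s for s in se]                     # precision
z = [yi / si for yi, si in zip(y, se)]      # standardized effect
n = len(x)
xbar, zbar = sum(x) / n, sum(z) / n
slope = (sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = zbar - slope * xbar
print(f"Egger intercept = {intercept:.2f}")
```

The metabias function additionally reports a p value for this intercept, which is what the article interprets above.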

Figure 4. Publication bias funnel plot for the comparison of vaccine A versus placebo

In the funnel plot, the numbers of studies on the left and right sides are equal; the plot is therefore symmetrical, indicating that no publication bias was detected.

Sensitivity analysis is a procedure used to assess how robust the pooled result is by removing one study at a time from the MA and re-running the analysis. If all included studies have p values < 0.05, removing any single study will not change the significant association. Sensitivity analysis is usually performed only when there is a significant association; since the p value of our MA is 0.71 (> 0.05), it is not needed in this case study example. Conversely, if the significance of the pooled result depends on only one or two studies, removing either of them can result in a loss of significance.
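The leave-one-out procedure described here can be sketched in Python (fixed-effect inverse-variance pooling is used for simplicity, and the 2×2 counts are invented for illustration, not taken from the article's data):

```python
import math

# Invented 2x2 counts for six hypothetical studies (not the article's data):
# (events_vaccine, n_vaccine, events_placebo, n_placebo)
studies = {
    "A": (12, 100, 11, 100),
    "B": (30, 250, 28, 250),
    "C": (9, 80, 10, 80),
    "D": (15, 120, 13, 120),
    "E": (22, 200, 20, 200),
    "F": (7, 60, 8, 60),
}

def log_or_and_var(a, n1, c, n2):
    """Log odds ratio and its variance from a 2x2 table."""
    b, d = n1 - a, n2 - c
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pooled_or(data):
    """Inverse-variance (fixed-effect) pooled OR with a 95% CI."""
    w_sum = wy_sum = 0.0
    for a, n1, c, n2 in data.values():
        y, v = log_or_and_var(a, n1, c, n2)
        w_sum += 1 / v
        wy_sum += y / v
    mu, se = wy_sum / w_sum, math.sqrt(1 / w_sum)
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)

# Leave-one-out: recompute the pooled OR with each study omitted in turn.
loo = {}
for omitted in studies:
    rest = {k: v for k, v in studies.items() if k != omitted}
    loo[omitted] = pooled_or(rest)
    or_, lo, hi = loo[omitted]
    print(f"omit {omitted}: pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

If any omission moves the pooled CI across 1, the overall conclusion hinges on that single study.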

Double data checking

For further assurance of the quality of the results, the analyzed data should be rechecked against the full-text data using evidence photos, allowing a straightforward check by the PI of the study.

Manuscript writing, revision, and submission to a journal

The manuscript is written in the four standard scientific sections: introduction, methods, results, and discussion, usually followed by a conclusion. Preparing a characteristics table for study and patient characteristics is a mandatory step; a template can be found in Additional file 5: Table S3.

After finishing the manuscript, characteristics table, and PRISMA flow diagram, the team should send everything to the PI for revision and address the PI's comments, then choose a suitable journal for the manuscript, one with a reasonable impact factor and a fitting scope. The journal's author guidelines should be read carefully before submitting the manuscript.

The role of evidence-based medicine in biomedical research is growing rapidly, and SR/MAs are increasingly common in the medical literature. This paper has sought to provide a comprehensive approach to enable reviewers to produce high-quality SR/MAs. We hope that readers have gained a general understanding of how to conduct an SR/MA and the confidence to perform one, although this kind of study requires more complex steps than a narrative review.

Beyond the basic steps of conducting an MA, many advanced techniques serve specific purposes. One is meta-regression, which investigates the association between a confounder and the results of the MA. There are also types beyond the standard MA, such as NMA and mega-MA. NMA investigates differences between several comparisons when there are not enough data for a standard meta-analysis; it uses both direct and indirect comparisons to conclude which competitor is best. Mega-MA (MA of individual patients), by contrast, summarizes the results of independent studies using their individual subject data. Because a more detailed analysis is possible, it is useful for repeated-measures and time-to-event analyses, and it can also support analysis of variance and multiple regression; however, it requires a homogeneous dataset and is time-consuming to conduct [24].

Conclusions

Systematic review/meta-analysis steps include: developing and validating the research question, forming criteria, building the search strategy, searching databases, importing all results into a library and exporting to an Excel sheet, writing and registering the protocol, title and abstract screening, full-text screening, manual searching, extracting data and assessing its quality, data checking, conducting statistical analysis, double data checking, and manuscript writing, revision, and submission to a journal.

Availability of data and materials

Not applicable.

Abbreviations

NMA: Network meta-analysis

PI: Principal investigator

PICO: Population, Intervention, Comparison, Outcome

PRISMA: Preferred Reporting Items for Systematic Review and Meta-analysis statement

QA: Quality assessment

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

SR/MA: Systematic review and meta-analysis

Bello A, Wiebe N, Garg A, Tonelli M. Evidence-based decision-making 2: systematic reviews and meta-analysis. Methods Mol Biol (Clifton, NJ). 2015;1281:397–416.

Khan KS, Kunz R, Kleijnen J, Antes G. Five steps to conducting a systematic review. J R Soc Med. 2003;96(3):118–21.

Rys P, Wladysiuk M, Skrzekowska-Baran I, Malecki MT. Review articles, systematic reviews and meta-analyses: which can be trusted? Polskie Archiwum Medycyny Wewnetrznej. 2009;119(3):148–56.

Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. 2011.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Gross L, Lhomme E, Pasin C, Richert L, Thiebaut R. Ebola vaccine development: systematic review of pre-clinical and clinical studies, and meta-analysis of determinants of antibody response variability after vaccination. Int J Infect Dis. 2018;74:83–96.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, ... Henry DA. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Giang HTN, Banno K, Minh LHN, Trinh LT, Loc LT, Eltobgy A, et al. Dengue hemophagocytic syndrome: a systematic review and meta-analysis on epidemiology, clinical signs, outcomes, and risk factors. Rev Med Virol. 2018;28(6):e2005.

Morra ME, Altibi AMA, Iqtadar S, Minh LHN, Elawady SS, Hallab A, et al. Definitions for warning signs and signs of severe dengue according to the WHO 2009 classification: systematic review of literature. Rev Med Virol. 2018;28(4):e1979.

Morra ME, Van Thanh L, Kamel MG, Ghazy AA, Altibi AMA, Dat LM, et al. Clinical outcomes of current medical approaches for Middle East respiratory syndrome: a systematic review and meta-analysis. Rev Med Virol. 2018;28(3):e1977.

Vassar M, Atakpo P, Kash MJ. Manual search approaches used by systematic reviewers in dermatology. Journal of the Medical Library Association: JMLA. 2016;104(4):302.

Naunheim MR, Remenschneider AK, Scangas GA, Bunting GW, Deschler DG. The effect of initial tracheoesophageal voice prosthesis size on postoperative complications and voice outcomes. Ann Otol Rhinol Laryngol. 2016;125(6):478–84.

Rohatgi A. WebPlotDigitizer. 2014.

Hozo SP, Djulbegovic B, Hozo I. Estimating the mean and variance from the median, range, and the size of a sample. BMC Med Res Methodol. 2005;5(1):13.

Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol. 2014;14(1):135.

Van Rijkom HM, Truin GJ, Van’t Hof MA. A meta-analysis of clinical studies on the caries-inhibiting effect of fluoride gel treatment. Caries Res. 1998;32(2):83–92.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Tawfik GM, Tieu TM, Ghozy S, Makram OM, Samuel P, Abdelaal A, et al. Speech efficacy, safety and factors affecting lifetime of voice prostheses in patients with laryngeal cancer: a systematic review and network meta-analysis of randomized controlled trials. J Clin Oncol. 2018;36(15_suppl):e18031-e.

Wannemuehler TJ, Lobo BC, Johnson JD, Deig CR, Ting JY, Gregory RL. Vibratory stimulus reduces in vitro biofilm formation on tracheoesophageal voice prostheses. Laryngoscope. 2016;126(12):2752–7.

Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355.

The Nordic Cochrane Centre, The Cochrane Collaboration. Review Manager (RevMan). Version 5.0. Copenhagen; 2008.

Schwarzer G. meta: an R package for meta-analysis. R News. 2007;7(3):40–5.

Simms LLH. Meta-analysis versus mega-analysis: is there a difference? Oral budesonide for the maintenance of remission in Crohn’s disease: Faculty of Graduate Studies, University of Western Ontario; 1998.

Acknowledgements

This study was conducted (in part) at the Joint Usage/Research Center on Tropical Disease, Institute of Tropical Medicine, Nagasaki University, Japan.

Author information

Authors and affiliations.

Faculty of Medicine, Ain Shams University, Cairo, Egypt

Gehad Mohamed Tawfik

Online research Club http://www.onlineresearchclub.org/

Gehad Mohamed Tawfik, Kadek Agus Surya Dila, Muawia Yousif Fadlelmola Mohamed, Dao Ngoc Hien Tam, Nguyen Dang Kien & Ali Mahmoud Ahmed

Pratama Giri Emas Hospital, Singaraja-Amlapura street, Giri Emas village, Sawan subdistrict, Singaraja City, Buleleng, Bali, 81171, Indonesia

Kadek Agus Surya Dila

Faculty of Medicine, University of Khartoum, Khartoum, Sudan

Muawia Yousif Fadlelmola Mohamed

Nanogen Pharmaceutical Biotechnology Joint Stock Company, Ho Chi Minh City, Vietnam

Dao Ngoc Hien Tam

Department of Obstetrics and Gynecology, Thai Binh University of Medicine and Pharmacy, Thai Binh, Vietnam

Nguyen Dang Kien

Faculty of Medicine, Al-Azhar University, Cairo, Egypt

Ali Mahmoud Ahmed

Evidence Based Medicine Research Group & Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, 70000, Vietnam

Nguyen Tien Huy

Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, 70000, Vietnam

Department of Clinical Product Development, Institute of Tropical Medicine (NEKKEN), Leading Graduate School Program, and Graduate School of Biomedical Sciences, Nagasaki University, 1-12-4 Sakamoto, Nagasaki, 852-8523, Japan

Contributions

NTH and GMT were responsible for the idea and its design. The figure was done by GMT. All authors contributed to the manuscript writing and approval of the final version.

Corresponding author

Correspondence to Nguyen Tien Huy .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:.

Figure S1. Risk of bias assessment graph of included randomized controlled trials. (TIF 20 kb)

Additional file 2:

Figure S2. Risk of bias assessment summary. (TIF 69 kb)

Additional file 3:

Figure S3. Arthralgia results of random effect meta-analysis using R meta package. (TIF 20 kb)

Additional file 4:

Figure S4. Arthralgia linear regression test of funnel plot asymmetry using R meta package. (TIF 13 kb)

Additional file 5:

Table S1. PRISMA 2009 Checklist. Table S2. Manipulation guides for online database searches. Table S3. Detailed search strategy for twelve database searches. Table S4. Baseline characteristics of the patients in the included studies. File S1. PROSPERO protocol template file. File S2. Extraction equations that can be used prior to analysis to get missed variables. File S3. R codes and its guidance for meta-analysis done for comparison between EBOLA vaccine A and placebo. (DOCX 49 kb)

Additional file 6:

Data S1. Extraction and quality assessment data sheets for EBOLA case example. (XLSX 1368 kb)

Additional file 7:

Data S2. Imaginary data for EBOLA case example. (XLSX 10 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Tawfik, G.M., Dila, K.A.S., Mohamed, M.Y.F. et al. A step by step guide for conducting a systematic review and meta-analysis with simulation data. Trop Med Health 47 , 46 (2019). https://doi.org/10.1186/s41182-019-0165-6

Received : 30 January 2019

Accepted : 24 May 2019

Published : 01 August 2019

DOI : https://doi.org/10.1186/s41182-019-0165-6


Tropical Medicine and Health

ISSN: 1349-4147


The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews

  • Matthew J Page , senior research fellow 1 ,
  • Joanne E McKenzie , associate professor 1 ,
  • Patrick M Bossuyt , professor 2 ,
  • Isabelle Boutron , professor 3 ,
  • Tammy C Hoffmann , professor 4 ,
  • Cynthia D Mulrow , professor 5 ,
  • Larissa Shamseer , doctoral student 6 ,
  • Jennifer M Tetzlaff , research product specialist 7 ,
  • Elie A Akl , professor 8 ,
  • Sue E Brennan , senior research fellow 1 ,
  • Roger Chou , professor 9 ,
  • Julie Glanville , associate director 10 ,
  • Jeremy M Grimshaw , professor 11 ,
  • Asbjørn Hróbjartsson , professor 12 ,
  • Manoj M Lalu , associate scientist and assistant professor 13 ,
  • Tianjing Li , associate professor 14 ,
  • Elizabeth W Loder , professor 15 ,
  • Evan Mayo-Wilson , associate professor 16 ,
  • Steve McDonald , senior research fellow 1 ,
  • Luke A McGuinness , research associate 17 ,
  • Lesley A Stewart , professor and director 18 ,
  • James Thomas , professor 19 ,
  • Andrea C Tricco , scientist and associate professor 20 ,
  • Vivian A Welch , associate professor 21 ,
  • Penny Whiting , associate professor 17 ,
  • David Moher , director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page{at}monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by its co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline.

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items ( table 1 ). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 ( table 2 ). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated ( fig 1 ).
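The seven-section, 27-item structure described above can be sketched as a small data mapping. This is an illustrative sketch only: the section names and item ranges below are drawn from the published PRISMA 2020 checklist (table 1), not from this text, and should be verified against that table.

```python
# Illustrative sketch of the PRISMA 2020 checklist structure:
# seven sections covering 27 numbered items (sub-items such as 13a-13f
# are not counted separately here). Item ranges assumed from table 1.
prisma_2020_sections = {
    "Title": [1],
    "Abstract": [2],
    "Introduction": [3, 4],
    "Methods": list(range(5, 16)),    # items 5-15
    "Results": list(range(16, 23)),   # items 16-22
    "Discussion": [23],
    "Other information": [24, 25, 26, 27],
}

total_items = sum(len(items) for items in prisma_2020_sections.values())
print(len(prisma_2020_sections), "sections,", total_items, "items")

# The statement describes seven sections with 27 items in total.
assert len(prisma_2020_sections) == 7
assert total_items == 27
```

A mapping like this can double as a reporting tracker: replace each item number with a completed/not-completed flag while drafting the review.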

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2 ).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

Table 1 PRISMA 2020 item checklist

Table 2 PRISMA 2020 for Abstracts checklist*

Fig 1

PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.

We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59 ). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, where more than one of these strategies are combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, designing interventions that address the identified barriers, and evaluating those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items where there is varied interpretation of the items.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. 
We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ ; MJP is an editorial board member for PLOS Medicine ; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology ; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews . None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health , for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .

  • Higgins JPT, Thomas J, Chandler J, et al, eds. Cochrane Handbook for Systematic Reviews of Interventions: Version 6.0. Cochrane, 2019. Available from https://training.cochrane.org/handbook.
  • Cooper H, Hedges LV, Valentine JV, eds. The Handbook of Research Synthesis and Meta-Analysis. Russell Sage Foundation, 2019.

J R Soc Med, v.96(3); 2003 Mar

Five steps to conducting a systematic review

Regina Kunz

1 German Cochrane Centre, Freiburg and Department of Nephrology, Charité, Berlin, Germany

Jos Kleijnen

2 Centre for Reviews and Dissemination, York, UK

3 German Cochrane Centre, Freiburg, Germany

Systematic reviews and meta-analyses are a key element of evidence-based healthcare, yet they remain in some ways mysterious. Why did the authors select certain studies and reject others? What did they do to pool results? How did a bunch of insignificant findings suddenly become significant? This paper, along with a book 1 that goes into more detail, demystifies these and other related intrigues.

A review earns the adjective systematic if it is based on a clearly formulated question, identifies relevant studies, appraises their quality and summarizes the evidence by use of explicit methodology. It is the explicit and systematic approach that distinguishes systematic reviews from traditional reviews and commentaries. Whenever we use the term review in this paper it will mean a systematic review . Reviews should never be done in any other way.

In this paper we provide a step-by-step explanation—there are just five steps—of the methods behind reviewing, and the quality elements inherent in each step (Box 1). For purposes of illustration we use a published review concerning the safety of public water fluoridation, but we must emphasize that our subject is review methodology, not fluoridation.

EXAMPLE: SAFETY OF PUBLIC WATER FLUORIDATION

You are a public health professional in a locality that has public water fluoridation. For many years, you and your colleagues have believed that it improves dental health. Recently there has been pressure from various interest groups to consider the safety of this public health intervention because they fear that it is causing cancer. Public health decisions have been based on professional judgment and practical feasibility without explicit consideration of the scientific evidence. (This was yesterday; today the evidence is available in a York review 2 , 3 , identifiable on MEDLINE through the freely accessible PubMed clinical queries interface [ http://www.ncbi.nlm.nih.gov/entrez/query/static/clinical.html ], under ‘systematic reviews’.)

STEP 1: FRAMING THE QUESTION

The research question may initially be stated as a query in free form but reviewers prefer to pose it in a structured and explicit way. The relations between various components of the question and the structure of the research design are shown in Figure 1 . This paper focuses only on the question of safety related to the outcomes described below.

Figure 1 Structured questions for systematic reviews and relations between question components in a comparative study

Box 1 The steps in a systematic review

The problems to be addressed by the review should be specified in the form of clear, unambiguous and structured questions before beginning the review work. Once the review questions have been set, modifications to the protocol should be allowed only if alternative ways of defining the populations, interventions, outcomes or study designs become apparent

The search for studies should be extensive. Multiple resources (both computerized and printed) should be searched without language restrictions. The study selection criteria should flow directly from the review questions and be specified a priori . Reasons for inclusion and exclusion should be recorded

Study quality assessment is relevant to every step of a review. Question formulation (Step 1) and study selection criteria (Step 2) should describe the minimum acceptable level of design. Selected studies should be subjected to a more refined quality assessment by use of general critical appraisal guides and design-based quality checklists (Step 3). These detailed quality assessments will be used for exploring heterogeneity and informing decisions regarding suitability of meta-analysis (Step 4). In addition they help in assessing the strength of inferences and making recommendations for future research (Step 5)

Data synthesis consists of tabulation of study characteristics, quality and effects as well as use of statistical methods for exploring differences between studies and combining their effects (meta-analysis). Exploration of heterogeneity and its sources should be planned in advance (Step 3). If an overall meta-analysis cannot be done, subgroup meta-analysis may be feasible

The issues highlighted in each of the four steps above should be met. The risk of publication bias and related biases should be explored. Exploration for heterogeneity should help determine whether the overall summary can be trusted, and, if not, the effects observed in high-quality studies should be used for generating inferences. Any recommendations should be graded by reference to the strengths and weaknesses of the evidence

Free-form question

Is it safe to provide population-wide drinking water fluoridation to prevent caries?

Structured question

  • The populations —Populations receiving drinking water sourced through a public water supply
  • The interventions or exposures —Fluoridation of drinking water (natural or artificial) compared with non-fluoridated water
  • The outcomes —Cancer is the main outcome of interest for the debate in your health authority
  • The study designs —Comparative studies of any design examining the harmful outcomes in at least two population groups, one with fluoridated drinking water and the other without. Harmful outcomes can be rare and they may develop over a long time. There are considerable difficulties in designing and conducting safety studies to capture these outcomes, since a large number of people need to be observed over a long period. These circumstances demand observational, not randomized studies. With this background, systematic reviews on safety have to include evidence from studies with a range of designs.
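To make a structured question like this explicit (and reusable across screening and data extraction), it can be recorded as a small data structure. The class and field names below are our own illustration, not part of the York review:

```python
# Illustrative sketch: recording the structured review question
# (population, exposure, comparator, outcomes, study designs)
# as a dataclass. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ReviewQuestion:
    population: str
    exposure: str
    comparator: str
    outcomes: list
    study_designs: str

fluoridation_question = ReviewQuestion(
    population="Populations receiving drinking water from a public supply",
    exposure="Fluoridation of drinking water (natural or artificial)",
    comparator="Non-fluoridated drinking water",
    outcomes=["cancer"],
    study_designs="Comparative studies of any design with at least two groups",
)

assert "cancer" in fluoridation_question.outcomes
```

Writing the question down this explicitly is what allows later steps (study selection criteria, data extraction forms) to flow directly from it.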

STEP 2: IDENTIFYING RELEVANT PUBLICATIONS

To capture as many relevant citations as possible, a wide range of medical, environmental and scientific databases were searched to identify primary studies of the effects of water fluoridation. The electronic searches were supplemented by hand searching of Index Medicus and Excerpta Medica back to 1945. Furthermore, various internet engines were searched for web pages that might provide references. This effort resulted in 3246 citations from which relevant studies were selected for the review. Their potential relevance was examined, and 2511 citations were excluded as irrelevant. The full papers of the remaining 735 citations were assessed to select those primary studies in man that directly related to fluoride in drinking water supplies, comparing at least two groups. These criteria excluded 481 studies and left 254 in the review. They came from thirty countries, published in fourteen languages between 1939 and 2000. Of these studies 175 were relevant to the question of safety, of which 26 used cancer as an outcome.
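The selection numbers reported above form a simple flow whose internal consistency can be checked, in the spirit of a PRISMA flow diagram; the variable names are ours:

```python
# Sanity-checking the study-selection flow reported for the York review.
identified = 3246                  # citations retrieved by the searches
excluded_on_title_abstract = 2511  # judged irrelevant on screening
fulltext_assessed = identified - excluded_on_title_abstract
excluded_on_fulltext = 481         # did not meet the selection criteria
included = fulltext_assessed - excluded_on_fulltext

assert fulltext_assessed == 735    # full papers assessed
assert included == 254             # studies in the review

# Of the included studies, 175 addressed safety and 26 used cancer
# as an outcome; both must be subsets of the 254 included studies.
assert 26 <= 175 <= included
```

Keeping these counts as explicit arithmetic (rather than retyping each number) is a cheap way to catch transcription errors when drafting the flow diagram.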

STEP 3: ASSESSING STUDY QUALITY

Design threshold for study selection

Adequate study design, as a marker of quality, is listed as an inclusion criterion in Box 1. This approach is most applicable when the main source of evidence is randomized studies. However, randomized studies are almost impossible to conduct at community level for a public health intervention such as water fluoridation. Thus, systematic reviews assessing the safety of such interventions have to include evidence from a broader range of study designs. Consideration of the type and amount of research likely to be available led to inclusion of comparative studies of any design. In this way, selected studies provided information about the harmful effects of exposure to fluoridated water compared with non-exposure.

Quality assessment of safety studies

After studies of an acceptable design have been selected, their in-depth assessment for the risk of various biases allows us to gauge the quality of the evidence in a more refined way. Biases either exaggerate or underestimate the ‘true’ effect of an exposure. The objective of the included studies was to compare groups exposed to fluoridated drinking water and those without such exposure for rates of undesirable outcomes, without bias. Safety studies should ascertain exposures and outcomes in such a way that the risk of misclassification is minimized. The exposure is likely to be more accurately ascertained if the study was prospective rather than retrospective and if it was started soon after water fluoridation rather than later. The outcomes of those developing cancer (and remaining free of cancer) are likely to be more accurately ascertained if the follow-up was long and if the assessment was blind to exposure status.

When examining how the effect of exposure on outcome was established, reviewers assessed whether the comparison groups were similar in all respects other than their exposure to fluoridated water. This is because the other differences may be related to the outcomes of interest independent of the drinking-water fluoridation, and this would bias the comparison. For example, if the people exposed to fluoridated water had other risk factors that made them more prone to have cancer, the apparent association between exposure and outcome might be explained by the more frequent occurrence of these factors among the exposed group. The technical word for such defects is confounding. In a randomized study, confounding factors are expected to be roughly equally distributed between groups. In observational studies their distribution may be unequal. Primary researchers can statistically adjust for these differences, when estimating the effect of exposure on outcomes, by use of multivariable modelling.
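To see why confounding matters, consider a deliberately invented toy example (the numbers below have nothing to do with the fluoridation review): exposure appears harmful in the crude comparison, yet within each age stratum the risks are identical, so the crude gap is produced entirely by the uneven age distribution between the groups.

```python
# Toy example of confounding (all numbers invented for illustration).
# Per stratum: (exposed_events, exposed_n, unexposed_events, unexposed_n)
strata = {
    "young": (2, 20, 8, 80),    # risk 0.10 in both groups
    "old":   (32, 80, 8, 20),   # risk 0.40 in both groups
}

def risk(events, n):
    return events / n

crude_exposed = risk(sum(s[0] for s in strata.values()),
                     sum(s[1] for s in strata.values()))
crude_unexposed = risk(sum(s[2] for s in strata.values()),
                       sum(s[3] for s in strata.values()))

# The crude comparison suggests harm (0.34 vs 0.16)...
assert crude_exposed == 0.34 and crude_unexposed == 0.16

# ...but within each age stratum the exposed and unexposed risks are equal,
# so age, not exposure, explains the crude difference.
for a, b, c, d in strata.values():
    assert risk(a, b) == risk(c, d)
```

Stratification is the simplest form of the adjustment described above; multivariable modelling generalizes it to several confounders at once.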

Put simply, use of a prospective design, robust ascertainment of exposure and outcomes, and control for confounding are the generic issues one would look for in quality assessment of studies on safety. Consequently, studies may range from satisfactorily meeting quality criteria, to having some deficiencies, to not meeting the criteria at all, and they can be assigned to one of three prespecified quality categories as shown in Table 1 . A quality hierarchy can then be developed, based on the degree to which studies comply with the criteria. None of the studies on cancer were in the high-quality category, but this was because randomized studies were non-existent and control for confounding was not always ideal in the observational studies. There were 8 studies of moderate quality and 18 of low quality.

Table 1 Description of quality assessment of studies on safety of public water fluoridation

STEP 4: SUMMARIZING THE EVIDENCE

To summarize the evidence from studies of variable design and quality is not easy. The original review 3 provides details of how the differences between study results were investigated and how they were summarized (with or without meta-analysis). This paper restricts itself to summarizing the findings narratively. The association between exposure to fluoridated water and cancer in general was examined in 26 studies. Of these, 10 examined all-cause cancer incidence or mortality, in 22 analyses. Of these, 11 analyses found a negative association (fewer cancers due to exposure), 9 found a positive one and 2 found no association. Only 2 studies reported statistically significant differences. Thus no clear association between water fluoridation and increased cancer incidence or mortality was apparent. Bone/joint and thyroid cancers were of particular concern because of fluoride uptake by these organs. Neither the 6 studies of osteosarcoma nor the 2 studies of thyroid cancer and water fluoridation revealed significant differences. Overall no association was detected between water fluoridation and mortality from any cancer. These findings were also borne out in the moderate-quality subgroup of studies.
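The vote counts reported in Steps 3 and 4 can be checked for arithmetic consistency; the labels below are ours:

```python
# Checking the tallies reported for the 26 studies with cancer outcomes.
quality = {"high": 0, "moderate": 8, "low": 18}
assert sum(quality.values()) == 26  # all 26 studies accounted for

# 10 of those studies examined all-cause cancer incidence or mortality,
# contributing 22 analyses in total.
analyses = {"negative": 11, "positive": 9, "no_association": 2}
assert sum(analyses.values()) == 22

# Only 2 analyses reported statistically significant differences.
significant = 2
assert significant <= sum(analyses.values())
```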

STEP 5: INTERPRETING THE FINDINGS

In the fluoridation example, the focus was on the safety of a community-based public health intervention. The generally low quality of available studies means that the results must be interpreted with caution. However, the elaborate efforts in searching an unusually large number of databases provide some safeguard against missing relevant studies. Thus the evidence summarized in this review is likely to be as good as it will get in the foreseeable future. Cancer was the harmful outcome of most interest in this instance. No association was found between exposure to fluoridated water and specific cancers or all cancers. The interpretation of the results may be generally limited because of the low quality of studies, but the findings for the cancer outcomes are supported by the moderate-quality studies.

After having spent some time reading and understanding the review, you are impressed by the sheer amount of published work relevant to the question of safety. However, you are somewhat disappointed by the poor quality of the primary studies. Of course, examination of safety only makes sense in a context where the intervention has some beneficial effect. Benefit and harm have to be compared to provide the basis for decision making. On the issue of the beneficial effect of public water fluoridation, the review 3 reassures you that the health authority was correct in judging that fluoridation of drinking water prevents caries. From the review you also discovered that dental fluorosis (mottled teeth) was related to concentration of fluoride. When the interest groups raise the issue of safety again, you will be able to declare that there is no evidence to link cancer with drinking-water fluoridation; however, you will have to come clean about the risk of dental fluorosis, which appears to be dose dependent, and you may want to measure the fluoride concentration in the water supply and share this information with the interest groups.

The ability to quantify the safety concerns of your population through a review, albeit from studies of moderate to low quality, allows your health authority, the politicians and the public to consider the balance between beneficial and harmful effects of water fluoridation. Those who see the prevention of caries as of primary importance will favour fluoridation. Others, worried about the disfigurement of mottled teeth, may prefer other means of fluoride administration or even occasional treatment for dental caries. Whatever the opinions on this matter, you are able to reassure all parties that there is no evidence that fluoridation of drinking water increases the risk of cancer.

With increasing focus on generating guidance and recommendations for practice through systematic reviews, healthcare professionals need to understand the principles of preparing such reviews. Here we have provided a brief step-by-step explanation of the principles. Our book 1 describes them in detail.

Methodology of a systematic review

Affiliations.

  • 1 Hospital Universitario La Paz, Madrid, España. Electronic address: [email protected].
  • 2 Hospital Universitario Fundación Alcorcón, Madrid, España.
  • 3 Instituto Valenciano de Oncología, Valencia, España.
  • 4 Hospital Universitario de Cabueñes, Gijón, Asturias, España.
  • 5 Hospital Universitario Ramón y Cajal, Madrid, España.
  • 6 Hospital Universitario Gregorio Marañón, Madrid, España.
  • 7 Hospital Universitario de Canarias, Tenerife, España.
  • 8 Hospital Clínic, Barcelona, España; EAU Guidelines Office Board Member.
  • PMID: 29731270
  • DOI: 10.1016/j.acuro.2018.01.010

Context: The objective of evidence-based medicine is to apply the best available scientific information to clinical practice. Understanding and interpreting the scientific evidence involves understanding the available levels of evidence, where systematic reviews and meta-analyses of clinical trials sit at the top of the levels-of-evidence pyramid.

Acquisition of evidence: The review process should be well developed and planned to reduce biases and eliminate irrelevant and low-quality studies. The steps for implementing a systematic review include (i) correctly formulating the clinical question to answer (PICO), (ii) developing a protocol (inclusion and exclusion criteria), (iii) performing a detailed and broad literature search and (iv) screening the abstracts of the studies identified in the search and subsequently of the selected complete texts (PRISMA).

Synthesis of the evidence: Once the studies have been selected, we need to (v) extract the necessary data into a form designed in the protocol to summarise the included studies, (vi) assess the biases of each study, identifying the quality of the available evidence, and (vii) develop tables and text that synthesise the evidence.
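The seven steps (i)–(vii) above can be represented as a simple progress checklist; the step wording is paraphrased from the abstract, and the tracking structure itself is purely illustrative.

```python
# The seven steps (i)-(vii), paraphrased from the abstract above, as a
# simple progress checklist; the tracking structure is illustrative.
REVIEW_STEPS = [
    "Formulate the clinical question (PICO)",
    "Develop a protocol (inclusion and exclusion criteria)",
    "Perform a detailed and broad literature search",
    "Screen abstracts, then selected full texts (PRISMA)",
    "Extract data into the form designed in the protocol",
    "Assess the biases of each study",
    "Develop tables and text that synthesise the evidence",
]

progress = {step: False for step in REVIEW_STEPS}
progress[REVIEW_STEPS[0]] = True  # question formulated
remaining = [s for s, done in progress.items() if not done]
print(f"{len(remaining)} of {len(REVIEW_STEPS)} steps remaining")  # 6 of 7 steps remaining
```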

Conclusions: A systematic review involves a critical and reproducible summary of the results of the available publications on a particular topic or clinical question. To improve scientific writing, the methodology for implementing a systematic review is presented here in a structured manner.

Keywords: Meta-analysis; Metaanálisis; Methodology; Metodología; Revisión sistemática; Systematic review.

Copyright © 2018 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.

  • Open access
  • Published: 03 June 2024

Interventions to increase vaccination in vulnerable groups: rapid overview of reviews

  • Gill Norman,
  • Maartje Kletter &
  • Jo Dumville

BMC Public Health volume 24, Article number: 1479 (2024)


Groups which are marginalised, disadvantaged or otherwise vulnerable have lower uptake of vaccinations. This differential has been amplified in COVID-19 vaccination compared to (e.g.) influenza vaccination. This overview assessed the effectiveness of interventions to increase vaccination in underserved, minority or vulnerable groups.

In November 2022 we searched four databases for systematic reviews that included RCTs evaluating any intervention to increase vaccination in underserved, minority or vulnerable groups; our primary outcome was vaccination. We used rapid review methods to screen, extract data and assess risk of bias in identified reviews. We undertook narrative synthesis using an approach modified from SWiM guidance. We categorised interventions as being high, medium or low intensity, and as targeting vaccine demand, access, or providers.

We included 23 systematic reviews, including studies in high and low or middle income countries, focused on children, adolescents and adults. Groups were vulnerable based on socioeconomic status, minority ethnicity, migrant/refugee status, age, location or LGBTQ identity. Pregnancy/maternity sometimes intersected with vulnerabilities. Evidence supported interventions including: home visits to communicate/educate and to vaccinate, and facilitator visits to practices (high intensity); telephone calls to communicate/educate, remind/book appointments (medium intensity); letters, postcards or text messages to communicate/educate, remind/book appointments and reminder/recall interventions for practices (low intensity). Many studies used multiple interventions or components.

There was considerable evidence supporting the effectiveness of communication in person, by phone or in writing to increase vaccination. Both high and low intensity interventions targeting providers showed effectiveness. Limited evidence assessed additional clinics or targeted services for increasing access; only home visits had higher confidence evidence showing effectiveness. There was no evidence for interventions for some communities, such as religious minorities which may intersect with gaps in evidence for additional services. None of the evidence related to COVID-19 vaccination where inequalities of outcome are exacerbated.

Prospero registration

CRD42021293355


Inequity in vaccination is a recognised public health issue which has been amplified by the COVID-19 pandemic.

This overview uses rapid but rigorous methods to systematically review evidence from over 20 systematic reviews of interventions for increasing vaccination in marginalised, disadvantaged or otherwise vulnerable groups.

We identify and evaluate evidence for low, medium and high intensity interventions targeting pull factors (increasing demand for vaccination); push factors (increasing access to vaccination); and vaccination providers.

We highlight the gaps in evidence for key interventions to improve access, for COVID-19 vaccination and for groups such as religious minorities.

Comparatively low vaccination rates in underserved groups are a recognised UK public health issue. Impacted groups include those who are socio-economically disadvantaged, people from black and minority ethnic backgrounds, and other groups who are marginalised, disadvantaged or otherwise vulnerable [ 1 ].

These known issues in vaccination inequalities were amplified during the COVID-19 pandemic. There is recent evidence that inequities in COVID-19 vaccination rates are even greater than in influenza vaccination, where vulnerability to disease may be similarly distributed in older people and those with pre-existing conditions. For example, in communities across Greater Manchester, a city-region with approximately 2.8 million people and considerable population diversity, inequality of vaccination relative to white British residents was greater for all except one of 16 minority ethnic groups for the first dose of COVID-19 vaccine than it was for influenza vaccination [ 2 ].

People from minoritised and disadvantaged groups are disproportionately likely to experience negative outcomes from COVID-19 infection, including hospitalisation, intensive care unit admission, and death [ 3 ]. Given that inequality of vaccination is disproportionately concentrated among those at greatest risk from the disease, the need for interventions which can address vaccination inequity is particularly acute in the context of COVID-19 vaccination campaigns, including annual booster campaigns for older or clinically vulnerable people. Learning, however, is also relevant to wider vaccination campaigns.

A 2015 overview of reviews identified 15 systematic reviews of strategies for so-called vaccine hesitancy, but few interventions which specifically targeted those who were labelled vaccine hesitant [ 4 ]. Most of the included reviews also focused on childhood vaccination campaigns. This review is now substantially out of date, particularly in the context of the COVID-19 pandemic, and the exacerbation of vaccination inequity seen in its early stages. Our preparatory work highlighted further relevant literature and reinforced the need for a new systematic overview of reviews focused on interventions to reduce vaccination inequalities in underserved groups.

This rapid overview of reviews was undertaken to identify and assess the evidence for effectiveness of interventions to increase vaccination in underserved, minority or vulnerable groups.

The protocol for this overview of reviews was registered on Prospero: CRD42021293355 [ 5 ]. We adapted appropriate rapid systematic review methods for this rapid overview and reported it following PRIOR reporting guidelines where possible [ 6 ].

Inclusion criteria

We included systematic reviews which contained randomised controlled trials (RCTs) of interventions for increasing vaccination in groups of people who were considered to be underserved, minoritised or otherwise vulnerable in the context of the vaccination activities investigated. We did not otherwise limit eligibility and accepted authors’ definitions of these groups. We did however consider older age to be a source of vulnerability as well as groups which may be marginalised based on ethnicity, socioeconomic status, place of residence, faith, or LGBTQ + identity. We treated pregnancy/maternity as an additional vulnerability but not itself as a sufficient reason to consider populations vulnerable (so we included interventions targeted at pregnant women/new mothers eligible for other reasons, but not interventions for all pregnant women or families with young children).

Systematic reviews were defined as reviews which included as a minimum: a systematic search; specific inclusion criteria; and an identifiable set of included studies. We only included English-language reviews; reviews in other languages would have been noted but not extracted. We included reviews that contained RCTs evaluating any intervention aimed at increasing vaccination rates in groups of interest, even if reviews were not exclusively aimed at these groups. Interventions could be delivered in any clinical or community setting and in any country, although we considered the relevance of settings in our synthesis. We included any comparator including alternative interventions, no intervention, or provision of usual healthcare/standard vaccination campaigns.

Our primary outcome was vaccination, broadly defined as we anticipated a range of reported measures. In the absence of evidence for vaccinations we would have considered measures such as willingness/intention to vaccinate and knowledge about vaccinations.

We searched the Cochrane Database of Systematic Reviews, Ovid Medline, Ovid Embase and Ebsco CINAHL from inception to 25 November 2022 (updating an initial search in December 2021) without language or date restrictions. For search strategies see supplementary information (Appendix 1 ). We also checked references of included studies. Search results were deduplicated using Endnote X20 [ 7 ].

Selection of studies

We used Rayyan to screen search records [ 8 ]. To increase rapidity, 10% of titles and abstracts were screened in duplicate by two independent researchers for calibration and consistency. Remaining citations were single screened, with a second researcher consulted in cases of uncertainty; disagreements were resolved through discussion. Full texts were obtained for all potentially eligible studies. After initial single screening of these full texts, all reviews that were not clearly included or excluded were screened by a second independent researcher; because of the nuanced nature of the inclusion criteria in relation to vulnerable groups, this applied to most reviews. All relevant reviews were included; overlap in included studies was managed post-inclusion.
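A calibration exercise like the 10% dual screening described above is often summarised with an agreement measure. The sketch below computes simple percent agreement on invented screening decisions; the overview itself reports resolving disagreements by discussion rather than computing a statistic.

```python
# Hypothetical calibration check for the 10% dual-screening sample.
# Decisions below are invented for illustration; the overview resolved
# disagreements by discussion rather than computing a statistic.

def percent_agreement(a: list, b: list) -> float:
    """Fraction of records on which two screeners made the same decision."""
    assert len(a) == len(b), "screeners must rate the same records"
    return sum(x == y for x, y in zip(a, b)) / len(a)

reviewer_1 = ["include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude"]
print(f"agreement: {percent_agreement(reviewer_1, reviewer_2):.0%}")  # agreement: 80%
```

A chance-corrected statistic such as Cohen's kappa is preferable when inclusion rates are very unbalanced, as is typical at the title/abstract stage.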

Data extraction and assessment of risk of bias

We piloted a bespoke Microsoft Excel data extraction form on a small sample of reviews. After this one researcher extracted the data and a second was consulted in cases of uncertainty. Extraction focused on review and study level data and key review findings. Some reviews contained only a portion of studies eligible for our overview e.g. some included reviews contained RCTs and non-RCTs or RCTs assessing irrelevant interventions or populations. In these cases, only relevant data were extracted. We extracted the following: number and size of relevant RCTs and their intervention characteristics; vaccination types; participants and vulnerabilities; outcome data; results of quality appraisal, risk of bias and/or GRADE assessment (Grading of Recommendations Assessment, Development and Evaluation) [ 9 ].

We assessed risk of bias in reviews using ROBIS; one researcher performed the assessments and a second checked these [ 10 ].

Data synthesis

We followed recommendations of the Synthesis Without Meta-analysis (SWiM) approach in the synthesis of data, adapted to our rapid overview of reviews [ 11 ]. We developed the following framework to support narrative synthesis of findings (this is an expansion and codification of the approach planned in the protocol).

We focused on the primary outcome of vaccination (including vaccination, completion of vaccination schedules, and being up to date with vaccinations); outcomes such as willingness/intention to vaccinate or knowledge about vaccination were considered indirectly relevant.

We first adapted the approach of Ward 2012 [ 12 ] and grouped interventions into three sets based on the type and purpose of the intervention: interventions to increase demand for vaccination (targeting pull factors); interventions to increase access to vaccination (targeting push factors); and interventions targeting vaccination providers. We considered that interventions which were primarily provider-focused would also fall into the other two categories through their impact on patients, but judged the provider focus important enough to treat separately. Within these sets of interventions we then followed Thomas 2018 [ 13 ] and considered the intensity of the intervention delivery as high intensity (e.g. home visits), medium intensity (e.g. telephone calls), or low intensity (e.g. text messages). This intensity categorisation was based on resource requirements for providers rather than on patients’ perception of the intensity of receiving the intervention. When interventions were multi-component or multi-level we noted this. Two researchers agreed on groupings by intervention purpose and intensity and resolved disagreements through discussion.
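The two-axis grouping (purpose crossed with intensity) can be sketched as a small lookup structure. The example interventions are drawn from this overview, but the data layout itself is illustrative.

```python
from collections import defaultdict

# The two grouping axes used in the synthesis: purpose (adapted from
# Ward 2012) and delivery intensity (following Thomas 2018).
PURPOSES = ("demand", "access", "provider")
INTENSITIES = ("high", "medium", "low")

# Example interventions from the overview; the data layout is illustrative.
examples = [
    ("home visit to communicate/educate", "demand", "high"),
    ("home visit to vaccinate", "access", "high"),
    ("telephone reminder call", "demand", "medium"),
    ("text message reminder", "demand", "low"),
    ("facilitator visits to practices", "provider", "high"),
    ("centralised reminder/recall system", "provider", "low"),
]

grid = defaultdict(list)
for name, purpose, intensity in examples:
    assert purpose in PURPOSES and intensity in INTENSITIES
    grid[(purpose, intensity)].append(name)

print(grid[("demand", "high")])  # ['home visit to communicate/educate']
```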

We mapped RCT overlap between reviews using GROOVE and paid particular attention to this issue of overlapping primary studies for interventions where contributing reviews showed overlap, in order to reduce the risk of double weighting data [ 14 ].
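GROOVE quantifies overlap between reviews using the corrected covered area (CCA) statistic of Pieper and colleagues. A minimal sketch, assuming the standard CCA formula:

```python
# Corrected covered area (CCA): the pairwise overlap statistic that
# GROOVE reports (Pieper et al. 2014).
#   N = total occurrences of primary studies across the reviews,
#   r = number of distinct primary studies (the "index" publications),
#   c = number of reviews being compared.

def cca(N: int, r: int, c: int) -> float:
    """Proportion of repeated study occurrences, corrected for review count."""
    return (N - r) / (r * c - r)

# Example: two reviews with 10 distinct RCTs between them, 4 of which
# appear in both, give N = 14 occurrences in total.
overlap = cca(N=14, r=10, c=2)
print(f"CCA = {overlap:.0%}")  # 40%, "very high" on the commonly used scale (>15%)
```

Flagging pairs of reviews with high CCA, as in Fig. 3, is what lets an overview avoid double-counting the same RCTs when weighing evidence.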

We considered differences in findings between the countries where studies were undertaken, particularly noting whether the studies were undertaken in high income countries or in low or middle income countries (LMIC). This included consideration of the specific populations and groups targeted.

Assessing confidence in synthesised findings

We assessed confidence in findings using a GRADE-informed approach [ 9 ]. One researcher made judgements and consulted a second in cases of uncertainty. We made initial judgements for each intervention in each review then considered evidence across the overview, taking into account overlapping data [ 14 ]. We assigned greatest confidence to interventions where there was consistent evidence for effectiveness from reviews with low risk of bias, which provided either GRADE assessment or reported evidence from larger RCTs that were described in reviews as well-conducted with clearly reported effect estimates. There is necessarily more subjectivity and estimation in these judgements than in GRADE because of often incomplete information; a formal GRADE judgement would overstate our certainty about evidence quality [ 9 ]. We have used the terms “higher, medium and lower confidence” to denote these judgements.

Results of the search

We identified 674 records following deduplication and assessed 88 full texts. We included 23 reviews [ 13 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 ]. (Fig.  1 ). Sixty-five full texts were excluded for the following reasons: not a systematic review; a review protocol; an earlier version of a Cochrane review; not relevant to interventions to improve vaccination-related outcomes; did not include any relevant RCTs. An excluded studies list is available on request.
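The flow numbers reported above can be cross-checked with simple arithmetic:

```python
# Cross-check of the record flow reported above.
records_after_dedup = 674
full_texts_assessed = 88
reviews_included = 23
full_texts_excluded = 65

# Full-text decisions must account for every full text assessed.
assert reviews_included + full_texts_excluded == full_texts_assessed
print(records_after_dedup - full_texts_assessed)  # 586 excluded at title/abstract stage
```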

Figure 1. Flow diagram for records identified for the review

Characteristics of included reviews

Characteristics of included reviews are summarised in Tables  1 and 2 . Reviews were published between 1998 and 2022; most were recent with nine published in 2021 or 2022 and 17 since 2017. Fifteen reviews included studies from high income countries and eighteen included either adults or adolescents (Table  1 ). Eleven reviews looked at multiple types of vaccination (of which five focused on childhood vaccinations); six reviews looked at influenza vaccination, five at HPV vaccination and one at hepatitis B vaccination (Table  2 ).

Underserved groups represented included:

Low socioeconomic status (9 reviews).

Ethnic minority or first nations people (10 reviews).

Migrant or refugee status (2 reviews).

Age (9 reviews).

Location (6 reviews).

LGBTQ identity (1 review).

Other (6 reviews).

These vulnerabilities co-occurred in many studies; in three reviews socioeconomic status, age or ethnic minority status co-occurred with the additional vulnerability of pregnancy/maternity (Table  2 ). We did not identify any reviews with RCTs targeting faith groups.

Risk of bias

The ROBIS assessment found that nine reviews had low, six unclear, and eight high overall risk of bias (Fig.  2 ; Table  2 ). Full responses to signalling questions are available on request.

Figure 2. Summary of ROBIS assessments for included reviews

Mapping of included RCTs using GROOVE identified 16 pairs of reviews with moderate (six), high (five) or very high (five) overlap (Fig.  3 ) [ 14 ]. Nineteen individual reviews contributed to the overlap; three reviews overlapped at least moderately with three other reviews, and seven with two other reviews. Nine reviews [ 16 , 19 , 20 , 21 , 24 , 26 , 27 , 29 , 32 ] were linked through the three reviews with the highest number of overlaps [ 20 , 24 , 26 ]. Because overlap was substantial, we were careful to consider evidence from individual RCTs and paid particular attention to overlap for interventions in the reviews with the highest overlap, which included home visiting and various educational and communication interventions.

Figure 3. Summary of GROOVE assessment of overlapping RCTs in included reviews

Effectiveness of interventions

Unless otherwise stated, all interventions are compared with usual care, the outcome is vaccination, and effects favour interventions. Where we have medium rather than higher confidence, this is because of combinations of concerns around one or more of: reporting, study quality, inconsistency of results, or limited numbers of participants. We have reported reviews’ GRADE assessments where these were available. Full documentation of evidence for interventions is in Supplementary Information (Appendix 2 ). Below we narratively summarise the interventions for which we have higher or medium confidence, grouped by intervention intensity, purpose and type. For interventions where we have lower confidence see Tables  3 and 4 . Table  3 summarises all interventions where evidence identified a benefit of the intervention; Table  4 summarises those interventions where current evidence did not identify a benefit.

High-intensity interventions: increasing demand for vaccination

Home visits for communication or education.

We have higher confidence that home visits for the purposes of communication by health professionals, lay health workers, volunteers and students increase vaccination in underserved groups (11 reviews) [ 13 , 18 , 19 , 20 , 23 , 24 , 25 , 26 , 28 , 32 , 35 ]. We drew primarily on evidence from two Cochrane reviews that found moderate certainty evidence for, respectively, influenza vaccination in older adults [ 13 ] and childhood vaccinations in economically disadvantaged families visited by lay health workers [ 26 ]; the evidence was broadly consistent across the other reviews, which included a variety of vulnerable groups. Home visits were also the highest intensity component of interventions using escalating intensity of reminders, and were used in multicomponent interventions; both were effective for disadvantaged children and adolescents. Evidence for home visits compared with postal reminders was inconsistent [ 20 ].

We have medium confidence that community volunteers or pharmacists advocating for vaccination may increase vaccination in some underserved groups [ 31 , 32 ]. Each review contained a single relevant medium-sized or large RCT in different groups (older adults and children in urban/disadvantaged groups respectively).

Community partnership and outreach

We have medium confidence in the effectiveness of community partnership and outreach within multicomponent interventions (four reviews). This includes outreach as part of a multicomponent intervention; [ 28 ] lay health workers leading focus groups (groups cascade information to the community; moderate certainty evidence); [ 20 ] and community involvement in motivating vaccine acceptance [ 32 ], and ensuring relevance of reminders [ 18 ].

School and other non-home-based in person educational interventions

Eight reviews contributed evidence for varied interventions [ 16 , 18 , 19 , 21 , 29 , 32 , 34 , 36 ]. We have medium or lower confidence in this evidence. Reviews found school-based interventions do not currently have clear evidence of effectiveness in impacting vaccination in underserved groups [ 16 , 18 ]. Educational sessions delivered to adults in a range of settings including English as a second language (ESL) classes [ 36 ] and community venues showed mixed results, with some RCTs reporting benefits. Interventions in LMIC contexts found positive effects from interventions targeting parents (pictorial information or redesigned vaccination cards alongside a verbal educational message) [ 19 , 32 ], or brief interventions for adolescents [ 29 ]. Evidence from high income contexts primarily related to HPV vaccination for adolescent girls in the US and mostly did not show evidence of an effect [ 16 , 18 , 21 , 34 ]. RCTs in two reviews found benefits in increasing vaccination via interventions for mother-daughter dyads in minority ethnic communities [ 21 , 34 ].

High intensity interventions: increasing access to vaccination

Home visits for vaccination.

We have higher confidence that vaccination during home visits (delivered by health care professionals, students or community healthcare workers) increases vaccination compared to standard care (invitations to clinic) [ 13 , 19 , 32 ]. Most evidence comes from the Cochrane review in influenza vaccination for older adults, (high certainty GRADE assessment based on two RCTs) [ 13 ], but there is also evidence from childhood vaccination in LMIC.

Using different/additional locations or services or staff to deliver vaccinations

We have medium confidence in effectiveness of additional clinics as part of a “four pillars” multicomponent intervention [ 18 ]. We found no evidence for them as standalone interventions. We also have medium confidence in using routine/general clinic visits to vaccinate within a multilevel, multicomponent intervention [ 28 ]. For both interventions our confidence is reduced because the impact of the availability component cannot be isolated. Using group visits of participants to clinics may also be effective (moderate certainty evidence from a Cochrane review including one RCT) [ 13 ]. We have medium confidence in pharmacist-initiated vaccination programmes involving use of pharmacy-based services and/or delivery by pharmacy staff [ 22 , 31 , 35 ].

Provider focused interventions

Facilitators for healthcare professionals.

We have higher confidence that facilitator involvement with healthcare practices increases vaccination; this is based on moderate certainty evidence from a Cochrane review of influenza vaccination for older adults [ 13 ]. Two cluster RCTs targeted several goals using multiple strategies over 12 to 18 months, including practice visits by facilitators and approaches such as baseline audit, ongoing feedback, consensus building, and follow-up. One study also used educational materials for professionals and patients. Both of these studies showed increased numbers of eligible older people vaccinated, while a smaller study of a facilitated educational group plus audit did not find a clear effect [ 13 ].

Medium-intensity interventions: increasing demand for vaccination

Telephone calls for communication and education.

We have higher confidence that telephone calls to remind people about booked appointments, to prompt them to book appointments, and to provide information about vaccination processes increase vaccination (six reviews) [ 13 , 16 , 18 , 26 , 28 , 35 ]. We have medium confidence in the effectiveness of phone calls to adolescents as well as their parents, and of telephone calls as part of multicomponent interventions [ 16 , 18 , 28 ], including targeted phone calls and phone calls in an intervention with escalating intensity of reminders [ 18 ].

Medium-intensity interventions: increasing access to vaccination

Included reviews did not report on medium intensity interventions primarily aiming to increase access to vaccination; however some provider-focused interventions (e.g. changing provider systems to allow use of routine visits for vaccination) are likely to have increased access to vaccination.

Medium-intensity interventions: provider-focused interventions

Case management.

We have medium confidence in case management within a multicomponent intervention: this included feedback on missed opportunities to vaccinate, tracking, triage and flagging of vaccination status [ 18 , 28 ].

Routine visits

We have medium confidence in using routine visits to healthcare providers or clinics to vaccinate as part of a multicomponent intervention for childhood vaccinations [ 28 ], and lower confidence in this as a stand-alone intervention in older adults [ 35 ].

Low-intensity interventions: increasing demand for vaccination

Various methods involving written material for communication and education were used. Both text messages and postal communications were used as stand-alone interventions and as part of multicomponent or escalating intensity interventions, where it is harder to determine their individual impact [ 18 , 26 ]. These multicomponent or escalating interventions showed evidence of effectiveness; we summarise evidence for the different delivery methods here.

Text messages

We have higher confidence that single or multiple text message reminders to attend or book appointments (multiple trials in seven reviews) increase vaccination [ 16 , 17 , 18 , 19 , 27 , 28 , 33 ]. Text messages for appointment reminders may be more effective than postal communication [ 18 ]. Unlike phone calls and postal communication there was limited evidence for an impact of text messages in older populations, where risk of digital exclusion may be higher. Evidence for effectiveness of different types of text messages such as using different messages or interactivity is limited (Table  4 ).

Emails, online messages, mass media and videos

We have medium confidence that individually tailored reminder emails are effective [ 27 ], although there is less evidence for email and online messages than other forms of written messaging, with most examples being elements of wider communication strategies [ 16 , 27 , 28 ]. We only had lower confidence in assessments of video-based messaging [ 16 , 17 , 27 , 29 , 34 ], online messages [ 27 ], and mass media [ 29 , 36 ]. In each case the evidence comprised single studies with limitations of reporting, power and study methods.

Postal communication reminders

We have higher confidence that postal reminders to attend or book appointments, and for providing information about vaccination are effective (six reviews). Postcards and letters were the most used; [ 13 , 16 , 18 , 26 , 28 , 30 ] postcards were identified as particularly effective [ 13 ]. Personalised letters may be more effective than generic ones [ 28 ]. We have medium confidence in evidence for sending letters in an appropriate community language (e.g. Spanish for US Hispanic communities) [ 27 ], and for use of postcards designed to use (accessible) “universal language” [ 18 ].

Written material given in person

We have medium confidence that brief paper-based information given in person increases vaccination (three reviews). This included redesigned immunisation cards with the next appointment date in large print, with or without a short verbal intervention; use of pictorial information cards as an additional intervention during home visits [ 19 , 32 ]; and providing a pamphlet of information with or without a short verbal intervention [ 17 ].

Low intensity interventions: increasing access to vaccination

Included reviews did not report on low intensity interventions aiming to increase access to vaccination.

Low intensity interventions: provider-focused interventions

Centralised systems

We have higher confidence in the effectiveness of centralised reminder/recall systems compared to practice-based reminder/recall systems in increasing the number of children up to date on vaccinations [ 18 , 33 ].

We have higher confidence in reminders to physicians to vaccinate [ 13 ], and medium confidence in the use of computerised reminders to providers to vaccinate (four reviews) [ 16 , 18 , 33 , 35 ]. These are reminders which are sent or flagged to health care professionals to alert them to the need to vaccinate, rather than reminders to patients to attend for vaccination.

Personalised electronic health records

Personalised electronic health records were not evaluated as a standalone intervention, but we have medium confidence that using them, together with their electronic messaging features, to educate, send reminders and schedule appointments may increase vaccination relative to control groups with record access only or with no access, including where postal reminders were used [ 15 ].

Other low intensity approaches

There is low to moderate certainty evidence from the Cochrane review in influenza vaccination for older people that the following may be effective: payments to physicians; reminding physicians to vaccinate all patients; posters in clinics presenting vaccination rates and encouraging competition between doctors; chart reviews and benchmarking to rates achieved by the top 10% of physicians [ 13 ].

Summary of the evidence

We identified 23 systematic reviews which included RCTs of interventions to increase vaccination in vulnerable groups. Of these, 18 reviews were published after the 2015 overview of reviews identified in our scoping work [ 4 ]. In this overview we have summarised randomised evidence for high, medium and low intensity examples of interventions to increase demand for vaccination; interventions to increase access to vaccination; and provider-focused interventions.

The best represented interventions targeted vaccination demand. We had higher confidence in the effectiveness of high, medium and low intensity communication interventions: home visits, telephone calls and text messages respectively. We had higher confidence in home visits for vaccination but medium confidence in the evidence for other interventions for increasing access, including additional clinics. We did not identify patient-focused medium or low intensity interventions to increase access to vaccination. However, there were provider-focused interventions which would likely have increased access, such as changing systems to allow vaccination on routine or unrelated visits. For provider-focused interventions we had higher confidence in facilitator visits to practices (high intensity, incorporating lower intensity components) and centralised reminder/recall systems (low intensity), and medium confidence in case management (medium intensity). Where interventions did not show evidence of an effect we typically had lower confidence in the evidence.

Strengths and limitations: review process

We searched multiple databases using a strategy designed by an information specialist and updated the search in November 2022 to capture rapidly developing literature in the context of the COVID-19 pandemic. We have surveyed the literature published since then to further contextualise the relevance of the review. We limited our overview to reviews published in English, but the eight identified reviews in other languages would have been excluded for other reasons. We undertook this overview rapidly to inform work to increase vaccination uptake among vulnerable groups in Greater Manchester. We therefore did not screen all records in duplicate, but we used duplicate screening for a sample of records and for all records where there was uncertainty; two reviewers evaluated most full text records because decisions were nuanced; and we checked samples of data extraction. Two researchers agreed risk of bias and confidence assessments and undertook ROBIS assessments independently. We prespecified our synthesis approach to categorising interventions and strength of evidence and informed this using GROOVE mapping of overlap between reviews.

While our approach to categorising evidence was based on those of other reviews [ 12 , 13 ], it was necessarily subjective to some degree. We partially mitigated this by having two reviewers involved in the categorisation process and discussing disagreements within the review team. However, many interventions will contain elements of more than one category even when they are not multicomponent. In particular, interventions which involve providers sending recalls or reminders to patients can be conceived of both as intended to increase demand for vaccination (targeting pull factors) and as provider-focused. Categorisation of these was based on the intervention description and on whether the focus was on the communication with the patient or on the providers’ systems to enable it. We acknowledge that this distinction is to some degree arbitrary. Finally, we recognise that our categorisation of intensity is based, as in the review that used it previously [ 13 ], on the resource implications of the intervention for providers. While it is less resource intensive to send text messages than to make phone calls to patients, patients may experience (for example) a series of repeated and tailored text messages as a more intensive intervention than a single generic phone call. The patient experience of intervention intensity is outside the scope of this work but would be worth exploring further.

Our dependence on the conduct and reporting of included reviews limited us in multiple respects. In some reviews information was extremely limited, and we did not have capacity to directly check the relevance of primary studies for study design, intervention, population and outcomes where this was not apparent. In one review, we were unsure how many relevant RCTs were included [ 30 ]. This may have led to the exclusion of some relevant evidence, although overlap between reviews means evidence missed in one review may be identified elsewhere. We also did not have capacity to check risk of bias assessments or to conduct them where they were absent; lack of assessments reduced our confidence in the evidence.

Strengths and limitations: scope of review

We limited this overview to randomised evidence and so did not include interventions evaluated only by non-randomised studies, and some included interventions are represented only by small numbers of RCTs or by RCTs with small numbers of participants. We are conscious that some important interventions, such as the class of societal interventions identified in Thomas (2018), have thereby been excluded entirely [ 13 ]. This decision also meant that we included only very limited evidence for people from LGBTQ + communities; only one review included relevant RCTs [ 27 ]. We accept that restricting to RCTs is also likely to have excluded interventions evaluated in other ways; such designs can be appropriate in the context of work undertaken in partnership with a community. We have undertaken such work ourselves in Greater Manchester [ 37 ], informed in part by this review, and suggest that our review be read in the context of reports of this kind of work, which may be found in the grey literature as much as the published literature.

We have not identified RCT evidence that was not included in a systematic review indexed by 2022; this is a necessary constraint of a review of reviews. However, we updated the Medline search in April 2024 to assess the impact of this cutoff on the review. A large number of reviews published or indexed after December 2022 evaluated vaccination uptake, and barriers and facilitators to uptake, in both general populations and minority or otherwise vulnerable groups in the context of COVID-19 vaccination programmes. However, only three reviews would have been eligible for inclusion in our review [ 38 , 39 , 40 ].

The most substantive evidence was supplied by a review of behaviour change techniques in minority ethnic populations. This included ten RCTs and reported that across all study designs and multiple target vaccines the most commonly used intervention functions were education, persuasion and enablement. Effective interventions were multicomponent and tailored to the target population, while awareness raising and community organisation involvement were also associated with positive effects [ 39 ]. We suggest that this review be read in conjunction with our overview. Two other reviews each included a small number of relevant RCTs. One contained a single relevant RCT relating to willingness to receive COVID-19 vaccination among black and minority ethnic people in the UK and explored the effectiveness of exposure to different forms of written information [ 40 ]. Another contained two relevant RCTs targeting influenza vaccination in older adults with or without additional markers of vulnerability or marginalisation [ 38 ]. While we would include these reviews in an update of this overview we do not consider that they are likely to substantively change our findings.

We also identified very limited evidence relating to financial incentives for vaccination; this was always part of a wider intervention, and we did not deal with it separately. Free vaccination was evaluated but is not included because our overview was undertaken to inform COVID-19 vaccination work in the UK, where universal free vaccination was available.

We excluded several recent scoping reviews, which may be more up to date than systematic reviews. The most substantive is a Cochrane scoping review of interventions for COVID-19 vaccine hesitancy [ 41 ]. This was not limited to minoritised or vulnerable groups although some included studies focused on them. Of the 61 completed studies identified, none were systematic reviews and 45 were RCTs; these focused on online communication interventions posing hypothetical decision-making scenarios. Thirty-five ongoing studies (29 RCTs) mainly evaluated education or communication interventions. An update or subsequent systematic review may identify completed trials relevant to our overview.

Applicability

Identified evidence relates to specific groups and its transferability to other marginalised or vulnerable groups is not evidenced. A substantial amount of the evidence comes from people who are vulnerable due to older age. We did not identify any evidence relating to minoritised faith groups. Some evidence related to people from minority ethnic groups, who may also be members of minority faith groups, but interventions were not targeted on this basis and most evidence related to African or Hispanic Americans who are often members of majority faiths. We therefore did not find evidence for interventions such as women-only vaccination sessions targeted at Muslim or Orthodox Jewish communities.

Conclusions and further research

Considerable evidence supports the probable effectiveness of communication in person, by phone or in writing to increase vaccination; this includes evidence from a Cochrane review with an overall GRADE assessment of moderate certainty. Both high and low intensity interventions targeting providers showed increases in vaccination compared to standard care. However, our overview highlighted the comparatively very limited evidence assessing key strategies to increase access, such as additional clinics or targeted services. Only the very high intensity intervention of home visits had higher confidence evidence of effectiveness. None of the evidence related to COVID-19 vaccination, where inequalities of outcome are exacerbated.

There was no evidence for interventions for religious minority communities; this may intersect with gaps in evidence for additional services. Systematic reviews looking specifically at interventions targeting these communities may be needed. We identified very limited evidence for online messaging, video messaging or mass media messaging. Following the COVID-19 pandemic, these approaches are not yet well represented in systematic reviews, and a systematic review of primary evidence for these types of communication may also be warranted.

We identified many reviews of barriers and facilitators for vaccination, often relevant to vulnerable or minoritised groups. We also identified several reviews of vaccination programmes. Both sets of reviews would be of interest to those designing interventions to increase vaccination uptake. We did not identify overviews of reviews in either area and there may be merit in undertaking these. We identified multiple, often overlapping reviews in a rapidly growing research field. It may therefore be useful to establish a living systematic review of trials, and to encourage trialists to collaborate actively with the reviewers.

Data availability

All data were previously published; full search strategies and data coding used to support the synthesis are provided in supplementary material or in the main text. Additional data are available on request.

Butler R, MacDonald NE, SAGE Working Group on Vaccine Hesitancy. Diagnosing the determinants of vaccine hesitancy in specific subgroups: the Guide to Tailoring Immunization Programmes (TIP). Vaccine. 2015;33(34):4176–9.

Watkinson RE, et al. Ethnic inequalities in COVID-19 vaccine uptake and comparison to seasonal influenza vaccine uptake in Greater Manchester, UK: a cohort study. PLoS Med. 2022;19(3):e1003932.

Disparities in the risk and outcomes of COVID-19. Public Health England; 2020. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/908434/Disparities_in_the_risk_and_outcomes_of_COVID_August_2020_update.pdf .

Dubé E, Gagnon D, MacDonald NE. Strategies intended to address vaccine hesitancy: review of published reviews. Vaccine. 2015;33:4191–203.

Norman G, Kletter M, Dumville J. CRD42021293355: Rapid review of reviews of interventions for vaccine hesitancy in underserved, minority or vulnerable groups. 2021.

Gates M, et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ. 2022;378:e070849.

The EndNote Team. EndNote. Philadelphia, PA: Clarivate; 2013.

Ouzzani M, et al. Rayyan—a web and mobile app for systematic reviews. Syst Reviews. 2016;5(1):210.

Guyatt G, Oxman A, Akl E. GRADE guidelines: 1. Introduction-GRADE evidence profiles and Summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.

Whiting P, Savović J, Higgins JP. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Campbell M, McKenzie JE, Sowden A. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368:l6890.

Strategies to improve vaccination uptake in Australia, a systematic review of types and effectiveness… [corrected] [published erratum appears in Aust N Z J Public Health 2012;36(5):490]. Aust N Z J Public Health. 2012;36(4):369–77.

Thomas RE, Lorenzetti DL. Interventions to increase influenza vaccination rates of those 60 years and older in the community. Cochrane Database Syst Reviews. 2018;5:CD005188.

Pérez-Bracchiglione J, et al. Graphical representation of overlap for OVErviews: GROOVE tool. Res Synthesis Methods. 2022;13(3):381–8.

Balzarini F, et al. Does the use of personal electronic health records increase vaccine uptake? A systematic review. Vaccine. 2020;38(38):5966–78.

Brandt HM, et al. A narrative review of HPV vaccination interventions in rural U.S. communities. Prev Med. 2021;145:106407.

Callahan AG, et al. Racial disparities in influenza immunization during pregnancy in the United States: a narrative review of the evidence for disparities and potential interventions. Vaccine. 2021;39(35):4938–48.

Crocker-Buque T, Edelstein M, Mounier-Jack S. Interventions to reduce inequalities in vaccine uptake in children and adolescents aged < 19 years: a systematic review. J Epidemiol Community Health. 2017;71(1):87–97.

Crocker-Buque T, et al. Immunization, urbanization and slums - a systematic review of factors and interventions. BMC Public Health. 2017;17(1):556.

Glenton C, et al. Can lay health workers increase the uptake of childhood immunisation? Systematic review and typology. Tropical Med Int Health. 2011;16(9):1044–53.

Gopalani SV, et al. Barriers and Factors Associated with HPV Vaccination among American indians and Alaska Natives: a systematic review. J Community Health. 2022;47(3):563–75.

Isenor JE, et al. Impact of pharmacists as immunizers on vaccination rates: a systematic review and meta-analysis. Vaccine. 2016;34(47):5708–23.

Kaufman J, et al. Face-to-face interventions for informing or educating parents about early childhood vaccination. Cochrane Database Syst Rev. 2018;(5).

Kendrick D, et al. The effect of home visiting programmes on uptake of childhood immunization: a systematic review and meta-analysis. J Public Health Med. 2000;22:90–8.

Lambert JF, et al. Reducing burden from respiratory infections in refugees and immigrants: a systematic review of interventions in OECD, EU, EEA and EU-applicant countries. BMC Infect Dis. 2021;21(1):872.

Lewin S, et al. Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database Syst Rev. 2010;(3).

Lott BE, et al. Interventions to increase uptake of human papillomavirus (HPV) vaccination in minority populations: a systematic review. Prev Med Rep. 2020;19:101163.

Machado AA, et al. Effective interventions to increase routine childhood immunization coverage in low socioeconomic status communities in developed countries: a systematic review and critical appraisal of peer-reviewed literature. Vaccine. 2021;39(22):2938–64.

Mogaka EN, et al. Effectiveness of an educational intervention to increase human papillomavirus knowledge in high-risk populations: a systematic review. Univ Tor Med J. 2019;96(3):41–7.

Mohammed H, et al. A rapid global review of strategies to improve influenza vaccination uptake in Australia. Hum Vaccines Immunotherapeutics. 2021;17(12):5487–99.

Murray E, et al. Impact of pharmacy intervention on influenza vaccination acceptance: a systematic literature review and meta-analysis. Int J Clin Pharm. 2021;43(5):1163–72.

Nelson KN, et al. Assessing strategies for increasing urban routine immunization coverage of childhood vaccines in low and middle-income countries: a systematic review of peer-reviewed literature. Vaccine. 2016;34:5495–503.

Odone A, et al. Effectiveness of interventions that apply new media to improve vaccine uptake and vaccine coverage. Hum Vaccines Immunother. 2015;11(1):72–82.

Rani U, et al. Public Education Interventions and Uptake of Human Papillomavirus Vaccine: a systematic review. J Public Health Manage Pract. 2022;28(1):E307–15.

Sarnoff R, Rundall T. Meta-analysis of effectiveness of interventions to increase influenza immunization rates among high-risk population groups. Med Care Res Rev. 1998;55(4):432–56.

Vedio A, et al. Improving access to health care for chronic hepatitis B among migrant Chinese populations: a systematic mixed methods review of barriers and enablers. J Viral Hepatitis. 2017;24(7):526–40.

Bradley F, et al. Optimising Targeted Vaccination Activity in Greater Manchester. NIHR Applied Research Collaboration Greater Manchester; 2023. https://arc-gm.nihr.ac.uk/media/Resources/ARC/Evaluation/NIPP%20-%20Vaccine%20Optimitsation/NIPP%20Report%20-%20FINAL.pdf .

Gobbo ELS, et al. Do peer-based education interventions effectively improve vaccination acceptance? A systematic review. BMC Public Health. 2023;23(1):1354.

Ekezie W, et al. A systematic review of behaviour change techniques within interventions to increase vaccine uptake among ethnic minority populations. Vaccines (Basel). 2023;11(7).

Hussain B, et al. Overcoming COVID-19 vaccine hesitancy among ethnic minorities: a systematic review of UK studies. Vaccine. 2022;40(25):3413–32.

Andreas M, et al. Interventions to increase COVID-19 vaccine uptake: a scoping review. Cochrane Database Syst Rev. 2022;(8).

Acknowledgements

The authors are grateful to Sophie Bishop for designing and implementing the search strategy.

This research was funded by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration Greater Manchester (ARC-GM); (funding award NIHR200174). The views expressed in this publication are those of the authors and not necessarily those of the National Institute for Health and Care Research or the Department of Health and Social Care.

Author information

Authors and affiliations

NIHR Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK

Gill Norman

Evidence Synthesis Group, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK

Division of Nursing, Midwifery & Social Work, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK

Gill Norman, Maartje Kletter & Jo Dumville

Contributions

GN screened studies, extracted data, assessed risk of bias and wrote the first draft of the protocol and the manuscript. MK screened studies, extracted data, assessed risk of bias, contributed substantively to drafting the protocol, and edited and commented substantively on the manuscript. JD had the idea for the review, contributed substantively to drafting the protocol and designing the synthesis, provided input and advice at each stage of the review, and edited and commented substantively on the manuscript.

Corresponding author

Correspondence to Gill Norman .

Ethics declarations

Ethics approval and consent to participate

This is a systematic review of previously published data and as such did not require ethical approval or participant consent.

Consent for publication

Consent for publication is not applicable as this is a systematic review of previously published data.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1 (Appendix 1): Full search strategies

Supplementary Material 2 (Appendix 2): Documentation of evidence for interventions

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Norman, G., Kletter, M. & Dumville, J. Interventions to increase vaccination in vulnerable groups: rapid overview of reviews. BMC Public Health 24 , 1479 (2024). https://doi.org/10.1186/s12889-024-18713-5

Received : 14 July 2023

Accepted : 25 April 2024

Published : 03 June 2024

DOI : https://doi.org/10.1186/s12889-024-18713-5


  • Vaccination
  • Vulnerable groups
  • Intervention
  • Systematic review

BMC Public Health

ISSN: 1471-2458


  21. Easy guide to conducting a systematic review

    A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a ...

  22. Methodology of a systematic review

    A systematic review involves a critical and reproducible summary of the results of the available publications on a particular topic or clinical question. To improve scientific writing, the methodology is shown in a structured manner to implement a systematic review.

  23. What are the steps to write a systematic literature review

    For writing a systematic literature review, follow this structure: Title: Create a concise and informative title that reflects the focus of your review. Abstract: Write a structured abstract that ...

  24. Beginning Steps and Finishing a Review

    2(i). (For Systematic Reviews or Meta-Analyses) Select your inclusion / pre-selection criteria to identify the types of studies that will be most relevant to the review. a. Decide on the following to create your inclusion criteria: Patient, population, or people who were studied. Methodology: type of study design or method.

  25. Method Article How-to conduct a systematic literature review: A quick

    Method details Overview. A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12].An SLR updates the reader with current literature about a subject [6].The goal is to review critical points of current knowledge on a ...

  26. Interventions to increase vaccination in vulnerable groups: rapid

    Inequity in vaccination is a recognised public health issue which has been amplified by the COVID-19 pandemic. This overview uses rapid but rigorous methods to systematically review evidence from over 20 systematic reviews of interventions for increasing vaccination in marginalised, disadvantaged or otherwise vulnerable groups.

  27. Criteria and methods in nuclear power plants siting: a systematic

    2. Materials and methods. The SLR applied in this research was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), which is an evidence-based minimum set to assist authors to report several systematic reviews and meta-analyses (Pati & Lorusso, Citation 2018).PRISMA focuses on the methods to be applied by authors to ensure complete and transparent reporting ...