Systematic Review | Definition, Examples & Guide

Published on 15 June 2022 by Shaun Turney. Revised on 17 October 2022.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesise all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review of probiotics for eczema. They answered the question: ‘What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?’

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs meta-analysis
  • Systematic review vs literature review
  • Systematic review vs scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce research bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesise the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesising all available evidence and evaluating the quality of the evidence. Synthesising means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesise the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesise results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
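As a simplified illustration of the idea (not part of the original guide), the most basic fixed-effect meta-analysis pools study results by weighting each effect size by the inverse of its variance. A minimal Python sketch:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect size.

    effects: per-study effect sizes; variances: their sampling variances.
    Returns the pooled estimate and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Two hypothetical studies with equal precision: the pooled
# estimate is simply their average.
estimate, se = pooled_effect([0.2, 0.4], [0.04, 0.04])
```

In practice you would use dedicated statistical software for this step, which also handles random-effects models and heterogeneity statistics.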

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarise and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimise bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimise research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinised by others.
  • They’re thorough : they summarise all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT.

  • Type of study design(s)

Boyle and colleagues’ question included the following components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomised control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
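The PICO template is mechanical enough to express in code. A toy Python sketch (purely illustrative, not from the source) that assembles a research question from its four components:

```python
def pico_question(population, intervention, comparison, outcome):
    # Fill the template: "What is the effectiveness of I versus C for O in P?"
    return (f"What is the effectiveness of {intervention} versus "
            f"{comparison} for {outcome} in {population}?")

question = pico_question(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, a placebo, or a non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
)
```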

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesise the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Grey literature: Grey literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of grey literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of grey literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
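To illustrate the advice above on synonyms and Boolean operators, here is a small Python sketch (a toy illustration, not a real database API) that builds a query string by OR-ing the synonyms within each concept and AND-ing the concepts together:

```python
def boolean_query(concepts):
    """concepts: a list of synonym lists, one per search concept.

    Synonyms within a concept are joined with OR; the concept
    groups are joined with AND.
    """
    groups = [
        "(" + " OR ".join(f'"{term}"' for term in synonyms) + ")"
        for synonyms in concepts
    ]
    return " AND ".join(groups)

query = boolean_query([
    ["eczema", "atopic dermatitis"],
    ["probiotic", "probiotics"],
])
# query: ("eczema" OR "atopic dermatitis") AND ("probiotic" OR "probiotics")
```

Real databases have their own query syntax (field tags, truncation, subject headings), so a generated string like this is only a starting point.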

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as EndNote or Zotero.

For example, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Grey literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.
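Inter-rater reliability for include/exclude decisions is often reported as Cohen’s kappa, which corrects the raw agreement rate for the agreement expected by chance. A minimal Python sketch (illustrative only):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' screening decisions.

    rater_a and rater_b are parallel lists of labels
    (e.g. "include"/"exclude"). Undefined when expected
    agreement is 1 (both raters always use one label).
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)
```

A kappa of 1 means perfect agreement; 0 means agreement no better than chance.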

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarise what you did using a PRISMA flow diagram .
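The record-keeping behind a PRISMA flow diagram is simple arithmetic: each stage’s count is the previous count minus that stage’s exclusions. A hypothetical Python sketch (the stage names and example numbers are invented):

```python
def prisma_counts(identified, duplicates,
                  excluded_on_abstract, excluded_on_full_text):
    """Record counts flowing through the screening stages."""
    screened = identified - duplicates
    full_text_assessed = screened - excluded_on_abstract
    included = full_text_assessed - excluded_on_full_text
    return {
        "screened": screened,
        "full_text_assessed": full_text_assessed,
        "included": included,
    }

# e.g. 500 records identified, 100 duplicates removed, 350 excluded
# on title/abstract, 30 excluded on full text
counts = prisma_counts(500, 100, 350, 30)
```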

After screening titles and abstracts, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgement of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
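As a sketch of what an extraction form captures, the fields can be modelled as a simple record; the field names below are hypothetical and should be adapted to your own protocol:

```python
from dataclasses import dataclass

@dataclass
class StudyRecord:
    """One row of a data-extraction form (illustrative fields only)."""
    study_id: str
    year: int
    design: str          # e.g. "randomised controlled trial"
    sample_size: int
    effect_size: float
    risk_of_bias: str    # e.g. "low", "some concerns", "high"

record = StudyRecord("study-001", 2008, "randomised controlled trial",
                     56, 0.12, "low")
```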

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomised into the control and treatment groups.

Step 6: Synthesise the data

Synthesising the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesising the data:

  • Narrative (qualitative): Summarise the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarise and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analysed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a dissertation , thesis, research paper , or proposal .

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

Cite this Scribbr article


Turney, S. (2022, October 17). Systematic Review | Definition, Examples & Guide. Scribbr. Retrieved 9 April 2024, from https://www.scribbr.co.uk/research-methods/systematic-reviews/


Easy guide to conducting a systematic review

Affiliations.

  • 1 Discipline of Child and Adolescent Health, University of Sydney, Sydney, New South Wales, Australia.
  • 2 Department of Nephrology, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • 3 Education Department, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • PMID: 32364273
  • DOI: 10.1111/jpc.14853

A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step-by-step summary of how to conduct a systematic review, which may be of interest to clinicians and researchers.

Keywords: research; research design; systematic review.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

Publication types

  • Systematic Review
  • Research Design*

Open access | Published: 19 April 2021

How to properly use the PRISMA Statement

  • Rafael Sarkis-Onofre 1 ,
  • Ferrán Catalá-López 2 , 3 ,
  • Edoardo Aromataris 4 &
  • Craig Lockwood 4  

Systematic Reviews, volume 10, Article number: 117 (2021)


It has been more than a decade since the original publication of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement [1], and it has become one of the most cited reporting guidelines in the biomedical literature [2, 3]. Since its publication, multiple extensions of the PRISMA Statement have been published concomitant with the advancement of knowledge synthesis methods [4, 5, 6, 7]. The PRISMA 2020 Statement, an updated version, has recently been published [8], and other extensions are currently in development [9].

The number of systematic reviews (SRs) has increased substantially over the past 20 years [ 10 , 11 , 12 ]. However, many SRs continue to be poorly conducted and reported [ 10 , 11 ], and it is still common to see articles that use the PRISMA Statement and other reporting guidelines inappropriately, as was highlighted recently [ 13 ].

The PRISMA Statement and its extensions are an evidence-based, minimum set of recommendations designed primarily to encourage transparent and complete reporting of SRs. This growing set of guidelines has been developed to aid authors with appropriate reporting of different knowledge synthesis methods (such as SRs, scoping reviews, and review protocols) and to ensure that all aspects of this type of research are accurately and transparently reported. In other words, the PRISMA Statement is a road map to help authors best describe what was done, what was found, and, in the case of a review protocol, what they are planning to do.

Despite this clear and well-articulated intention [2, 3, 4, 5], Systematic Reviews commonly receives manuscripts that use the PRISMA Statement and its extensions inappropriately. Most frequently, improper use involves authors attempting to use the PRISMA Statement as a methodological guideline for the design and conduct of reviews, or identifying it as a tool to assess the methodological quality of reviews, as seen in the following examples:

“This scoping review will be conducted according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Statement.”

“This protocol was designed based on the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) Statement.”

“The methodological quality of the included systematic reviews will be assessed with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement.”

Some organizations (such as Cochrane and JBI) have developed methodological guidelines that can help authors to design or conduct diverse types of knowledge synthesis rigorously [14, 15]. While the PRISMA Statement predominantly guides reporting of systematic reviews of interventions with meta-analyses, its detailed criteria can readily be applied to the majority of review types [13]. The difference between the PRISMA Statement’s role in guiding reporting and guidelines detailing methodological conduct is readily illustrated with the following example: the PRISMA Statement recommends that authors report their complete search strategies for all databases, registers, and websites (including any filters and limits used), but it does not include recommendations for designing and conducting literature searches [8]. If authors are interested in understanding how to create search strategies or which databases to include, they should refer to the methodological guidelines [14, 15]. Thus, the following examples illustrate the appropriate use of the PRISMA Statement in research reporting:

“The reporting of this systematic review was guided by the standards of the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) Statement.”

“This scoping review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).”

“The protocol is being reported in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) Statement.”

Systematic Reviews supports the complete and transparent reporting of research. The Editors require the submission of a populated checklist from the relevant reporting guidelines, including the PRISMA checklist or the most appropriate PRISMA extension. Using the PRISMA Statement and its extensions to write protocols or completed review reports, and completing the PRISMA checklists, not only lets reviewers and readers know what authors did and found, but also optimizes the quality of reporting and makes the peer review process more efficient.

Transparent and complete reporting is an essential component of “good research”; it allows readers to judge key issues regarding the conduct of research and its trustworthiness and is also critical to establish a study’s replicability.

With the release of a major update to PRISMA in 2021, the appropriate use of the updated PRISMA Statement (and its extensions, as those updates progress) will be an essential requirement for review-based submissions, and we encourage authors, peer reviewers, and readers of Systematic Reviews to use and disseminate that initiative.

Availability of data and materials

We do not have any additional data or materials to share.

Abbreviations

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews

Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols

Systematic reviews

Moher D, Liberati A, Tetzlaff J, Altman DG. PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62(10):1006–12. https://doi.org/10.1016/j.jclinepi.2009.06.005 .


Caulley L, Cheng W, Catala-Lopez F, Whelan J, Khoury M, Ferraro J, et al. Citation impact was highly variable for reporting guidelines of health research: a citation analysis. J Clin Epidemiol. 2020;127:96–104. https://doi.org/10.1016/j.jclinepi.2020.07.013 .

Page MJ, Moher D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement and extensions: a scoping review. Syst Rev. 2017;6(1):263. https://doi.org/10.1186/s13643-017-0663-8 .


Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, et al. PRISMA-S: an extension to the PRISMA Statement for reporting literature searches in systematic reviews. Syst Rev. 2021;10(1):39. https://doi.org/10.1186/s13643-020-01542-z .

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73. https://doi.org/10.7326/M18-0850 .


Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1. https://doi.org/10.1186/2046-4053-4-1 .

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84. https://doi.org/10.7326/M14-2385 .

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):89. https://doi.org/10.1186/s13643-021-01626-4.

EQUATOR Network: Reporting guidelines under development for systematic reviews. https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-systematic-reviews/ . Accessed 11 Feb 2021.

Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and Reporting Characteristics of Systematic Reviews of Biomedical Research: A Cross-Sectional Study. Plos Med. 2016;13(5):e1002028. https://doi.org/10.1371/journal.pmed.1002028 .

Ioannidis JP. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016;94(3):485–514. https://doi.org/10.1111/1468-0009.12210 .

Niforatos JD, Weaver M, Johansen ME. Assessment of Publication Trends of Systematic Reviews and Randomized Clinical Trials, 1995 to 2017. JAMA Intern Med. 2019;179(11):1593–4. https://doi.org/10.1001/jamainternmed.2019.3013.

Caulley L, Catala-Lopez F, Whelan J, Khoury M, Ferraro J, Cheng W, et al. Reporting guidelines of health research studies are frequently used inappropriately. J Clin Epidemiol. 2020;122:87–94. https://doi.org/10.1016/j.jclinepi.2020.03.006 .

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions. 2nd Edition ed. Chichester: Wiley; 2019.

Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. ed. Adelaide: JBI; 2020.


Acknowledgements

RSO is funded in part by Meridional Foundation. FCL is funded in part by the Institute of Health Carlos III/CIBERSAM.

Author information

Authors and affiliations.

Graduate Program in Dentistry, Meridional Faculty, IMED, Passo Fundo, Brazil

Rafael Sarkis-Onofre

Department of Health Planning and Economics, National School of Public Health, Institute of Health Carlos III, Madrid, Spain

Ferrán Catalá-López

Department of Medicine, University of Valencia/INCLIVA Health Research Institute and CIBERSAM, Valencia, Spain

JBI, Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, Australia

Edoardo Aromataris & Craig Lockwood


Contributions

RSO drafted the initial version. FCL, EA, and CL made substantial additions to the first and subsequent drafts. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Rafael Sarkis-Onofre .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

CL is Editor-in-Chief of Systematic Reviews, FCL is Protocol Editor of Systematic Reviews, and RSO is Associate Editor of Systematic Reviews.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Sarkis-Onofre, R., Catalá-López, F., Aromataris, E. et al. How to properly use the PRISMA Statement. Syst Rev 10 , 117 (2021). https://doi.org/10.1186/s13643-021-01671-z


Systematic Reviews

ISSN: 2046-4053



University of Maryland Libraries

Systematic Review

  • Library Help
  • What is a Systematic Review (SR)?

Steps of a Systematic Review

  • Framing a Research Question
  • Developing a Search Strategy
  • Searching the Literature
  • Managing the Process
  • Meta-analysis
  • Publishing your Systematic Review

Forms and templates


  • PICO Template
  • Inclusion/Exclusion Criteria
  • Database Search Log
  • Review Matrix
  • Cochrane Tool for Assessing Risk of Bias in Included Studies

  • PRISMA Flow Diagram - Record the numbers of retrieved references and included/excluded studies. You can use the Create Flow Diagram tool to automate the process.
  • PRISMA Checklist - Checklist of items to include when reporting a systematic review or meta-analysis

PRISMA 2020 and PRISMA-S: Common Questions on Tracking Records and the Flow Diagram

  • PROSPERO Template
  • Manuscript Template
  • Steps of SR (text)
  • Steps of SR (visual)
  • Steps of SR (PIECES)

Adapted from  A Guide to Conducting Systematic Reviews: Steps in a Systematic Review by Cornell University Library

Source: Cochrane Consumers and Communications  (infographics are free to use and licensed under Creative Commons )

Check the following visual resources titled "What Are Systematic Reviews?"

  • Video  with closed captions available
  • Animated Storyboard
  • Last Updated: Mar 4, 2024 12:09 PM
  • URL: https://lib.guides.umd.edu/SR


Systematic Reviews and Related Evidence Syntheses: Proposal


Getting started with proposal of review

The proposal stage is the most important step of a review project as it determines the feasibility of the review and its rationale.

The steps are: 

1. Determining review question and review type. 

  • Right Review : free tool to assist in selecting the best review type for a given question
  • Trying to choose between a scoping and a systematic review? See Munn, Z., Peters, M.D.J., Stern, C. et al. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 18, 143 (2018). https://doi.org/10.1186/s12874-018-0611-x
  • This article describes 10 types of questions that systematic reviews can answer: Munn, Z., Stern, C., Aromataris, E. et al. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 18, 5 (2018).
  • For scoping reviews, the framework is: Population, Concept, Context (see JBI Scoping Review guide )

2. Search for existing reviews related to the proposed question. Places to search include:

  • Prospero : database of protocols for systematic reviews, umbrella reviews, and rapid reviews with human health outcomes
  • Open Science Framework : open access registry for any type of research, including scoping reviews and more
  • Cochrane Collaboration Handbook : systematic reviews of clinical interventions
  • Campbell Collaboration : accepts many types of reviews across many disciplines: Business and Management, Crime and Justice, Disability, Education, International Development, Knowledge Translation and Implementation, Methods, Nutrition, and Social Welfare

  • Collaboration for Environmental Evidence : reviews in environmental research
  • Systematic Reviews for Animals & Food (SYREAF) : protocols and reviews on animals and food science related to animals

Also consider searching subject-related databases, adding a search concept such as "review".

3. Evaluate previous reviews for quality and compare their scope to the proposed review. The following tools can be used to appraise them:

  • ROBIS : Risk of Bias in Systematic reviews
  • AMSTAR : Assessing the Methodological Quality of Systematic Reviews, for meta-analyses
  • CASP Checklist : Critical Appraisal Skills Programme

4. Further refine question by defining the eligibility criteria

  • Eligibility criteria are the characteristics of the studies/research to be collected. Inclusion criteria are those characteristics a study must have to be included. Exclusion criteria are exceptions to the inclusion criteria.

5. Develop a preliminary search and find a few studies that match the eligibility criteria

  • Consider working with a librarian to develop the search. The purpose is to estimate the number of citations to be sorted (giving some idea of the amount of time it will take to complete the review) and to find at least a few studies that match the criteria.
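The citation counts from a preliminary search can be turned into a rough screening-time estimate for the proposal. A minimal sketch; the hit counts, deduplication rate, and seconds-per-record figure below are all hypothetical assumptions, not recommendations:

```python
# Rough screening-workload estimate from pilot search results (all figures hypothetical).
hits_per_database = [850, 1200, 300]  # result counts from each database searched
dedup_rate = 0.30                     # assumed overlap between databases
seconds_per_record = 30               # per title/abstract, per screener
screeners = 2                         # dual independent screening

unique_records = round(sum(hits_per_database) * (1 - dedup_rate))
hours_total = unique_records * seconds_per_record * screeners / 3600
print(f"{unique_records} records, ~{hours_total:.0f} screener-hours")
```

Doubling the time for two screeners, as here, reflects the common practice of dual independent screening of every record.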

6. Summarize the proposal. A written proposal helps in framing the project and getting feedback. It should include:

  • A descriptive title for the project, which includes the type of review
  • A brief introduction
  • A description of previous reviews and the rationale for the proposed review
  • An appropriately framed question for the review
  • The eligibility criteria
  • Last Updated: Apr 4, 2024 12:34 PM
  • URL: https://tamu.libguides.com/systematic_reviews


How to Write a Research Proposal | Examples & Templates

Published on October 12, 2022 by Shona McCombes and Tegan George. Revised on November 21, 2023.

Structure of a research proposal

A research proposal describes what you will investigate, why it’s important, and how you will conduct your research.

The format of a research proposal varies between fields, but most proposals will contain at least these elements:

  • Introduction
  • Literature review
  • Research design
  • Reference list

While the sections may vary, the overall objective is always the same. A research proposal serves as a blueprint and guide for your research plan, helping you get organized and feel confident in the path forward you choose to take.

Table of contents

  • Research proposal purpose
  • Research proposal examples
  • Research design and methods
  • Contribution to knowledge
  • Research schedule
  • Other interesting articles
  • Frequently asked questions about research proposals

Academics often have to write research proposals to get funding for their projects. As a student, you might have to write a research proposal as part of a grad school application , or prior to starting your thesis or dissertation .

In addition to helping you figure out what your research can look like, a proposal can also serve to demonstrate why your project is worth pursuing to a funder, educational institution, or supervisor.

Research proposal length

The length of a research proposal can vary quite a bit. A bachelor’s or master’s thesis proposal can be just a few pages, while proposals for PhD dissertations or research funding are usually much longer and more detailed. Your supervisor can help you determine the best length for your work.

One trick to get started is to think of your proposal’s structure as a shorter version of your thesis or dissertation, only without the results, conclusion, and discussion sections.

Download our research proposal template


Writing a research proposal can be quite challenging, but a good starting point could be to look at some examples. We’ve included a few for you below.

  • Example research proposal #1: “A Conceptual Framework for Scheduling Constraint Management”
  • Example research proposal #2: “Medical Students as Mediators of Change in Tobacco Use”

Like your dissertation or thesis, the proposal will usually have a title page that includes:

  • The proposed title of your project
  • Your supervisor’s name
  • Your institution and department

The first part of your proposal is the initial pitch for your project. Make sure it succinctly explains what you want to do and why.

Your introduction should:

  • Introduce your topic
  • Give necessary background and context
  • Outline your  problem statement  and research questions

To guide your introduction , include information about:

  • Who could have an interest in the topic (e.g., scientists, policymakers)
  • How much is already known about the topic
  • What is missing from this current knowledge
  • What new insights your research will contribute
  • Why you believe this research is worth doing

As you get started, it’s important to demonstrate that you’re familiar with the most important research on your topic. A strong literature review  shows your reader that your project has a solid foundation in existing knowledge or theory. It also shows that you’re not simply repeating what other people have already done or said, but rather using existing research as a jumping-off point for your own.

In this section, share exactly how your project will contribute to ongoing conversations in the field by:

  • Comparing and contrasting the main theories, methods, and debates
  • Examining the strengths and weaknesses of different approaches
  • Explaining how you will build on, challenge, or synthesize prior scholarship

Following the literature review, restate your main  objectives . This brings the focus back to your own project. Next, your research design or methodology section will describe your overall approach, and the practical steps you will take to answer your research questions.

To finish your proposal on a strong note, explore the potential implications of your research for your field. Emphasize again what you aim to contribute and why it matters.

For example, your results might have implications for:

  • Improving best practices
  • Informing policymaking decisions
  • Strengthening a theory or model
  • Challenging popular or scientific beliefs
  • Creating a basis for future research

Last but not least, your research proposal must include correct citations for every source you have used, compiled in a reference list . To create citations quickly and easily, you can use our free APA citation generator .

Some institutions or funders require a detailed timeline of the project, asking you to forecast what you will do at each stage and how long it may take. This isn’t always required, so be sure to check your project’s requirements.

Here’s an example schedule to help you get started. You can also download a template at the button below.

Download our research schedule template

If you are applying for research funding, chances are you will have to include a detailed budget. This shows your estimates of how much each part of your project will cost.

Make sure to check what type of costs the funding body will agree to cover. For each item, include:

  • Cost : exactly how much money do you need?
  • Justification : why is this cost necessary to complete the research?
  • Source : how did you calculate the amount?

To determine your budget, think about:

  • Travel costs : do you need to go somewhere to collect your data? How will you get there, and how much time will you need? What will you do there (e.g., interviews, archival research)?
  • Materials : do you need access to any tools or technologies?
  • Help : do you need to hire any research assistants for the project? What will they do, and how much will you pay them?

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Once you’ve decided on your research objectives , you need to explain them in your paper, at the end of your problem statement .

Keep your research objectives clear and concise, and use appropriate verbs to accurately convey the work that you will carry out for each one.

For example: “I will compare …”

A research aim is a broad statement indicating the general purpose of your research project. It should appear in your introduction at the end of your problem statement , before your research objectives.

Research objectives are more specific than your research aim. They indicate the specific ways you’ll address the overarching aim.

A PhD, which is short for philosophiae doctor (doctor of philosophy in Latin), is the highest university degree that can be obtained. In a PhD, students spend 3–5 years writing a dissertation , which aims to make a significant, original contribution to current knowledge.

A PhD is intended to prepare students for a career as a researcher, whether that be in academia, the public sector, or the private sector.

A master’s is a 1- or 2-year graduate degree that can prepare you for a variety of careers.

All master’s involve graduate-level coursework. Some are research-intensive and intend to prepare students for further study in a PhD; these usually require their students to write a master’s thesis . Others focus on professional training for a specific career.

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.

The best way to remember the difference between a research plan and a research proposal is that they have fundamentally different audiences. A research plan helps you, the researcher, organize your thoughts. On the other hand, a dissertation proposal or research proposal aims to convince others (e.g., a supervisor, a funding body, or a dissertation committee) that your research topic is relevant and worthy of being conducted.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. & George, T. (2023, November 21). How to Write a Research Proposal | Examples & Templates. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/research-process/research-proposal/


Systematic Reviews

Example reviews


Please choose the tab below for your discipline to see relevant examples.

For more information about how to conduct and write reviews, please see the Guidelines section of this guide.

  • Health & Medicine
  • Social sciences
  • Vibration and bubbles: a systematic review of the effects of helicopter retrieval on injured divers. (2018).
  • Nicotine effects on exercise performance and physiological responses in nicotine‐naïve individuals: a systematic review. (2018).
  • Association of total white cell count with mortality and major adverse events in patients with peripheral arterial disease: A systematic review. (2014).
  • Do MOOCs contribute to student equity and social inclusion? A systematic review 2014–18. (2020).
  • Interventions in Foster Family Care: A Systematic Review. (2020).
  • Determinants of happiness among healthcare professionals between 2009 and 2019: a systematic review. (2020).
  • Systematic review of the outcomes and trade-offs of ten types of decarbonization policy instruments. (2021).
  • A systematic review on Asian's farmers' adaptation practices towards climate change. (2018).
  • Are concentrations of pollutants in sharks, rays and skates (Elasmobranchii) a cause for concern? A systematic review. (2020).
  • Last Updated: Jan 16, 2024 10:23 AM
  • URL: https://libguides.jcu.edu.au/systematic-review


Literature Reviews


Templates for Proposal Writing

  • Template 1 from Drew University
  • Template 2 from Rutgers University (Saracevic)

Content of a proposal for a thesis or any research project

Full PDF from Rutgers University

  • What do you call this investigation?
  • What problem or area will you investigate in general?
  • Why is this problem important to investigate?
  • What was previously done in relation to this problem? What were some of the significant studies? (Literature review)
  • What theory or model is going to guide your research?
  • What will you specifically investigate or do in the framework of that problem? What are your specific research questions or hypotheses?
  • How will each research question be addressed? What methods will you use for each research question?
  • How will the results be analyzed?
  • What are the deliverables? What can or will be gained by investigation of this problem?


  • Last Updated: Mar 25, 2024 8:48 AM
  • URL: https://researchguides.njit.edu/literaturereview


Research Proposal for a Systematic Review of the Effectiveness of Community Interventions in

Norman Johnson

The documented health effects of smoking are perhaps the most fundamental motivators of efforts to curtail tobacco use. These efforts may be the work of governments or of independent bodies concerned about public health, for instance because of the economic implications of smoking-related diseases, particularly for low-income earners. Governments may, for example, adopt statutory policies to curtail smoking by designating smoking zones and instituting legal penalties against individuals who smoke outside such areas. Nevertheless, such measures, however stringent, are undermined by their coercive nature, which restricts smoking in public arenas but provides no incentive to curtail smoking at the household level. As such, the efficacy of affirmative action depends on the motivation it provides to the individual to quit tobacco use. For example, community efforts might be more impactful when they are held frequently to sensitise the population to the actual effects of tobacco use on health and on the economic status of the family. The challenges to these mitigation strategies stem principally from the business community, detracting from the full impact of the efforts. In light of the varying efficacy of tobacco use mitigation efforts, this paper presents a protocol for a systematic review of community interventions to prevent smoking among adults in the United Kingdom.




Peer review of health research funding proposals: A systematic map and systematic review of innovations for effectiveness and efficiency

Jonathan Shepherd

Wessex Institute, Faculty of Medicine, University of Southampton, Southampton, United Kingdom

Geoff K. Frampton

Karen Pickett

Jeremy C. Wyatt

Associated data

All relevant data are within the paper and its Supporting Information files.

Objective

To investigate methods and processes for timely, efficient and good quality peer review of research funding proposals in health.

Methods

A two-stage evidence synthesis: (1) a systematic map to describe the key characteristics of the evidence base, followed by (2) a systematic review of the studies stakeholders prioritised as relevant from the map on the effectiveness and efficiency of peer review ‘innovations’. Standard processes included literature searching, duplicate inclusion criteria screening, study keyword coding, data extraction, critical appraisal and study synthesis.

Results

A total of 83 studies from 15 countries were included in the systematic map. The evidence base is diverse, investigating many aspects of the systems for, and processes of, peer review. The systematic review included eight studies from Australia, Canada, and the USA, evaluating a broad range of peer review innovations. These studies showed that simplifying the process by shortening proposal forms, using smaller reviewer panels, or expediting processes can speed up the review process and reduce costs, but this might come at the expense of peer review quality, a key aspect that has not been assessed. Virtual peer review using videoconferencing or teleconferencing appears promising for reducing costs by avoiding the need for reviewers to travel, but again any consequences for quality have not been adequately assessed.

Conclusions

There is increasing international research activity into the peer review of health research funding. The studies reviewed had methodological limitations and variable generalisability to research funders. Given these limitations it is not currently possible to recommend immediate implementation of these innovations. However, many appear promising based on existing evidence, and could be adapted as necessary by funders and evaluated. Where feasible, experimental evaluation, including randomised controlled trials, should be conducted, evaluating impact on effectiveness, efficiency and quality.

Introduction

Peer review is a key element of quality assurance in academic research. [ 1 ] It is used to reassure research funders that research proposals are of the highest scientific merit and that funded research is appropriate to policy and practice needs. Peer review is also employed at later stages of the research lifecycle to improve the scientific credibility of research outputs, such as articles in academic journals. There is a need to ensure that peer review is effective and efficient, to support the production of high quality research across the sciences. [ 2 ]

However, there are challenges. Many research funders are facing increasing budgetary pressure and need to ensure that peer review, alongside other aspects of research management, is efficient in time and costs. [ 3 ] Peer review has also been subject to criticisms calling into question its validity and usefulness as a process for identifying the ‘best’ scientific research. [ 4 , 5 ] For example, peer review can be time consuming and therefore expensive, and funders often make substantial efforts to identify and recruit appropriate reviewers and obtain sufficient feedback from them in a timely manner. [ 3 ] Researchers typically spend several weeks or months preparing a proposal [ 6 ] and each year hundreds of years’ worth of total reviewers’ time are used by individual research councils, [ 7 , 8 ] which equates to tens of millions of pounds in salary costs. [ 6 ] The value of this investment is diminished if peer review is unable to identify good quality proposals that ultimately will have a high impact on policy, practice and science.

Despite the effort involved, it has been argued that peer review leads to inconsistent funding decisions which may be no better than chance decisions in selecting the best proposals. [ 9 ] In some cases, however, good correlations have been reported between peer review scores and the estimated scientific impact of the funded proposals. [ 10 ] In addition to concerns about the effort involved, peer review has been criticised as being biased, which may reflect a disproportionate influence of individual reviewers’ preferences [ 11 ] or conflicts of interest. [ 2 ] Common concerns are that peer review can be associated with gender bias, or institutional bias, may penalise inexperienced research applicants, and that traditional peer review systems used by major funding agencies tend to be conservative, rejecting innovative or ‘high-risk’ research proposals. [ 12 ] Criticism has also been made of the ‘black box’ nature of peer review, and attempts have been made to better understand the social and cultural processes by which multi-disciplinary academic funding panels discuss applications, define academic excellence and make funding decisions. [ 13 ]

Nonetheless, peer review remains a significant aspect of research commissioning, and some funding agencies have attempted to address the criticisms. For example, the US National Institutes of Health and UK Research Councils (among others) have studied their peer review practices to identify opportunities for improvement. Funders are increasingly exploring improvements to peer review processes and methods, or alternatives to peer review itself. [ 4 , 14 ] These include using open rather than blinded review, use of digital technology to discuss proposals rather than face-to-face meetings, testing new proposal scoring methods, and introducing shorter proposal forms and expedited review processes.

Given the costs of peer review and its centrality in ensuring the quality of research, there is a need to map alternative approaches to peer review and assess their impact in addressing some of the criticisms made. There have been few previous systematic reviews in this area. A Cochrane systematic review [ 15 ] assessed the impact of a variety of peer review processes on the quality of funded research, identified from the health literature. The review included 10 studies, conducted in a range of countries. Overall, the authors concluded that the quality of the evidence base was limited and that there is a strong need for experimental studies to examine the impact of different peer review processes on the quality of funded research. Given that the literature searches were carried out in 2002 this review is now very out-of-date. This underlines the need for an up-to-date comprehensive review of the evidence.

The question this project set out to investigate was: What is the research evidence on methods and processes for timely, efficient and good quality peer review of research funding proposals in health? The purpose was to make recommendations which could then be made to research funders about useful methods that could potentially be adopted, as well as identifying where further research into peer review of health research proposals is needed. This project was one of a number of complementary research projects conducted within a UK health research funder, the National Institute for Health Research (NIHR), to investigate potential improvements to the process of the peer review of funding applications.

A two-stage evidence synthesis was conducted comprising: (1) systematic mapping of the key characteristics of the evidence base, followed by: (2) a systematic review of a sub-set of studies on a particular area of relevance prioritised from the map by stakeholders. This is a flexible and pragmatic approach to evidence synthesis that has been successfully applied in a number of published systematic reviews of complex health and education interventions as a means of characterising the evidence base to facilitate a policy-relevant, stakeholder-informed synthesis. [ 16 – 20 ] Stakeholder involvement in systematic reviewing, including the setting of the scope and the research questions, has become increasingly important in evidence-informed health in recent years. [ 21 ] The intended methods were described in a research protocol which was circulated amongst NIHR stakeholders for comment before being finalised ( S1 Protocol ). This was not pre-published in the PROSPERO systematic review repository as it did not include a health outcome, so was ineligible.

Systematic map

Literature searching

A comprehensive search for relevant literature was undertaken by an experienced health information specialist. A draft search strategy was created, piloted, and revised before implementation ( S1 Appendix ). The following electronic bibliographic databases were searched using the same strategy adapted for each database as necessary (the host platforms used are indicated in brackets): Medline (Ovid); MEDLINE In-Process & Other Non-Indexed Citations (Ovid); Embase (Ovid); The Cochrane Library (comprising the Cochrane Database of Systematic Reviews; Cochrane Central Register of Controlled Trials (CENTRAL); and Database of Abstracts of Reviews of Effects); Psychinfo (Ebsco); Social Sciences Citation Index (Web of Science); and Delphis (a University of Southampton Library database). Database searches were conducted during May-June 2016. We also searched the internet sites of international health research funders and health charities ( S1 Protocol ) during June-July 2016. Reference lists of a random sample of 25% of articles included in the systematic map, and of all studies included in the systematic review were searched to check that relevant studies had not been missed. All references identified from electronic databases were imported into an Endnote reference management library for storage, removal of duplicates, retrieval of the full text versions, and eligibility screening.

Systematic map eligibility criteria

To be included in the map the references needed to report a research study, of any design, investigating any aspect of the peer review of health research funding application process. Systematic reviews were also permitted but commentary, opinion and editorial articles were excluded. For this project health research was defined broadly to include research into health and social care, public health, and health promotion. References reporting investigations into the peer review of research outputs were not eligible unless they also reported an investigation into the peer review of funding applications. Study inclusion was limited to articles published in the English language. Before being fully implemented, the inclusion criteria were piloted by two reviewers independently on a sample of titles and abstracts which were published in 2015–2016 and retrieved by the literature search.

Each title and abstract was screened independently by two reviewers (JS, GF or KP) with extensive experience of systematic reviewing. If agreement between reviewers could not be reached a third reviewer was consulted. The full text versions of references deemed potentially relevant on checking their titles and abstracts were retrieved for further screening. All full text articles were screened by one reviewer and checked by a second. A third reviewer was consulted in cases of disagreement.

Systematic map coding

A draft set of keywords was devised and agreed by the research team (JS, GF, KP, JW) to describe the key characteristics of the studies relevant to this project. Terms were created for aspects such as: the scope of the studies; the study population (e.g. researchers, health professionals); the study design (e.g. experimental, observational); the study context (e.g. country; type of research funder); and study measures, including outcome and process measures. The keywords did not, however, characterise the results of studies as this was the purpose of the subsequent systematic review.

The draft keyword list was pilot-tested on a subset of 13 studies from the map, [ 6 , 10 , 22 – 32 ] to ensure validity and consistency of application between reviewers. The draft list was also circulated for general comment amongst relevant stakeholders from a working group on peer review as part of the NIHR’s strategic priority project ‘Push the Pace 2’ (which aims to establish a proportionate peer review system for research proposals). The final version of the keyword list is provided in a Microsoft Excel worksheet ( S1 Database ). All included full-text articles reporting an individual study were grouped and read together and the keywords which were applicable to the study were coded in the worksheet by one reviewer. A random sample of 20% of the studies (n = 16/83) was checked by a second reviewer to ensure reliability and comprehensiveness. The level of reliability between reviewers was considered sufficient, since fewer than 2% of the checked data cell entries in the map worksheet required amendments, which were relatively minor.

Upon completion of the keywording the applied coding was analysed within the database to generate frequencies and cross-tabulations of keywords, permitting an overview of the characteristics of the evidence. The research team met to discuss the results and to identify potential sub-sets of studies grouped by sets of keywords reflecting a particular issue or theme (‘scenario’) for potential inclusion in the systematic review.
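The frequency and cross-tabulation step described above can be sketched in a few lines of standard-library Python. The facet names and coded values below are invented stand-ins for illustration, not the actual NIHR keyword list (which is in S1 Database).

```python
from collections import Counter

# Hypothetical coded studies: one keyword per facet per study.
# Facets ("design", "country") and values are illustrative only.
studies = [
    {"design": "observational", "country": "USA"},
    {"design": "observational", "country": "UK"},
    {"design": "survey",        "country": "USA"},
    {"design": "experimental",  "country": "Germany"},
]

def frequencies(coded, facet):
    """Frequency of each keyword within a single facet."""
    return Counter(s[facet] for s in coded)

def cross_tab(coded, facet_a, facet_b):
    """Cross-tabulation of two facets as a {(a, b): count} mapping."""
    return Counter((s[facet_a], s[facet_b]) for s in coded)

design_counts = frequencies(studies, "design")
design_by_country = cross_tab(studies, "design", "country")
```

In practice the authors worked in a Microsoft Excel worksheet; the point here is only that frequencies and cross-tabulations of coded keywords are simple aggregations once each study is coded against the keyword list.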

Stakeholder topic prioritisation

Based on the peer review issues reported in the systematic map (e.g. bias, quality assurance, efficiency), the study context (e.g. country, type of research funder), and the study outcomes and process measures (e.g. funding decisions made, impact of the funded research), the research team identified three contrasting evidence scenarios for potential systematic review. The scenarios were devised to be relevant to stakeholders involved in research commissioning and management.

The three scenarios were tabulated and emailed to the NIHR Working Group on peer review prior to a face-to-face meeting to discuss the scenarios. The meeting was attended by three of the current authors and 11 members of the working group, who represented all of the different NIHR research commissioning centres. Each scenario was described and discussed in turn and stakeholders were given the opportunity to ask the research team for more information about the scenario and pertinent evidence from the map.

Following the meeting a summary of the discussion was circulated to the NIHR working group members not present at the meeting to seek any additional comments. There was no disagreement from any of these other group members on the prioritised scenario. Further detail on the stakeholder topic prioritisation process is reported in S2 Appendix .

Systematic review

Following the stakeholder consultation exercise the prioritised scenario question for the systematic review was: “Which innovations can improve the efficiency and/or effectiveness of the peer review of health research proposals?”

A set of inclusion criteria for the systematic review was drafted to reflect this research question. The final criteria were: 1) Primary outcome evaluation studies or systematic reviews on the peer-review of research funding proposals in health published after 2005 (N.B. Systematic reviews were to be included as a source of references only); 2) Any peer review system structure innovation, with the exception of ranking or scoring of grant proposals (these were not considered relevant by the stakeholders); 3) At least one outcome measure relating to the efficiency of peer review (e.g. time required by peer reviewers; administrative costs of peer review; level of agreement between reviewers) and/or the effectiveness of peer review (e.g. ability of peer review to inform funding decisions; quality of the peer review process; scientific quality of the funded research and its impact on policy, practice and science).

The inclusion criteria were applied to the full text articles of studies already located in the systematic map. One reviewer applied the criteria and a second checked their decision, with any disagreements resolved through discussion. Studies meeting the criteria underwent data extraction and critical appraisal using a template devised for this study.

Due to the diverse range of potentially eligible studies, a number of critical appraisal instruments were considered for use. Any randomised controlled trials (RCTs) identified were to be appraised using the Cochrane risk of bias tool. [ 33 ] A modification to these criteria for non-randomised studies by the Cochrane Effective Practice and Organisation of Care (EPOC) group was also planned. However, this was not subsequently applied to any of the included studies due to the nature of their designs (see ‘ Results ‘ below). Few existing instruments were considered appropriate for critically appraising the included studies and therefore we undertook a narrative appraisal of the quality of each study, commenting on key aspects of data collection and analysis and threats to internal validity. Data extraction and critical appraisal was performed by one reviewer (JS or GF) and checked by a second with any disagreements resolved through discussion.

Given the heterogeneous nature of the included studies (the studies differed considerably in their designs and characteristics) it was not considered appropriate to conduct meta-analysis. A narrative synthesis was therefore conducted.

Systematic map results

A total of 1,824 titles and abstracts were screened, and 198 of these were further screened as full text articles ( Fig 1 ). The rate of agreement between the two reviewers at full-text screening was 90%, with 10% of the decisions requiring further discussion or referral to a third reviewer to reach a final decision. A total of 83 studies (described in 89 publications) met the inclusion criteria for the systematic map. [ 3 , 6 , 9 , 10 , 12 , 15 , 23 – 30 , 32 , 34 – 104 ] ( S1 Database ).

[Fig 1 image: pone.0196914.g001.jpg]

Most studies (72%) were published from 2005 onwards (49% from 2010 onwards). Fifteen countries were represented, with 49% of studies having been conducted in the USA. Other locations included Europe (23%, most frequently in Germany and the UK [each 6%]); Canada (11%), and Australia (9%). Of the study types, 61% were observational; 31% were based on surveys, interviews or focus groups; and 7% were experimental (of which 3 studies [4%] were randomised). In the majority of studies (73%) the setting was a national research council (e.g. the US National Institutes of Health; NIH). A smaller proportion of studies were based in charities or local funders. In around one third of the studies the peer reviewers were academics and/or health professionals, and in 10% they were lay people. In the majority of studies, however, the professional status of the peer reviewers was not reported. In some studies the peer reviewers were external to the funder and its funding decision panel, whilst in other cases the reviewers were also involved in making funding decisions. In many studies the extent of the reviewer’s role (e.g. funding panel member) was not clearly defined.

The studies addressed a variety of peer review issues. We categorised these as relating either to the process and structure of a peer review system (scoring/ranking methods, 12%; configurations of reviewers, e.g. the number needed or expertise required, 12%; methods for identifying peer reviewers, 7%) or to peer reviewer processes (bias in peer review, 20%; the predictive ability of peer review to identify research projects that will ultimately be successful, 22%; consistency in review scoring/judgements between reviewers, 18%; and stakeholder opinions on the peer review process, 30%).

Systematic review results

Eight studies met all the inclusion criteria for the systematic review and are summarised in Table 1 . These evaluated a broad range of innovations which can be categorised as: shortening of grant proposals (alongside other peer review simplifications); [ 6 , 23 , 29 ] videoconferencing or teleconferencing approaches; [ 47 , 60 , 100 ] a Delphi consensus approach; [ 27 ] a video training module for peer reviewers; [ 95 ] and involvement of patients and other care-giving stakeholders to improve peer review. [ 57 ] Table 2 provides our critical appraisal of each study and Table 3 describes features of the studies which relate to their generalisability. S1 Table provides tabulated details of the study results, ordered by outcome and process measure. A structured narrative description of the methods and results of each study follows.

Shortening of grant proposals and simplified approaches

Short proposal with simplified scoring & accelerated peer review (Barnett et al [ 23 ] ) Overview : A streamlined funding protocol for a new health services research stimulus grant awards programme—the Australian Centre for Health Services Innovation (AusHSI). The protocol comprised a short proposal form and accelerated peer review process. The aim was to reduce the content and time required by applicants and reviewers in order to provide rapid and transparent funding decisions.

Innovation method : In the protocol applicants are given four weeks to submit electronically a form of no more than 1,200 words describing the research question, methods, budget and expected impact on health services. Two members of the multi-disciplinary funding committee shortlist proposals and provide written feedback to unsuccessful applicants. Shortlisted applicants attend interviews within 10 days, where they make a brief 10-minute presentation to the committee. The proposals are then ranked against a set of criteria and funding is allocated in order of rank until the pre-defined budget limit is met. Successful applicants are notified within two weeks. There is particular emphasis on providing feedback, with unsuccessful applicants receiving written comments and suggested improvements for resubmission.

Method for assessing the innovation : The protocol was evaluated as part of a prospective quality improvement evaluation, with internal monitoring data collected at four cross-sectional time points (funding round 1 and 2 in 2012, and round 1 and 2 in 2013). Brief data are also reported on applicants’ views and experiences of the proposal and peer review system.

Principal results and conclusions : The average time applicants spent preparing their proposals (described as a primary outcome) was seven days over the four funding rounds. The committee members spent on average 36 minutes (range 15–105 minutes) reviewing each proposal prior to the committee meeting, where the same reviewers spent 10 minutes discussing each proposal. The mean time from proposal submission to decision notification over the four rounds was seven weeks. Successful research teams were notified within two weeks of interview, which was a maximum of eight weeks after proposal submission. Selected quotations suggest applicants' views of the protocol were positive. Although for some applicants the 1,200 word limit was challenging, the reduction in unnecessary paperwork was appreciated. The feedback given to applicants was also appreciated and they found it enabled them to create better research proposals. In their discussion the authors suggest that, over time, the comprehensive feedback given to applicants who were not successful led to receipt of fewer proposals but of better quality. They conclude that this has improved efficiency for both applicants and reviewers.

Key strengths and limitations : The innovation was used in a 'live' review round to allocate funding. Overall, limited details are given on the study methods and there is little detailed quantitative or qualitative analysis. The protocol evaluated here was for a relatively small-scale funding programme, with awards of AUD $80,000 for projects of at most 12 months' duration. The findings may not necessarily be applicable to larger funding awards of longer duration.

Shorter proposal & smaller peer reviewer panel ± face-to-face meeting (Herbert et al [ 6 ] ) Overview : A prospective evaluation of shortened research proposals and simplified peer review processes for the Project Grant scheme of the National Health and Medical Research Council (NHMRC) of Australia. The aim was to identify the agreement between the programme’s official process and two new simplified processes, and the peer review cost savings for the simplified processes.

Innovation method : A simplified process where panel members reviewed a nine-page research plan and a two-page track record for each chief investigator. There were two types of simplified panels. One comprised seven members who reviewed proposals during a one and a half day face-to-face meeting (15 minutes discussion of each proposal). The other was a two-person 'journal panel' (similar to peer review in an academic journal) who independently reviewed and scored proposals (without the two-page investigator track record). A simplified scoring process was used for both panels (definitely fund, possibly fund, or definitely do not fund). The topics of the proposals were classified as basic science or public health.

Method for assessing the innovation : The project was described as a prospective parallel study. The authors compared the outcomes from the two simplified peer review panels in parallel with the existing official NHMRC programme. The study included a sample of 72 research proposals that had been submitted to the official programme and were undergoing assessment in parallel to the research study. The simplified process was initiated by the authors, whilst the official process was independent of the research study (though it was used for purposes of comparison). The official programme comprised 43 panels, each with 12 members, who meet for a week and discuss an average of 91 proposals each of around 100 pages long. Proposals are ranked using a weighted calculation based on three criteria-based integer scores (from one to seven).

Principal results and conclusions : The time spent reviewing proposals was similar between the two simplified panels (3.6 to 3.9 hours per proposal on average) (NB no comparison was made with the official process for this measure). There was near-satisfactory agreement in funding decisions between the simplified processes and the official process (72%–74%). The authors estimate that the two simplified panels could result in cost savings equivalent to AUD $2.1–$4.9 million per year compared to the official process (based on costs for the year 2013, equating to a reduction in costs of between 34% and 78%), achieved through reductions in reviewers' time (and therefore salary costs). The journal panel achieved the highest savings, as no meeting expenses were incurred.

Key strengths and limitations : A strength of this study was that the innovation was evaluated in the context of a ‘live’ funding round of a national funder. In terms of limitations there were differences between the official programme and the two simplified processes in terms of how proposals were scored and therefore how funding decisions were made. This may potentially confound the comparison in funding agreement between the processes. The sample of proposals analysed may not be wholly generalisable as they were provided to the study by contacts of the authors, rather than being sampled on a representative basis.

Peer review panel (11 members) with short proposal vs standard 2-reviewer critique (Mayo et al [ 29 ] ) Overview : A comparison of two methods of peer review on the probability of funding a research proposal: a panel of reviewers who ranked proposals; and a two peer reviewer method. This was a research project funding competition at a major Canadian university medical centre aimed at stimulating pilot clinical research from new investigators and teams. The intention was that they would later submit a full proposal to an external funding agency.

Innovation method : A committee of 11 experienced researchers and peer reviewers read 32 proposals (divided into two streams: new teams and new investigators) and ranked them, without using any explicit criteria (the 'RANKING' method). At the start of the committee meeting (before discussion of any results) it was decided that the top two ranked projects in each stream would be funded. For projects ranked three to eight the committee reviewed the ratings from an alternative two-reviewer method (the CLassic Structured Scientific In-depth two reviewer critique 'CLASSIC' method) and discussed the projects. Consensus was reached for the next three in each stream to be recommended for funding (thus a total of 10 proposals would be funded).

Method for assessing the innovation : The study was a prospective evaluation of two parallel models of peer reviewing. Under the CLASSIC method each proposal was assessed and scored by two assigned peer reviewers using a five point rating scale. The study measured agreement in proposal scoring rank and in the funding decision between the two methods, and the number of reviewers needed to arrive at a consistent ranking.

Principal results and conclusions : There was variability in the mean ranks assigned to each proposal between the two methods. The kappa value for agreement in funding decision (based on rank) was 0.36 (95% confidence interval 0.02 to 0.70), indicating poor quality agreement between the two methods. Of the 10 funded projects, the frequency of simulated reviewer pairings drawn from the RANKING committee in which the project failed to meet the funding cut-off ranged from 75% to 9%. Also, projects that were recommended for funding had a 9% to 60% probability of failing to meet the funding cut-off had only two reviewers been assigned (i.e. based on the CLASSIC method). It was estimated that at least 10 reviewers would be needed for optimal agreement in funding of proposals. The authors call into question the appropriateness of using the two peer reviewer assessment of research proposals.
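For readers unfamiliar with the kappa statistic reported for agreement in funding decisions, a minimal Cohen's kappa for binary fund/no-fund outcomes can be computed as below. The decision lists are hypothetical, not data from Mayo et al.

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    categories = set(rater1) | set(rater2)
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical fund (1) / no-fund (0) decisions under two review methods.
method_a = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
method_b = [1, 0, 1, 0, 1, 1, 0, 0, 0, 0]
kappa = cohens_kappa(method_a, method_b)  # 0 = chance-level, 1 = perfect
```

A kappa of 0.36, as reported in the study, therefore means agreement only modestly better than would be expected if the two methods made funding decisions independently at random with the same marginal rates.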

Key strengths and limitations : The innovation was used in a ‘live’ review round to allocate funding. The study simulated the percentage of possible reviewer pairings (drawn from the 11 member committee) in which a proposal failed to meet the funding cutoff. This was done to mimic the standard practice of (approximate) random allocation of pairs of reviewers to proposals. However, in actuality these proposals were not prospectively distributed amongst pairs of reviewers for review and ranking. Furthermore, ranking criteria differed between groups, confounding comparisons, and the sample of proposals was small.

Videoconferencing or teleconferencing approaches

Teleconference-based peer review meetings (Gallo et al; Carpenter et al [ 47 , 60 ] ) Overview : Retrospective comparison of two scientific peer review processes used by the American Institute of Biological Sciences (AIBS) for an anonymous federal funding programme. Specifically, effects on the peer review process and outcomes were compared for face-to-face meetings (held up to 2010) and teleconference meetings (introduced in 2011)[ 60 ]. Part of the study focused on examining the effects of discussion on peer review outcomes.[ 47 ]

Innovation method : Peer reviewers met by teleconference and presented the strengths and weaknesses for each grant proposal using specific review criteria. Each proposal was then discussed by a panel, comprising 7–12 subject matter experts plus one or more 'consumer' reviewers, guided by an AIBS chairperson to ensure consistency and fairness. Reviewers then submitted their final scores using an online system. The process was repeated for each proposal, and an overall summary paragraph was prepared by assigned reviewers for each proposal, showing the panel's evaluation and recommendations.

Method for assessing the innovation : Case-control type study comparing two years of teleconference peer review meetings (2011–2012) against two years of face-to-face meetings (2009–2010). Face-to-face meetings appear to have had similar structure to teleconferences except that reviewers had to travel to the meeting (usually in a hotel) to participate. Outcomes included: the average time spent discussing each proposal; reviewer agreement estimated using the intra-class correlation coefficient (ICC); the effect on the funding decision of pre-post meeting score changes after discussion (indicated by the proportion of proposals that crossed a theoretical funding threshold); and reviewers’ views on the panel discussions (surveyed at the end of each meeting using a numerical Likert-type scale).

Principal results and conclusions : Average review time per proposal was slightly shorter for teleconferences (20.0 minutes) than face-to-face meetings (23.9 minutes) (ANOVA: F(3,61) = 14.54; p < 0.001). Reviewer agreement ranged from ICC = 0.84 to 0.87 across all years, with no clear difference between meeting settings. Slightly more (12.7%) proposals assessed in teleconferences than in face-to-face meetings (10.0%) crossed the funding threshold either way after discussion. After peer review discussion, 19.8% of proposals scored in teleconferences and 15.4% in face-to-face meetings fell within the fundable score range. The authors' conclusion that most of the outcomes were unaffected by the review setting appears reasonable, although it is unclear how important the reduced discussion time in teleconferences is, and unclear whether the reviewers reported any limitations to the process.
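The intra-class correlation used here to quantify reviewer agreement can be estimated from a one-way ANOVA decomposition. The sketch below implements ICC(1) from scratch on invented scores, not the AIBS data.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1). `ratings` is a list of proposals,
    each a list of k scores from interchangeable reviewers."""
    n = len(ratings)
    k = len(ratings[0])
    grand_mean = sum(sum(row) for row in ratings) / (n * k)
    # Between-proposal and within-proposal mean squares.
    ms_between = k * sum((sum(row) / k - grand_mean) ** 2 for row in ratings) / (n - 1)
    ms_within = sum((x - sum(row) / k) ** 2 for row in ratings for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented example: three reviewers score four proposals on a 1-9 scale.
scores = [[2, 3, 2], [7, 8, 8], [5, 5, 6], [9, 9, 8]]
agreement = icc_oneway(scores)  # approaches 1 as reviewers converge
```

An ICC in the 0.84–0.87 range, as reported in the study, indicates that most of the variance in scores lies between proposals rather than between the reviewers scoring them.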

Key strengths and limitations : The innovation and comparator were used in 'live' review rounds of a national funder to allocate funding, with both approaches replicated in two years. Sample size was relatively large (circa 1600 proposals in total; range 291 to 669 per meeting). The retrospective case-control design is a limitation, but reviewer demographic characteristics appear to have been similar across the groups and years. Uncertainties are that the identity of the 'consumer reviewers' is unclear, and that only a limited set of reviewers' views is reported, making it unclear how representative they are.

WebEx-based virtual peer review meetings (Vo et al [ 100 ] ) Overview : Evaluation of the first six unplanned virtual review sessions conducted during the US 2012 hurricane season at the Agency for Healthcare Research and Quality (AHRQ), to assess their effects on review outcomes and to compare them with five face-to-face peer-review sessions.

Innovation method : Virtual online meetings of peer reviewers using WebEx software, which provided audio, high-definition video, real-time content sharing, and the capability to feed up to seven simultaneous webcam videos. A 30-minute basic training session on use of WebEx software was provided. Four Study Section meetings and two Special Emphasis Panel meetings were conducted. In total, 110 reviewers participated, ranging from 7 to 24 per section or panel. Of 194 total grant proposals reviewed, 128 were discussed, ranging from six to 34 proposals per session. Low-scoring proposals were not discussed so as to give reviewers ample time to concentrate on those with higher scores.

Method for assessing the innovation : Retrospective case-control type study which compared the six unplanned virtual grant proposal review sessions held in October 2012 against five face-to-face review sessions held in June 2012. The time taken for peer review and the cost of peer review were recorded. Views of reviewers on the advantages and disadvantages of the WebEx software and review process were obtained using a 10-item questionnaire.

Principal results and conclusions : The mean time spent discussing each proposal was 20 minutes for virtual review sessions and 26 minutes for face-to-face sessions, and the average meeting lengths were 587 minutes and 430 minutes respectively. This gave costs per reviewer per day of US$324 and US$1,314 respectively (a reduction in costs of 76%). The authors concluded that the virtual review process is a replicable and low-cost method of review, but this is subject to the proviso that there are numerous uncertainties around the methods ( Table 2 ). Furthermore, reviewers' responses to questionnaires indicated that 26% experienced technical difficulties and 33% would not use virtual review again.

Key strengths and limitations : The innovation and comparator were used in ‘live’ review rounds of a national funder to allocate funding, with five or six replicate sessions analysed. However, no information about the face-to-face sessions is provided so it is unclear whether these reflected usual AHRQ practice and whether they had comparable proposals, reviewers, and overall processes to the virtual review sessions. There is also uncertainty around several aspects of the virtual peer review process which were not reported, and whether all costs had been accounted for, which limits generalisability.

Other approaches

Modified Delphi process for selecting ‘innovator’ grants (Holliday and Robotin [ 27 ] ) Overview : ‘Modified Delphi’ process, conducted online by the Cancer Council of New South Wales (CCNSW, Australia) for selecting ‘innovator’ grants, based on proposals limited to six pages. The approach was developed because most potential cancer expert peer reviewers were listed as investigators, or had conflicts to declare. This made it inappropriate to use traditional peer review in which local experts are invited as peer reviewers. The grants aimed to support innovative research unlikely to be considered by traditional funding bodies.

Innovation method : The process was applied to the 10 best proposals received and involved five non-conflicted experts who held pancreatic cancer research grants in another country (the US). Three Delphi rounds were held over a 16-day period in March 2009 to score: (1) scientific merit (clarity, measurability of the endpoint, scientific quality, originality, adequacy of the study design to achieve the stated goal, whether the potential impact would warrant funding); (2) innovativeness; and (3) level of risk. At the end of each round scores were converted to ranks and the two lowest-ranking proposals at each round were excluded. The four remaining proposals were funded.
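The round structure (score, convert to ranks, drop the two lowest, repeat) can be sketched as follows. The scores are hypothetical and held fixed across rounds purely for illustration; in the actual Delphi process the reviewers re-scored the surviving proposals at each round against different criteria.

```python
def delphi_round(scores, shortlist):
    """One round: rank the shortlist by score (higher = better here,
    an illustrative assumption) and drop the two lowest-ranked."""
    ordered = sorted(shortlist, key=lambda p: scores[p], reverse=True)
    return ordered[:-2]

# Hypothetical fixed scores for ten proposals (p0 worst ... p9 best).
scores = {f"p{i}": i for i in range(10)}
shortlist = list(scores)
for _ in range(3):  # three rounds: 10 -> 8 -> 6 -> 4 proposals
    shortlist = delphi_round(scores, shortlist)
```

Eliminating a fixed number per round means the number of funded proposals (four here) is determined in advance by the number of rounds, which suits a scheme with a pre-set budget for a small number of grants.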

Method for assessing the innovation : Single-group prospective study in which reviewer agreement was assessed at the end of each round. Reviewers were provided with a table of de-identified scores and an overall ranking of proposals and were asked to advise whether they wished to proceed to the next round, or raise any objections. On completion of the Delphi process feedback was sought from the reviewers on the process, its usefulness, and possible alternatives or modifications (methods for obtaining feedback are not explicitly reported).

Principal results and conclusions : The authors’ conclusion was that “the modified Delphi process was an efficient, transparent and equitable method of reviewing novel grant proposals in a specialised field of research, where no local expertise was available” (p. 225). Reviewer feedback indicated that additional discussion would be helpful, suggesting that the innovation may benefit from further modification.

Key strengths and limitations : The innovation was used in a ‘live’ review round of a national funder to allocate funding. The process was relatively simple and quick, although it was only tested in one small group of five reviewers, and assessed only 10 proposals. As such, the generalisability is likely to be limited to very small-scale grant programmes or programmes where a subset of the ‘best’ proposals has already been identified for further prioritisation. Further research would be needed to confirm the findings and clarify whether the method could accommodate a larger number of reviewers and proposals. Several aspects of the methodology are unclear, particularly relating to the assessment of reviewer feedback.

Inclusion of patient-centred stakeholders in peer review meetings (Fleurence et al [ 57 ] ) Overview : The study explored contributions of scientist, patient, and stakeholder reviewers (e.g. nurses, physicians, other caregivers, patient advocates) to the merit-review process of the Patient-Centered Outcomes Research Institute (PCORI) in its inaugural funding round. The rationale was that using scientists alone might bias against novelty, and could lead to selection of proposals similar to the scientists' interests.

Innovation method : The two phase inaugural PCORI merit-review process. In phase one (no discussion), proposals (n = 480) were reviewed by three scientific reviewers who submitted their reviews online. Reviewers received webinar training in PCORI’s review process and criteria. Proposals with average scores in the top third (n = 152) moved to phase two. Proposals in phase two were first given “pre-discussion” scores by two scientists (who did not participate in phase one), one patient and one stakeholder. These four lead reviewers had access to phase one critiques and scores. Patient and stakeholder reviewers based their overall score on three of eight PCORI merit criteria (innovation and potential for improvement; patient centeredness; patient and stakeholder engagement). Proposals in the top two-thirds based on the four lead reviewers’ scores (n = 98) were then given a final “post discussion” score by each member of a 21-person panel (including revised scores from the lead reviewers) during a face-to-face meeting. Lead reviewer scores were available to all reviewers during the discussion. The 25 proposals with the best average post discussion scores were funded. In total 59 scientists, 21 patients and 31 stakeholders participated in phase two.
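The cascade of cut-offs in this two-phase process (480 to 152 to 98 to 25 proposals) amounts to repeatedly keeping a top fraction by mean score. A minimal sketch, assuming higher mean scores are better (the direction of PCORI's actual scale is not stated here) and using a small invented pool of proposals:

```python
def top_fraction(mean_scores, frac):
    """Keep the ids whose mean score falls in the top `frac` of the pool
    (higher = better, an illustrative assumption)."""
    keep = max(1, round(len(mean_scores) * frac))
    best = sorted(mean_scores.items(), key=lambda kv: kv[1], reverse=True)[:keep]
    return dict(best)

# Nine hypothetical proposals with mean scores 1.0 ... 9.0.
phase1 = {f"prop{i}": float(i) for i in range(1, 10)}
phase2_pool = top_fraction(phase1, 1 / 3)      # top third advance (cf. 480 -> 152)
panel_pool = top_fraction(phase2_pool, 2 / 3)  # top two-thirds reach the panel (cf. 152 -> 98)
```

Each successive pool is re-scored by a different reviewer mix before the next cut, which is what lets the patient and stakeholder scores in phase two change the final ranking relative to a scientist-only phase one.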

Method for assessing the innovation : Single-group study. Agreement between scientist scores and patient and stakeholder scores was assessed before and after the in-person panel discussions in phase two. The effect on funding decisions of using the two-phase (scientist, patient and stakeholder) process rather than a one-phase (scientist-only) process was assessed by comparing proposal rankings after each phase. Web-based surveys and focus groups were used to elicit reviewers’ views.

Principal results and conclusions : Of the 25 proposals with the best scores after phase two, only 13 had ranked in the top 25 after phase one, indicating that patient and stakeholder reviewers influenced funding decisions. Graphical distributions of scores suggested that reviewer agreement improved after discussion for all reviewer types, with strong agreement in post-discussion scores between scientists and non-scientists. Patients and stakeholders appeared to score more critically than scientists. A summary of themes emerging from the surveys and focus groups identified concerns about non-scientists’ technical expertise and a perceived ‘hierarchy’ among reviewers. The authors acknowledged that the generalisability of the findings is uncertain.

Key strengths and limitations : The innovation was tested in a ‘live’ (inaugural) review round of a national funder, with a relatively large number of proposals, but was limited by its single-group design, and it is unclear whether data collection was prospective or retrospective. Little information is provided about the web survey and focus groups, although it is stated that separate groups were held for a random sample of scientific reviewers, all patients and all stakeholder reviewers.

Peer reviewer training module to improve scoring accuracy (Sattler et al [ 95 ] ) Overview : Development and evaluation of a brief training programme for grant reviewers that aimed to increase inter-rater reliability, rating scale knowledge, and effort to read National Institutes of Health (NIH) grant review criteria (participants did not actually review any proposals).

Innovation method : Participants visited a secure website that presented informed consent information, introduced the study, presented an 11-minute training programme video, offered an option to read the criteria for the funding mechanism, and presented a questionnaire. The video emphasised five issues: (1) grant agencies depend on reviewers for accurate information; (2) reviewer scores influence funding decisions; (3) explanation of the NIH rating scale and the definitions of minor, moderate, and major weakness; (4) how to assign evaluation scores that indicate how well the proposal matches the agency’s criteria; and (5) why it is important to carefully read and understand the agency’s criteria. The host stressed that the rating scale used in the video may differ from other grant review rating scales, as well as rating scales used in other settings, and gave an example of those differences.

Method for assessing the innovation : Two-group randomised controlled trial (RCT) comparing training and no-training groups. Participants in the no-training group visited a secure website that presented informed consent information, introduced the study, offered an option to read the criteria for the funding mechanism, and presented a questionnaire. Time to read the grant review criteria was recorded for both groups. Reviewers’ understanding of how to apply scores, and inter-rater agreement in scoring were also assessed for both groups, based on results of the questionnaire. Reviewer agreement was assessed using intra-class correlation coefficients (ICC); Poisson regression was used to assess significance of differences in time to read grant criteria between experienced and novice reviewers.

Principal results and conclusions : Inter-rater reliability was significantly higher in the video training group (ICC = 0.89; 95% CI 0.71 to 0.99) than in the no-training group (ICC = 0.61; 95% CI 0.32 to 0.96). Participants who received video training spent more time reading grant review criteria (6.1 minutes, SD = 4.8) than those in the no-training group (4.2 minutes, SD = 4.8; Poisson regression, z = 2.17, p = 0.03). Experienced reviewers spent more time reading the criteria (6.0 minutes, SD = 5.6) than novice reviewers (4.2 minutes, SD = 4.0; Poisson regression, z = 3.22, p = 0.001) (reported only for both groups pooled). The authors concluded that the training video increased scoring accuracy, inter-rater reliability, and the amount of time spent reading the review criteria.
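
The intra-class correlation coefficients reported above can be reproduced from a subjects-by-raters score matrix. As a minimal sketch — the ICC variant used by the study is not stated, so the common two-way random, absolute-agreement, single-rater form ICC(2,1) is assumed here, demonstrated on the classic Shrout and Fleiss (1979) worked example rather than the study's data:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = scores.shape  # n subjects (rows) rated by k raters (columns)
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    # Residual sum of squares = total SS minus subject and rater SS.
    ss_err = (((scores - grand) ** 2).sum()
              - ms_rows * (n - 1) - ms_cols * (k - 1))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return ((ms_rows - ms_err)
            / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n))

# Shrout & Fleiss (1979) worked example: 6 subjects rated by 4 judges.
ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]], dtype=float)
print(round(icc_2_1(ratings), 2))  # → 0.29
```

In practice a library implementation (e.g. pingouin's `intraclass_corr`) would also report the confidence interval, which is what the study's ICC ranges above describe.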

Key strengths and limitations : The RCT design suggests potentially high internal validity, although superficial reporting means that there are unclear risks of several types of bias. The study has low generalisability due to its focus on a specific part of an NIH scoring system, together with the experimental setting which did not involve assessment of ‘real’ proposals or making any funding decisions.

Our study is the most detailed systematic description to date of the characteristics of research into the peer review of funding proposals in the health sciences. The systematic map has revealed a burgeoning area of investigation, with just under half the studies in the map having been published since 2010. The topics investigated were diverse and the studies were mainly observational in design, typically comprising longitudinal or cross-sectional studies, or retrospective analyses of data collected during funding proposal calls. Experimental studies were very rare, which may reflect a preference to study peer review within the context of real-world funding programmes, for example on grounds of feasibility, potentially at the expense of internal validity.

Our systematic review included a broad range of innovations and assessed their impact on various measures of effectiveness and efficiency. The majority of the outcomes measured represent ways to make peer review (as well as the research funding process in general) more efficient. The studies showed that innovations could reduce the time spent on peer review and the costs incurred, in varying magnitudes. For example, in one retrospective, case-control-type study, use of teleconferences compared to face-to-face meetings led to a slight reduction in discussion times of up to 10 minutes per proposal, though the overall importance of this reduction was not quantified in terms of changes in costs, or perceived significance. [ 47 , 60 ] In another retrospective, case-control-type study, use of internet-based video conferences compared to face-to-face meetings resulted in shorter discussion times per proposal (by around six minutes on average) and shorter average meeting lengths (by around 2.5 hours). [ 100 ] This was associated with an estimated cost saving of around $1000 (US dollars) per reviewer per day (a 76% reduction), which could be considered an important efficiency improvement. The peer review time per proposal was similar between two variants of an innovation that included shorter proposal forms and smaller peer review panels (3.6 to 3.9 hours), assessed in a prospective parallel group study. [ 6 ] The authors of this study estimated that use of these simplified panels could result in cost savings of between $2.1 to $4.9 million (Australian dollars) per year compared to the standard process of a larger panel and a longer proposal form (equating to a reduction in costs of between 34% to 78%). Again, this could represent substantial savings to funders, particularly those that operate at a large scale.

A prospective uncontrolled study [ 23 ] which evaluated a simplified process (comprising short proposal forms with accelerated peer review) reported relatively short peer review times per proposal (an average of 36 minutes) and an average time from proposal submission to funding outcome notification of six to eight weeks. This suggests that accelerated peer review can enable timely funding decisions in certain contexts. The study also provided comprehensive feedback to applicants (both successful and unsuccessful) on how their proposals could be improved, and the authors noted that over time they received fewer proposals but those submitted were of better quality. However, the trade-off between the costs to funders (in terms of the time and resources required to provide detailed feedback to applicants) and the potential benefits to funders and applicants (in terms of production and submission of fewer, better quality proposals) was not fully quantified by this study. Provision of detailed feedback to applicants has the potential to improve the efficiency of the research funding system as a whole, and is an area for future research to investigate.

A number of the studies included in the systematic review measured inter-reviewer agreement, in terms of scores and in funding decisions, with varied findings. For example, good reviewer agreement was found in the study which compared peer review by teleconference discussions with face-to-face meetings, with ICCs ranging between 0.84 and 0.87. [ 47 , 60 ] The authors suggested that this, and the absence of other differences in review outcomes between the two approaches, supports the case for moving to teleconferences. In contrast, a study which compared ranking of proposals by a committee of 11 reviewers against ranking of proposals by two peer reviewers found poor reviewer agreement in ranking scores (and therefore decisions to fund) as measured by a kappa score of 0.36. [ 29 ] Lack of good agreement might not necessarily be a limitation of peer review if this is offset by other efficiency benefits such as time and cost reductions. However, none of the studies included in our systematic review measured all of these outcomes, so possible trade-offs among different aspects of efficiency cannot be ascertained currently.
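
Agreement statistics of the kind reported above can be made concrete. As a minimal sketch — using hypothetical fund/reject decisions, not data from the cited studies, and assuming the common Cohen form of kappa, which the study comparing committee and peer-reviewer rankings does not specify — kappa corrects raw agreement for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical fund/reject decisions on eight proposals by two reviewers.
a = ["fund", "fund", "reject", "reject", "fund", "reject", "reject", "fund"]
b = ["fund", "reject", "reject", "reject", "fund", "reject", "fund", "fund"]
print(round(cohens_kappa(a, b), 2))  # → 0.5
```

Here the two reviewers agree on 6 of 8 proposals (75%), but because half the agreement is expected by chance, kappa is only 0.5 — which illustrates why a kappa of 0.36, as in the cited study, indicates poor agreement despite moderate raw concordance.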

There were mixed findings across the studies indicating perceived benefits but also drawbacks of the innovations. For example, in the study in which patients and care-giving stakeholders peer reviewed funding proposals alongside scientific reviewers, scientists appreciated the perspectives offered by patients and stakeholders and there was recognition of a collegial and respectful process. [ 57 ] However, there was concern from scientists about the level of technical expertise of some non-scientist reviewers. The study comparing internet-based video conferences to face-to-face meetings [ 100 ] reported both positive and negative views expressed by peer reviewers. Perceived advantages included less travel, decreased costs, and faster reviews. However, some technical problems were experienced, and there was concern that video-conferences might impair interaction among reviewers and result in less thorough reviews. It is important that any implementation of these peer review innovations takes into account the limitations, and future evaluations should thoroughly evaluate process issues to facilitate optimal planning and execution of peer review activity.

Our findings can be contextualised with those of a non-systematic literature review by Guthrie et al. [ 105 ], published in 2017, which included 105 empirical articles on the effectiveness and burden of peer review for grant funding. That review had a broader focus than our systematic review, covering issues such as bias and fairness, reliability, timeliness of peer review, and the burden of peer review on the research system as a whole. It also included studies of peer review in disciplines other than health sciences. The review included many of the studies in our systematic review, but described them in less detail. Notably, Guthrie et al.’s review incorporated a different conceptualisation of effectiveness and efficiency from ours: ‘effectiveness’ is a multi-dimensional concept that incorporates factors such as whether peer review selects the ‘best’ research, and whether it is reliable, fair, accountable, timely and has the confidence of key stakeholders. The ‘burden’ of peer review on the research system is a concept that incorporates the time, resources and costs expended in the production and review of grant applications. ‘Efficiency’ is the trade-off between effectiveness and burden; thus, an efficient peer review system is one that has one or more markers of effectiveness whilst being low in system burden. Guthrie et al. [ 105 ] found a lack of evidence about the overall efficiency of peer review of grant applications. In terms of markers of effectiveness, they found evidence indicating a bias against innovative research, and evidence that peer review predicts future research performance poorly. They found some evidence to suggest a high burden on applicants, though much of the research evidence in their review focused on reducing burden on funders and reviewers.
Applying Guthrie’s conceptualisation to our systematic review results, there is evidence of a reduction in burden for funders (which we refer to as efficiency in our review). However, evidence for the effectiveness of peer review in our systematic review is limited to whether innovations that aim to reduce peer review burden can lead to the same research applications being funded as would have been funded under existing (more burdensome) peer review systems. The studies in our systematic review did not assess other markers of effectiveness, such as the predictive ability to identify the best research. Thus, we cannot conclude that there is strong evidence to support improving the ‘efficiency’ (as defined by Guthrie et al. [ 105 ]) of peer review of grant applications, but we can conclude there is evidence (albeit with methodological limitations) on burden reduction.

Our research used systematic methods to identify, collate, appraise and analyse the evidence, employing standard approaches in evidence synthesis. [ 106 , 107 ] Extensive internet searching was conducted to identify material not formally published in academic journals. Quality assurance procedures, such as independent screening and data checking, were used where possible to minimise bias and error. However, there were some potential limitations of this study. We could not check the reference lists of all studies included in the map to identify any additional relevant studies, though we did check the reference lists of all studies included in the systematic review. Not all of the keywords applied to studies included in the map were checked by a second reviewer. However, as mentioned above, following checking of a random sample of studies the level of reliability between reviewers was considered sufficient as few amendments were necessary. We restricted inclusion to studies published in the English language. It is unknown whether there is a significant pool of relevant evidence published in other languages. The scope of our evidence synthesis is limited to studies of peer review of research proposals in health; we did not investigate studies of peer review of research proposals in other disciplines. Whilst it is possible that findings from studies in non-health disciplines could also have relevance to health research, a substantial effort would be required to synthesise the evidence across multiple disciplines. Our findings suggest, however, that even within health research the studies had limited generalisability.

A strength of this evidence synthesis was the close consultation with stakeholders throughout the project, and in particular their role in setting the focus for the systematic review. [ 21 ] It should be reiterated that the scope of the systematic review was to focus on peer review innovations evaluated for effectiveness and efficiency. Only a small proportion (around 10%) of the evidence from the map met the inclusion criteria for the review, meaning that there remains a larger pool of evidence that could be included in future systematic reviews focusing on other aspects of peer review. Also of note, our systematic review included studies of innovations, which we defined as being new activity distinct from existing practice (or in addition to existing practice). Some of the literature evaluated only what appeared to be existing peer review practice, and useful information could be gleaned from these studies in further reviews.

This project has found that there is increasing international research activity into the peer review of health research funding. Overall, it appears that simplifying peer review by shortening proposals, using smaller panels of reviewers and accelerating the process could reduce the time needed for review, speed up the general process, and reduce costs. However, this might come at the expense of peer review quality, a key aspect that has not been fully assessed. Virtual peer review using videoconferencing or teleconferencing appears promising for reducing costs by avoiding the need for reviewers to travel, but again any consequences for the quality of the peer review itself have not been adequately assessed. All of the eight studies included in the systematic review were relatively weak methodologically or had variable generalisability, which limits how much emphasis should be placed on their results.

Given the methodological limitations of the evidence included in this systematic review, it is not possible to recommend direct implementation of these innovations currently. However, many of them appear promising based on current evidence and could be adapted as necessary by funders and subjected to evaluation. Future evaluations should be conducted to a sufficient standard to ensure high internal and external validity. In particular, we have identified a number of measures of the generalisability of studies, which we recommend that evaluators incorporate into the design and reporting of their work ( Table 3 ). Where feasible, experimental evaluations, including RCTs, should be conducted, together with economic evaluations to assess the costs of peer review innovations, as this is lacking in the currently available evidence.

Supporting information

S1 Appendix, S2 Appendix, S3 Appendix, S1 Database, S1 Protocol

Acknowledgments

Thanks to the NIHR Push the Pace 2 peer review working group for their input, as described in this manuscript.

Conducted on behalf of the NIHR Research on Research programme.

We thank Karen Welch, information specialist, Southampton Health Technology Assessments Centre (SHTAC), for developing and running the search strategy; and Wendy Gaisford (SHTAC) for assisting with data checking.

Funding Statement

This research was supported by the National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC) through its Research on Research programme. The views and opinions expressed are those of the authors and do not necessarily reflect those of the Department of Health, or of NETSCC. NIHR Stakeholders advised the research team on the scope of the systematic review as described in the manuscript. The NIHR had no role in the data collection, analysis or decision to publish or prepare the manuscript.

Data Availability


Five tips for developing useful literature summary tables for writing review articles

Volume 24, Issue 2

  • http://orcid.org/0000-0003-0157-5319 Ahtisham Younas 1 , 2 ,
  • http://orcid.org/0000-0002-7839-8130 Parveen Ali 3 , 4
  • 1 Memorial University of Newfoundland , St John's , Newfoundland , Canada
  • 2 Swat College of Nursing , Pakistan
  • 3 School of Nursing and Midwifery , University of Sheffield , Sheffield , South Yorkshire , UK
  • 4 Sheffield University Interpersonal Violence Research Group , Sheffield University , Sheffield , UK
  • Correspondence to Ahtisham Younas, Memorial University of Newfoundland, St John's, NL A1C 5C4, Canada; ay6133{at}mun.ca

https://doi.org/10.1136/ebnurs-2021-103417


Introduction

Literature reviews offer a critical synthesis of empirical and theoretical literature to assess the strength of evidence, develop guidelines for practice and policymaking, and identify areas for future research. 1 A review is often essential, and usually the first task, in any research endeavour, particularly in masters or doctoral level education. For effective data extraction and rigorous synthesis in reviews, the use of literature summary tables is of utmost importance. A literature summary table provides a synopsis of an included article, succinctly presenting its purpose, methods, findings and other information pertinent to the review. The aim of developing these tables is to give the reader the key information at a glance. Since there are multiple types of reviews (eg, systematic, integrative, scoping, critical and mixed methods) with distinct purposes and techniques, 2 there are various approaches to developing literature summary tables, making it a complex task, especially for novice researchers or reviewers. Here, we offer five tips for authors of review articles, relevant to all types of reviews, for creating useful and relevant literature summary tables. We also provide examples from our published reviews to illustrate how useful literature summary tables can be developed and what sort of information should be provided.

Tip 1: provide detailed information about frameworks and methods


Tabular literature summaries from a scoping review. Source: Rasheed et al . 3

The provision of information about conceptual and theoretical frameworks and methods is useful for several reasons. First, in quantitative reviews (those synthesising the results of quantitative studies) and mixed reviews (those synthesising both qualitative and quantitative studies to address a mixed review question), it allows readers to assess the congruence of the core findings and methods with the adapted framework and tested assumptions. In qualitative reviews (those synthesising the results of qualitative studies), this information helps readers recognise the underlying philosophical and paradigmatic stance of the authors of the included articles. For example, imagine that the authors of an article included in a review used phenomenological inquiry for their research. In that case, the review authors and the readers of the review need to know which philosophical stance (transcendental or hermeneutic) guided the inquiry, and the review authors should include it in their literature summary for that article. Second, information about frameworks and methods enables review authors and readers to judge the quality of the research, which allows the strengths and limitations of the article to be discerned. For example, suppose the authors of an included article intended to develop a new scale and test its psychometric properties, and to achieve this aim they used a convenience sample of 150 participants and performed exploratory (EFA) and confirmatory factor analysis (CFA) on the same sample. Such an approach indicates a flawed methodology, because EFA and CFA should not be conducted on the same sample. The review authors must include this information in their summary table: omitting it could lead to the inclusion of a flawed article in the review, thereby jeopardising the review’s rigour.
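
The sample-splitting issue described above can be sketched in a few lines. This is a minimal illustration with simulated (hypothetical) data; the point is simply that EFA and CFA should be fitted on disjoint halves of the sample, with any factor-analysis routine then run separately on each half:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(150, 10))  # hypothetical: 150 respondents, 10 scale items

# Randomly split respondents into two disjoint halves so that EFA and CFA
# are fitted on independent subsamples rather than the same one.
idx = rng.permutation(len(data))
efa_half, cfa_half = data[idx[:75]], data[idx[75:]]

# No participant appears in both subsamples.
assert set(idx[:75]).isdisjoint(set(idx[75:]))
```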

Tip 2: include strengths and limitations for each article

Critical appraisal of individual articles included in a review is crucial for increasing the rigour of the review. Despite using various templates for critical appraisal, authors often do not provide detailed information about each reviewed article’s strengths and limitations. Merely noting a quality score based on standardised critical appraisal templates is not adequate, because readers should be able to identify the reasons for assigning a weak or moderate rating. Many recent critical appraisal checklists (eg, the Mixed Methods Appraisal Tool) discourage review authors from assigning a quality score and recommend noting the main strengths and limitations of included studies. It is also vital that both methodological and conceptual strengths and limitations of the included articles are reported, because not all review articles include empirical research papers; rather, some reviews synthesise the theoretical aspects of articles. Providing information about conceptual limitations also helps readers judge the quality of the foundations of the research. For example, if you included a mixed-methods study in the review, reporting its methodological and conceptual limitations around ‘integration’ is critical for evaluating its strength. Suppose the authors collected qualitative and quantitative data but did not state the intent and timing of integration. In that case, the study is weak: integration occurred only at the level of data collection, and may not have occurred at the analysis, interpretation and reporting levels.

Tip 3: write conceptual contribution of each reviewed article

While reading and evaluating review papers, we have observed that many review authors only provide core results of the article included in a review and do not explain the conceptual contribution offered by the included article. We refer to conceptual contribution as a description of how the article’s key results contribute towards the development of potential codes, themes or subthemes, or emerging patterns that are reported as the review findings. For example, the authors of a review article noted that one of the research articles included in their review demonstrated the usefulness of case studies and reflective logs as strategies for fostering compassion in nursing students. The conceptual contribution of this research article could be that experiential learning is one way to teach compassion to nursing students, as supported by case studies and reflective logs. This conceptual contribution of the article should be mentioned in the literature summary table. Delineating each reviewed article’s conceptual contribution is particularly beneficial in qualitative reviews, mixed-methods reviews, and critical reviews that often focus on developing models and describing or explaining various phenomena. Figure 2 offers an example of a literature summary table. 4

Tabular literature summaries from a critical review. Source: Younas and Maddigan. 4

Tip 4: compose potential themes from each article during summary writing

While developing literature summary tables, many authors use the themes or subthemes reported in the included articles as the key results of their own review. Such an approach prevents the review authors from understanding the articles’ conceptual contributions, developing a rigorous synthesis, and drawing reasonable interpretations of results from individual articles. Ultimately, it affects the generation of novel review findings. For example, one of the articles about women’s healthcare-seeking behaviours in developing countries reported a theme ‘social-cultural determinants of health as precursors of delays’. Instead of using this theme as one of the review findings, the reviewers should read and interpret beyond the description given in the article, comparing and contrasting themes and findings across articles to find similarities and differences and to understand and explain the bigger picture for their readers. Therefore, while developing literature summary tables, think twice before using predeveloped themes. Including your own themes in the summary tables (see figure 1 ) demonstrates to readers that a robust method of data extraction and synthesis has been followed.

Tip 5: create your personalised template for literature summaries

Templates are often available for data extraction and the development of literature summary tables. They may take the form of a table, chart or structured framework that extracts essential information about every article; the commonly extracted information includes authors, purpose, methods, key results and quality scores. While extracting all relevant information is important, such templates should be tailored to the needs of the individual review. For example, for a review about the effectiveness of healthcare interventions, a literature summary table must include information about the intervention: its type, content, timing, duration, setting, effectiveness, negative consequences, and the experiences of its receivers and implementers. Similarly, literature summary tables for articles included in a meta-synthesis must include information about the participants’ characteristics, research context and conceptual contribution of each reviewed article, to help the reader make an informed judgement about the usefulness of each individual article and of the review as a whole.
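
A tailored extraction template of the kind described above can be made explicit as a simple data structure. The field names below are illustrative assumptions following the elements listed for an intervention-effectiveness review, and the entry shown is entirely hypothetical:

```python
from dataclasses import dataclass, asdict

# Hypothetical extraction template for an intervention-effectiveness review;
# field names would be adapted to the needs of each individual review.
@dataclass
class InterventionSummary:
    authors: str
    purpose: str
    intervention_type: str
    content: str
    timing_and_duration: str
    setting: str
    effectiveness: str
    negative_consequences: str
    user_experiences: str
    strengths_and_limitations: str
    conceptual_contribution: str

# One illustrative (invented) row of the summary table.
row = InterventionSummary(
    authors="Smith et al. (2020)",
    purpose="Test a nurse-led education programme",
    intervention_type="Educational",
    content="Six weekly group sessions",
    timing_and_duration="6 weeks, 1 h/week",
    setting="Outpatient clinic",
    effectiveness="Improved self-care scores",
    negative_consequences="None reported",
    user_experiences="High acceptability",
    strengths_and_limitations="RCT; single site",
    conceptual_contribution="Supports experiential learning as a mechanism",
)
print(asdict(row)["setting"])  # → Outpatient clinic
```

Encoding the template once, then filling one instance per included article, makes it harder to omit a field (such as conceptual contribution) for any article.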

In conclusion, narrative or systematic reviews are almost always conducted as a part of any educational project (thesis or dissertation) or academic or clinical research. Literature reviews are the foundation of research on a given topic. Robust and high-quality reviews play an instrumental role in guiding research, practice and policymaking. However, the quality of reviews is also contingent on rigorous data extraction and synthesis, which require developing literature summaries. We have outlined five tips that could enhance the quality of the data extraction and synthesis process by developing useful literature summaries.

Twitter @Ahtisham04, @parveenazamali

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.


Estimates of global and regional prevalence of Helicobacter pylori infection among individuals with obesity: a systematic review and meta-analysis

  • Published: 10 April 2024


  • Alireza Sadeghi   ORCID: orcid.org/0000-0002-7950-3270 1 , 2 , 3 ,
  • Fatemeh Nouri   ORCID: orcid.org/0000-0002-4878-7848 1 ,
  • Ehsan Taherifard   ORCID: orcid.org/0000-0002-8438-4990 1 ,
  • Mohammad Amin Shahlaee   ORCID: orcid.org/0009-0003-8942-7640 1 &
  • Niloofar Dehdari Ebrahimi   ORCID: orcid.org/0000-0002-9866-8361 1 , 2  

The prevalence of obesity is an escalating concern in modern populations, predominantly attributed to the widespread adoption of sedentary lifestyles observed globally. Extensive research has established a significant association between obesity and Helicobacter pylori ( H. pylori ). Nonetheless, a comprehensive assessment of the global prevalence of H. pylori among individuals with obesity remains undetermined.

A systematic search strategy was applied to PubMed, Scopus, and Web of Science. The resulting records were screened using the Rayyan online tool for the management of systematic reviews. Freeman–Tukey double arcsine transformation was used. Subgroup analyses (continent, regional classifications, developmental status, religion, global hemisphere, income, access to international waters, and H. pylori eradication) and multivariate meta-regression (latitude, longitude, male-to-all ratio, mean age, and body mass index) were done to estimate the effects of the moderators. Risk of bias assessment was done using JBI checklist for prevalence studies.
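
The Freeman–Tukey double arcsine transformation mentioned above stabilises the variance of study-level prevalences before pooling. A minimal sketch with hypothetical study data: fixed-effect weights of n + 0.5 (the inverse of the transform's approximate variance) are an assumption, and the simple sin²(t/2) back-transformation shown here approximates the fuller Miller back-transformation (with harmonic-mean sample size) used by most meta-analysis software:

```python
import math

def ft_double_arcsine(x: int, n: int) -> float:
    """Freeman–Tukey double arcsine transform of x events in n subjects."""
    return (math.asin(math.sqrt(x / (n + 1)))
            + math.asin(math.sqrt((x + 1) / (n + 1))))

def pooled_prevalence(events_and_sizes):
    """Fixed-effect pooling on the transformed scale, weights = n + 0.5
    (inverse of the transform's approximate variance 1 / (n + 0.5))."""
    weights = [n + 0.5 for _, n in events_and_sizes]
    t = [ft_double_arcsine(x, n) for x, n in events_and_sizes]
    t_bar = sum(w * ti for w, ti in zip(weights, t)) / sum(weights)
    # Simple approximate back-transformation to a proportion.
    return math.sin(t_bar / 2) ** 2

# Hypothetical studies: (H. pylori-positive, total individuals with obesity)
studies = [(32, 100), (150, 500), (60, 210)]
print(round(pooled_prevalence(studies), 3))
```

In practice a random-effects model (eg, `metafor`'s `escalc` with measure `PFT` in R) would also be fitted to quantify between-study heterogeneity, which the review reports as substantial.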

A total of 472,511 individuals with obesity from 208 studies were included. The global pooled prevalence of H. pylori among individuals with obesity was 32.3% (95% CI 26.9–38.0%). South America had the highest prevalence. Across the different country classifications, resource-rich, low-/middle-income, developing, and Islamic countries had the highest prevalence. Lower pooled prevalence was observed in studies with adequate sample sizes (n ≥ 270).
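
Thresholds for an "adequate" sample size in prevalence studies are commonly derived from the precision-based formula n = Z²·p(1−p)/d². The sketch below is illustrative only: the expected prevalence and precision are assumed inputs, not the values behind the study's n ≥ 270 cutoff:

```python
import math

def prevalence_sample_size(p, d, z=1.959964):
    # n = Z^2 * p * (1 - p) / d^2 : participants needed to estimate a
    # prevalence p to within absolute precision d, at the confidence
    # level implied by z (z = 1.959964 for 95% confidence)
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# illustrative inputs: expected prevalence ~32%, precision of +/- 5 percentage points
print(prevalence_sample_size(0.32, 0.05))
```

Note that the required n peaks when the expected prevalence is 50%, so pilots or prior reviews are often used to choose p before recruiting.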

These findings can inform future health policies for preventing and treating H. pylori infection. However, the variability among the included studies indicates the need for more population-based research.


Data availability

The data underlying this article are available in the article and in its online supplementary material.

Abbreviations

  • H. pylori: Helicobacter pylori
  • GI: Gastrointestinal
  • JBI: Joanna Briggs Institute
  • BMI: Body mass index
  • COVID-19: Coronavirus disease of 2019
  • RoB: Risk of bias


Author information

Authors and Affiliations

Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran

Alireza Sadeghi, Fatemeh Nouri, Ehsan Taherifard, Mohammad Amin Shahlaee & Niloofar Dehdari Ebrahimi

Transplant Research Center, Shiraz University of Medical Sciences, Shiraz, Fars, Iran

Alireza Sadeghi & Niloofar Dehdari Ebrahimi

Gastroenterohepatology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran

Alireza Sadeghi


Contributions

AS conceptualized, supervised, performed the search and analysis, and visualized the results. AS and NDE screened the records for eligibility. NDE and FN extracted the data. MAS and ET assessed the quality of the studies. AS and ET provided the draft. NDE revised the manuscript. All authors read and approved the final version of the manuscript.

Corresponding authors

Correspondence to Alireza Sadeghi or Niloofar Dehdari Ebrahimi .

Ethics declarations

Conflict of interest

The authors declare that the present study was done in the absence of any financial and personal competing interests.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 2412 KB)

Supplementary file 2 (JPG 1624 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Sadeghi, A., Nouri, F., Taherifard, E. et al. Estimates of global and regional prevalence of Helicobacter pylori infection among individuals with obesity: a systematic review and meta-analysis. Infection (2024). https://doi.org/10.1007/s15010-024-02244-7

Download citation

Received : 22 December 2023

Accepted : 18 March 2024

Published : 10 April 2024

DOI : https://doi.org/10.1007/s15010-024-02244-7


  • Meta-analysis
  • Systematic review
