Research Methods: How to Perform an Effective Peer Review
Elise Peterson Lu , Brett G. Fischer , Melissa A. Plesac , Andrew P.J. Olson; Research Methods: How to Perform an Effective Peer Review. Hosp Pediatr November 2022; 12 (11): e409–e413. https://doi.org/10.1542/hpeds.2022-006764

Scientific peer review has existed for centuries and is a cornerstone of the scientific publication process. Because the number of scientific publications has rapidly increased over the past decades, so has the number of peer reviews and peer reviewers. In this paper, drawing on the relevant medical literature and our collective experience as peer reviewers, we provide a user guide to the peer review process, including discussion of the purpose and limitations of peer review, the qualities of a good peer reviewer, and a step-by-step process of how to conduct an effective peer review.

Peer review has been a part of scientific publications since 1665, when the Philosophical Transactions of the Royal Society became the first publication to formalize a system of expert review. 1,2 It became an institutionalized part of science in the latter half of the 20th century and is now the standard in scientific research publications. 3 In 2012, there were more than 28 000 scholarly peer-reviewed journals, and more than 3 million peer-reviewed articles are now published annually. 3,4 However, even with this volume, most peer reviewers learn to review “on the (unpaid) job” and no standard training system exists to ensure quality and consistency. 5 Expectations and format vary between journals, and most, but not all, provide basic instructions for reviewers. In this paper, we provide a general introduction to the peer review process and identify common strategies for success as well as pitfalls to avoid.

What is the Purpose of Peer Review?

Modern peer review serves 2 primary purposes: (1) as “a screen before the diffusion of new knowledge” 6 and (2) as a method to improve the quality of published work. 1,5

As screeners, peer reviewers evaluate the quality, validity, relevance, and significance of research before publication to maintain the credibility of the publications they serve and their fields of study. 1 , 2 , 7   Although peer reviewers are not the final decision makers on publication (that role belongs to the editor), their recommendations affect editorial decisions and thoughtful comments influence an article’s fate. 6 , 8  

As advisors and evaluators of manuscripts, reviewers have an opportunity and responsibility to give authors an outside expert’s perspective on their work. 9   They provide feedback that can improve methodology, enhance rigor, improve clarity, and redefine the scope of articles. 5 , 8 , 10   This often happens even if a paper is not ultimately accepted at the reviewer’s journal because peer reviewers’ comments are incorporated into revised drafts that are submitted to another journal. In a 2019 survey of authors, reviewers, and editors, 83% said that peer review helps science communication and 90% of authors reported that peer review improved their last paper. 11  

What Makes a Good Peer Reviewer?

Expertise: Peer reviewers should be up to date with current literature, practice guidelines, and methodology within their subject area. However, academic rank and seniority do not define expertise and are not correlated with performance in peer review. 13

Professionalism: Reviewers should be reliable and objective, aware of their own biases, and respectful of the confidentiality of the peer review process.

Critical skill: Reviewers should be organized, thorough, and detailed in their critique, with the goal of improving the manuscript under review, regardless of disposition. They should provide constructive comments that are specific and addressable, referencing literature when possible. A peer reviewer should leave a paper better than he or she found it.

How Do You Decide Whether to Review a Paper?

Is the manuscript within your area of expertise? Generally, if you are asked to review a paper, it is because an editor felt that you were a qualified expert. In a 2019 survey, 74% of requested reviews were within the reviewer’s area of expertise. 11 This, of course, does not mean that you must be widely published in the area, only that you have enough expertise and comfort with the topic to critique and add to the paper.

Do you have any biases that may affect your review? Are there elements of the methodology, content area, or theory with which you disagree? Some disagreements between authors and reviewers are common, expected, and even helpful. However, if a reviewer fundamentally disagrees with an author’s premise such that he or she cannot be constructive, the review invitation should be declined.

Do you have the time? The average review for a clinical journal takes 5 to 6 hours, though many take longer depending on the complexity of the research and the experience of the reviewer. 1 , 14   Journals vary on the requested timeline for return of reviews, though it is usually 1 to 4 weeks. Peer review is often the longest part of the publication process and delays contribute to slower dissemination of important work and decreased author satisfaction. 15   Be mindful of your schedule and only accept a review invitation if you can reasonably return the review in the requested time.

Once you have determined that you are the right person and decided to take on the review, reply to the inviting e-mail or click the associated link to accept (or decline) the invitation. Journal editors invite a limited number of reviewers at a time and wait for responses before inviting others. A common complaint among journal editors surveyed was that reviewers would often take days to weeks to respond to requests, or not respond at all, making it difficult to find appropriate reviewers and prolonging an already long process. 5  

How Do You Complete a Peer Review?

Now that you have decided to take on the review, it is best to have a systematic way of both evaluating the manuscript and writing the review. Various suggestions exist in the literature, but we will describe our standard procedure for review, incorporating specific do’s and don’ts summarized in Table 1.

Table 1. Dos and Don’ts of Peer Review

First, read the manuscript once without making notes or forming opinions to get a sense of the paper as whole. Assess the overall tone and flow and define what the authors identify as the main point of their work. Does the work overall make sense? Do the authors tell the story effectively?

Next, read the manuscript again with an eye toward review, taking notes and formulating thoughts on strengths and weaknesses. Consider the methodology and identify the specific type of research described. Refer to the corresponding reporting guideline if applicable (CONSORT for randomized controlled trials, STROBE for observational studies, PRISMA for systematic reviews). Reporting guidelines often include a checklist, flow diagram, or structured text giving the minimum information needed in a manuscript based on the type of research done. 16 This allows the reviewer to formulate a more nuanced and specific assessment of the manuscript.
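The guideline selection step above amounts to a simple lookup from study design to checklist. As a minimal illustrative sketch (the `guideline_for` helper and its fallback string are hypothetical; only the study types and guideline names come from the text):

```python
# Illustrative mapping of study designs to the reporting guidelines
# named in the text. This is a sketch, not an official registry.
REPORTING_GUIDELINES = {
    "randomized controlled trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review": "PRISMA",
}

def guideline_for(study_type: str) -> str:
    """Return the applicable reporting guideline, if one is listed."""
    return REPORTING_GUIDELINES.get(
        study_type.strip().lower(), "no specific guideline listed"
    )

print(guideline_for("Systematic review"))  # PRISMA
```

In practice the EQUATOR Network maintains the full index of such guidelines; the point is simply that identifying the study design determines which checklist structures your review.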

Next, review the main findings, the significance of the work, and what contribution it makes to the field. Examine the presentation and flow of the manuscript but do not copy edit the text. At this point, you should start to write your review. Some journals provide a format for their reviews, but often it is up to the reviewer. In surveys of journal editors and reviewers, a review organized by manuscript section was the most favored, 5 , 6   so that is what we will describe here.

As you write your review, consider starting with a brief summary of the work that identifies the main topic, explains the basic approach, and describes the findings and conclusions. 12,17 Though not included in every review, we have found this step helpful in ensuring that the work is conveyed clearly enough for the reviewer to summarize it. Include brief notes on the significance of the work and what it adds to current knowledge. Critique the presentation of the work: is it clearly written? Is its length appropriate? List any major concerns with the work overall, such as major methodological flaws or inaccurate conclusions that should disqualify it from publication, though do not comment directly on disposition. Then perform your review by section:

Abstract : Is it consistent with the rest of the paper? Does it adequately describe the major points?

Introduction : This section should provide adequate background to explain the need for the study. Generally, classic or highly relevant studies should be cited, but citations do not have to be exhaustive. The research question and hypothesis should be clearly stated.

Methods: Evaluate both the methods themselves and the way in which they are explained. Does the methodology used meet the needs of the questions proposed? Is there sufficient detail to explain what the authors did and, if not, what needs to be added? For clinical research, examine the inclusion/exclusion criteria, control populations, and possible sources of bias. Reporting guidelines can be particularly helpful in determining the appropriateness of the methods and how they are reported.

Some journals will expect an evaluation of the statistics used, whereas others will have a separate statistician evaluate, and the reviewers are generally not expected to have an exhaustive knowledge of statistical methods. Clarify expectations if needed and, if you do not feel qualified to evaluate the statistics, make this clear in your review.

Results: Evaluate the presentation of the results. Is information given in sufficient detail to assess credibility? Are the results consistent with the methodology reported? Are the figures and tables consistent with the text, easy to interpret, and relevant to the work? Make note of data that could be better detailed in figures or tables, rather than included in the text. Make note of inappropriate interpretation in the results section (this should be in discussion) or rehashing of methods.

Discussion: Evaluate the authors’ interpretation of their results, how they address limitations, and the implications of their work. How does the work contribute to the field, and do the authors adequately describe those contributions? Make note of overinterpretation or conclusions not supported by the data.
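The section-by-section organization above can be expressed as a small template builder. A hypothetical sketch, assuming nothing beyond the section order recommended in the text (the `format_review` function and its numbering style are our own invention):

```python
# Hypothetical helper that assembles reviewer comments under the
# section-by-section structure recommended in the text.
SECTION_ORDER = [
    "Summary", "Major concerns", "Abstract", "Introduction",
    "Methods", "Results", "Discussion",
]

def format_review(comments: dict) -> str:
    """Render comments as numbered points under each section heading."""
    lines = []
    for section in SECTION_ORDER:
        points = comments.get(section, [])
        if not points:
            continue  # omit sections with nothing to say
        lines.append(section + ":")
        lines.extend(f"  {i}. {point}" for i, point in enumerate(points, 1))
    return "\n".join(lines)

print(format_review({
    "Summary": ["Single-center cohort study; clearly written overall."],
    "Methods": ["Please state the inclusion/exclusion criteria explicitly."],
}))
```

The same skeleton adapts easily to a journal-specific review form when one is provided.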

The length of your review often correlates with your opinion of the quality of the work. If an article has major flaws that you think preclude publication, write a brief review that focuses on the big picture. Articles that may not be accepted but still represent quality work merit longer reviews aimed at helping the author improve the work for resubmission elsewhere.

Generally, do not include your recommendation on disposition in the body of the review itself. Acceptance or rejection is ultimately determined by the editor and including your recommendation in your comments to the authors can be confusing. A journal editor’s decision on acceptance or rejection may depend on more factors than just the quality of the work, including the subject area, journal priorities, other contemporaneous submissions, and page constraints.

Many submission sites include a separate question asking whether to accept, accept with major revision, or reject. If this specific format is not included, then add your recommendation in the “confidential notes to the editor.” Your recommendation should be consistent with the content of your review: don’t give a glowing review but recommend rejection or harshly criticize a manuscript but recommend publication. Last, regardless of your ultimate recommendation on disposition, it is imperative to use respectful and professional language and tone in your written review.

Limitations of Peer Review

Although peer review is often described as the “gatekeeper” of science and characterized as a quality control measure, it is not well designed to detect fundamental errors, plagiarism, or fraud. In multiple studies, peer reviewers detected only 20% to 33% of intentionally inserted errors in scientific manuscripts. 18,19 Plagiarism similarly escapes detection in peer review, largely because of the huge volume of literature available to plagiarize; most journals now use software to screen for plagiarism before a manuscript goes to peer review. Finally, outright fraud often goes undetected: reviewers start from a position of respect for the authors and trust the data they are given, barring obvious inconsistencies. Ultimately, reviewers are “gatekeepers, not detectives.” 7

Peer review is also limited by bias. Even with the best of intentions, reviewers bring biases including but not limited to prestige bias, affiliation bias, nationality bias, language bias, gender bias, content bias, confirmation bias, bias against interdisciplinary research, publication bias, conservatism, and bias of conflict of interest. 3 , 4 , 6   For example, peer reviewers score methodology higher and are more likely to recommend publication when prestigious author names or institutions are visible. 20   Although bias can be mitigated both by the reviewer and by the journal, it cannot be eliminated. Reviewers should be mindful of their own biases while performing reviews and work to actively mitigate them. For example, if English language editing is necessary, state this with specific examples rather than suggesting the authors seek editing by a “native English speaker.”

Conclusions

Peer review is an essential, though imperfect, part of the forward movement of science. It can function both as a gatekeeper to protect the published record of science and as a mechanism to improve research at the level of individual manuscripts. Here, we have described our strategy, summarized in Table 2, for performing a thorough peer review, with a focus on organization, objectivity, and constructiveness. By using a systematized strategy to evaluate manuscripts and an organized format for writing reviews, you can provide a relatively objective perspective in editorial decision-making. By providing specific and constructive feedback to authors, you contribute to the quality of the published literature.

Take-home Points

FUNDING: No external funding.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no potential conflicts of interest to disclose.

Dr Lu performed the literature review and wrote the manuscript. Dr Fischer assisted in the literature review and reviewed and edited the manuscript. Dr Plesac provided background information on the process of peer review, reviewed and edited the manuscript, and completed revisions. Dr Olson provided background information and practical advice, critically reviewed and revised the manuscript, and approved the final manuscript.


Peer Review Matters: Research Quality and the Public Trust

Michael M. Todd, M.D., served as Handling Editor for this article.

This article has a related Infographic on p. 17A.

Accepted for publication October 13, 2020.


Evan D. Kharasch , Michael J. Avram , J. David Clark , Andrew J. Davidson , Timothy T. Houle , Jerrold H. Levy , Martin J. London , Daniel I. Sessler , Laszlo Vutskits; Peer Review Matters: Research Quality and the Public Trust. Anesthesiology 2021; 134:1–6 doi: https://doi.org/10.1097/ALN.0000000000003608


In an era of evidence-based medicine, peer review is an engine and protector of that evidence. Such evidence, vetted by and surviving the peer review process, serves to inform clinical decision-making, providing practitioners with the information to make diagnostic and therapeutic decisions. Unfortunately, there is recent and growing pressure to prioritize the speed of research dissemination, often at the expense of careful peer review. It is timely to remind readers and the public of the value brought by peer review, its benefits to patients, how much the public trust in science and medicine rests upon peer review, and how these have become vulnerable.

Peer review has been the foundation of scholarly publishing and scientific communication since the 1665 publication of the Philosophical Transactions of the Royal Society. The benefits and advantages of peer review in scientific research, and particularly medical research, are manifold and manifest. 1   Journals, editors, and peer reviewers hold serious responsibility as stewards of valid information, with accountability to the scientific community and an obligation to maintain the public trust. Anesthesiology states its aspiration and its responsibility on the cover of every issue: Trusted Evidence. Quality peer review (more specifically, closed or single-blind peer review, in which the identity of reviewers is confidential) is a foundational tenet of Anesthesiology.

Peer review grounds the public trust in the scientific and medical research enterprise, as well as the substantial public investment in scientific research. Peer review affords patients some degree of comfort in placing their trust in practitioners, knowing that they should be informed by the best possible, vetted evidence.

Quality peer review enriches and safeguards the scientific content, transparency, comprehensibility, and scientific integrity of published articles. It can enhance published research importance, originality, authenticity, scientific validity, adherence to experimental rigor, and correctness of results and interpretations and can identify errors in research execution. Peer review can help authors improve reporting quality, presentation clarity, and transparency, thereby enhancing comprehension and potential use by clinicians and scientists. Careful scrutiny can identify whether research has appropriate ethical principles, regulatory approvals, compliance, and equitable inclusion of both sexes. Peer review should consider the appropriateness of authorship and can detect duplicate publication, fabrication, falsification, plagiarism, and other misconduct.

Peer review should serve as a tempering factor on overenthusiastic authors and overstated conclusions, unwarranted extrapolations, conflation of association with causality, unsupported clinical recommendations, and spin. Spin is a well-known, unfortunately common, and often insidious bias in the presentation and interpretation of results that seeks to convince readers that the beneficial effect of an experimental treatment exceeds what has actually been found, or that minimizes untoward effects. 2–4

Manuscripts often change substantially between the initial submission and the revised and improved published version. Improvement during the peer review process is not apparent to readers, who only see the final, published article, but is well known to authors, reviewers, and editors. Peer review is a defining difference in an era of proliferating predatory journals and other forms of research dissemination. Anesthesiology reviewers and editors devote considerable effort in service to helping authors improve their scientific communications, whether the work is ultimately published in this journal or elsewhere.

In the domain of clinical research, peer review does not change the scientific premise of an investigation, the hypothesis, or the study design, although it frequently improves their communication. Peer review does not change clinical research data, although it often corrects, enhances, or strengthens the statistical analysis of those data and can markedly improve their presentation and clarity. More importantly, peer review can assess, correct, and improve the interpretation, meaning, importance, and communication of research results—and importantly, confirm that conclusions emanate strictly from those results. Peer review may occasionally fundamentally revise or even reverse clinical research interpretations and recommendations. Each of these many functions enhances reader understanding and should ultimately improve patient care.

Peer review is not a guarantee of truth, and it can be imperfect. Medical history provides many examples of peer-reviewed research that was later found to be incorrect, typically through error or occasionally from misconduct. However, peer review was and remains an essential initial check and quality control that has weeded out, or corrected before publication, innumerable reports of research of insufficient quality or veracity that otherwise would have been published and thereby become publicly accessible. Additionally, science should be “self-correcting,” and peer review remains one of the most important mechanisms by which medical science achieves the self-correction that drives progress.

Quality peer review does take time. So also do the initial preparation of manuscripts and the modifications made by authors in response to peer review. Anesthesiology endeavors to provide both quality and timely peer review. Our time to first decision averages only 16 days.

The increasing emphasis on fast research dissemination, often absent quality peer review, comes mostly but not exclusively because of the immediacy of the internet and broader media and societal trends. In an era in which the companies whose major product is the immediacy of information are the economic leaders (Facebook, Twitter, Google, and Apple), it is unsurprising that the immediacy of information is challenging that of quality as the value proposition in the research marketplace. Nevertheless, fast is not synonymous with good. We believe that sacrificing quality on the altar of speed is unwise, benefits no one (except perhaps authors), and may ultimately diminish trust in medical research and possibly even worsen clinical care.

Another recent societal problem is the growing spillover of political and media communication trends into scientific communication. Almost half of Americans believe that science researchers overstate the implications of their research, and three in four think “the biggest problem with news about scientific research findings is the way news reporters cover it.” 5   Scientific conclusions may be perverted through internet-based campaigns of disinformation and misinformation and dissemination of misleading and biased information. 6   This threatens the public trust in the scientific enterprise and scientific knowledge. 7   Social media has made science and health vulnerable to strategic manipulation. 7 , 8   It is also “leaving peer-reviewed communication behind as some scientists begin to worry less about their citation index (which takes years to develop) and more about their Twitter response (measurable in hours).” 8   Peer-reviewed journals cannot reverse these trends, but they can at least ensure that scientific conclusions when presented are correct and clearly stated.

In addition to the premium on dissemination speed versus peer review quality, a new variant of rapid clinical research dissemination has emerged that abrogates peer review entirely: preprints. Preprints are research reports posted by authors in a publicly accessible online repository in place of, or before, publication in a peer-reviewed scholarly journal. The preprint concept is decades old, rooted in physics and mathematics, in which authors traditionally sent their hand- or typewritten manuscript drafts to a few colleagues for feedback before submitting to a journal for publication. With the advent of the internet, this process was replaced by preprint servers and public posting. With the creation of a preprint server for biology and the life sciences (bioRxiv.org), the posting of unreviewed manuscripts by basic biomedical scientists has exploded in popularity and practice. Next came the creation of medRxiv.org, a publicly accessible preprint server for disseminating unpublished and unreviewed clinical research results in their “preliminary form,” 9 followed by a call for research funders to require mandatory posting of their grantees’ research reports on preprint servers before peer-reviewed publication. 10 Lack of peer review is the hallmark of preprints.

The main arguments offered by proponents of preprints are the free and near-immediate access to research results, the claimed acceleration of the progress of research by immediate dissemination without peer review, and the assumption that articles will be improved by feedback from a wider group of readers alongside formal review by a few experts. Specifically claimed advantages of preprints are that they bypass the peer review process that adversely delays the dissemination of research results and “lifesaving cures” and “the months-long turnaround time of the publishing process and share findings with the community more quickly.” 11 In addition, it is claimed that preprints address “researchers recently becoming vocally frustrated about the lengthy process of distributing research through the conventional pipelines, numerous laments decrying increasingly impractical demands of journals and reviewers, complicated dynamics at play from both authors and publishers that can affect time to press” and enable “sharing papers online before (or instead of) publication in peer-reviewed journals.” 11

Preprints for clinical research have been justifiably criticized. 2 , 12–15   Most importantly, medical preprints lack safeguards afforded by peer review and increase the possibility of disseminating wrong or incorrectly interpreted results. Related concerns are that preprints are unnecessary for and potentially harmful to scientific progress and a significant threat with potential consequence to patient health and safety. Preprint server proponents “assume that most preprints would subsequently be peer reviewed,” 10   possibly before or after formal publication (if published), thus enabling correction or improvement (before or after publication). However, it is estimated that careful peer review of a manuscript takes 5 to 6 h. 1 , 16   It seems highly unlikely that busy scientists will surf the web in search of preprints on which to spend half a day providing concerted informative peer review.

Preprint enthusiasts claim that peer review after posting will provide scholarly input, facilitate preprint improvement, and enhance research quality. In fact, such peer review has been scant with biologic preprints, and it seems naïve to expect it with medical preprints. In reality, most preprints receive few comments, even fewer formal reviews, and many comments that are “counted” to support the notion that preprints do undergo peer review actually come through social media; a tweet is hardly a substantive review. The idea that comments on servers will replace quality peer review is not happening now and seems unlikely to transpire. Moreover, a survey found that the lack of peer review was an important reason why authors deliberately choose to post via preprint. 17   Additionally, postdissemination peer review takes longer than traditional prepublication peer review, and there remains concern by authors who do value peer review about the quality of the post-preprint peer review process and the quality of posted preprints. 17  

Preprint server proponents state “the work in question would be available to interested readers while these processes (peer review) take place, which is more or less what happens in physics today.” 10 The lives of patients are different than the lives of subatomic particles. Preprint posting deliberately “decouples the dissemination of manuscripts from the much slower process of evaluation and certification.” 10 However, it is exactly that coupling that validates clinical research, benefits patients, improves health, and engenders public trust.

The potential for free and unfettered distribution of raw, unvetted, and potentially incorrect information to be consumed by clinicians and patients cannot be called a medical advance. Use of such information by news outlets and online web services to promote “new” and “latest” research further misinforms the public and patients and is a disservice.

Relegating peer review to the realm of option and afterthought is not in the interest of research quality and integrity or of patients and public health. There is no apparent value in abrogating peer review of clinical research and all its many attendant benefits in ensuring the quality of clinical research available to practitioners and patients. Practitioners and patients have historically not seen the unreviewed manuscript submissions that eventually become revised peer-reviewed publications. Providing the public with unreviewed preprints now, given the sizable fraction of clinical research manuscripts that are rejected for publication and the substantial changes made in most that are published, seems to carry considerable risk.

An additional problem is that the same research report can be posted on several preprint servers or websites or multiple versions may exist on the same preprint site. Various versions may be the same or different, and the final peer-reviewed published article (if it ever exists) may bear little semblance to the various posted versions, which remain freely available. Which version is correct? Availability of various differing reports of the same research risks competing or incorrect information and can only generate confusion. Scientific publishing decades ago banned publication of the same research in multiple journals owing to concerns about data integrity and inappropriate reuse. Restarting this now, via preprints, seems unwise—especially in medicine.

The public cannot and should not be expected to differentiate between posting and peer-reviewed publication. Unfortunately, and worse, even some practitioners do not understand the difference. Posting is often referred to erroneously as publication. Indeed, even the world’s most prestigious scientific journals refer to posting as publication. 18 Such conflation blurs the validity of information. That peer-reviewed publications and preprints both receive digital object identifiers further blurs their distinction and may give the latter more apparent credibility in the eyes of the lay public. The preprint community (servers and scientists) continues to claim simultaneously that preprints are and are not publications, depending on how such claims meet their proclivities. Although the bioRxiv server carries the disclaimer “readers should be aware that articles on bioRxiv have not been finalized by authors, might contain errors, and report information that has not yet been accepted or endorsed in any way by the scientific or medical community” on a web page, 19 it is not on the preprint itself for readers to see (perhaps this disclaimer, and the one below, should appear on the cover page of every preprint and as a footnote on every page). Fortunately, the medRxiv home page (http://www.medrxiv.org) states the following disclaimer: “Preprints are preliminary reports of work that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.” Then why bother?

The popularity of preprints in the basic science world has exploded in the last 5 yr, with the number of documents posted to preprint servers increasing exponentially. 20   Preprint servers and authors give noble reasons for disseminating research by posting, but three other apparent motives are less noble. The first is competition for research funding. Major research funders ( e.g. , the National Institutes of Health) do not allow citation of unpublished manuscripts in grant applications but do allow citation of preprints. 21 , 22   The second is authors’ preoccupation with speed of availability. There is a growing (and disappointing) trend of authors perceiving a need to claim priority (“we are the first to report…”), grounded perhaps in fear of being “scooped.” The third is the pursuit of academic promotion, which is based largely on the number of peer-reviewed publications listed on a curriculum vitae . We now see faculty listing preprints in the peer-reviewed research publications section of their curricula vitae. All these drivers (priority, science advancement, reputational reward, and financial return) 7   are investigator-centric. They are neither quality-centric nor patient-centric.

Who benefits if clinical research quality is sacrificed at the altar of speed? Certainly, it is not patients, public health, or the public trust in science, medicine, and the research enterprise. Enthusiasm for preprints seems to be emanating mostly from investigators, presumably because of academic or other incentives, 23   including the desire for prominence and further funding. Is this why we do medical research? Should we be investigator- or patient-centric?

Little in the argumentation espoused by proponents of clinical preprints attends to their benefit to patients. Indeed, posted preprints without all the scrutiny and benefits of peer review may lack quality and validity and may report flawed data and conclusions, which may hurt patients. 17 , 23   As stated previously, “clinical studies of poor quality can harm patients who might start or stop therapy in response to faulty data, whereas little short-term harm would be expected from an unreviewed astronomy study.” 12  

The importance of peer review in clinical research and the downside of its absence in posted preprints is illuminated by the COVID-19 pandemic. As of this date (October 1, 2020), there are 9,222 unreviewed COVID-19 SARS–CoV-2 preprints posted: 7,257 on medRxiv and 1,965 on bioRxiv. 24   To date, 33 COVID-19 articles have been retracted (0.37%), and 5 others have been temporarily retracted or have expressions of concern. 25   Of the 33 retractions, 11 (33%) were posted on an Rxiv server. The overall retraction rate in the general peer-reviewed literature is 0.04%. 26  
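The counts and percentages quoted above reduce to simple arithmetic; a quick check using only the figures given in the text confirms they are internally consistent:

```python
# Counts quoted in the text, as of October 1, 2020.
medrxiv_preprints = 7257
biorxiv_preprints = 1965
total_preprints = medrxiv_preprints + biorxiv_preprints

covid_retractions = 33            # retracted COVID-19 articles to date
retractions_posted_on_rxiv = 11   # of those, posted on an Rxiv server

# 7,257 + 1,965 = 9,222 posted preprints, and 11 of 33 retracted
# COVID-19 articles (33%) had been posted on an Rxiv server.
print(total_preprints)                                              # 9222
print(round(100 * retractions_posted_on_rxiv / covid_retractions))  # 33
```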

Based upon one of the unreviewed COVID-19 medical preprints, 27   the Commissioner of the U.S. Food and Drug Administration (the government agency entrusted more than any other to protect public health) and the President of the United States announced that convalescent plasma from COVID-19 survivors was “safe and very effective” and had been “proven to reduce mortality by 35%.” 28   Although the Commissioner later, after scientific uproar over that misinformation, “corrected” his comment in a tweet (a back page retraction to a front page headline), 29   the preprint was used to justify a Food and Drug Administration decision to issue an emergency use authorization for convalescent plasma to treat severe COVID-19. Would these errors have been prevented by peer review? We will never know.

Even if priority in clinical (and basic) research is valued, quality is of unquestionably greater value, and preprints are of questionable necessity for establishing precedence in contemporary times. Clinical trials registration, which makes fully public the existence of all such research, establishes who is doing what, and when. Some investigators even publish their entire clinical protocol, further making known their studies, their authors, and their timing.

For hundreds of years, patent medicines (exotic concoctions of substances, often addicting and sometimes toxic) were claimed to prevent or cure a panoply of illnesses, without any evidence of effectiveness or safety or warning of potential harm. These medical elixirs, the magic potions of snake oil salesmen and charlatans, were heavily advertised and promoted to ailing, sometimes desperate, and thoroughly unsuspecting citizens—all without any oversight, regulation, quality control, or peer review. It was not until the 20th century that medical peer review and the requirement for evidence of effectiveness and safety reined in the “Wild West” and launched the modern era of medicine, yielding the scientific discovery, progress, and improvement in human health seen today. This era rests on the bedrock of peer review, the quality ideal, and the evidence that constitutes the foundation for evidence-based medicine.

Will clinical preprints become the patent medicines of the new millennium? Do they portend the unrestricted and unregulated spillage of anything claimed as research, by anyone, and absent the quality control afforded by peer review? Like the patent medicines of a bygone era, which were heavily promoted by the newly developed advertising industry, will “posted” clinical research become fodder for the medical advertising industry and media at large, pushing who knows what information and claims on practitioners and a public already deluged with endless promotions and claims that they can neither keep up with nor verify? An unsuspecting public is incapable of differentiating between the “posting” of any research observation by anyone with access to a computer and proper scholarly “publication” of peer-reviewed results and conclusions. This is particularly true of vulnerable patients with severe and/or incurable diseases, who may grasp at anything. Moreover, continuous claims of “breakthroughs” and “proven treatments” based on preprints, followed by backpedaling after challenges and outcries, further reduce public confidence in the scientific endeavor as a whole. This can create the perception that clinical science is unreliable and might be a matter of turf wars and politics instead of reliable valid evidence.

Over the past century and throughout the world, legislation has been passed and government agencies have been created to protect the public and maintain their trust in the medicines they take. Few would advocate dismantling the protections against patent medicines. Why now consider dismantling the peer review process in clinical research?

In 2019, the editors of several journals articulated a principle that they would not accept clinical research manuscripts that had previously been posted to a preprint server. 30   Their rationale was that the benefit of preprint servers in clinical research did not outweigh the potential harm to patients and scientific integrity. Major specific concerns included: “1) Preprints may be perceived by some (and used by less scrupulous investigators) as evidence even though the studies have not gone through peer review and the public may not be able to discern an unreviewed preprint from a seminal article in a leading journal; 2) It seems unlikely that the kind of prepublication dialogue that has taken place in other academic disciplines (e.g. mathematics and physics) will take place in medicine or surgery because the incentives are very different; 3) Preprints may lead to multiple competing, and perhaps even conflicting, versions of the ‘same’ content being available online at the same time, which can cause (at least) confusion and (at most) grave harm; and 4) For the vast majority of medical diagnoses, a few months of review of a study’s findings do not make a difference; the pace of discovery and dissemination generally is adequate.” These editors’ concerns and approach merit consideration if not more widespread adoption.

The potential for practitioner and public confusion regarding the difference between unregulated preprints and peer-reviewed publication is substantial. Indeed, the posting of preprints is often incorrectly termed “publication.” Peer-reviewed publications versus posted “publications” will soon become a difference without a distinction. Moreover, authors cannot have it both ways. They cannot claim a preprint as a publication for purposes of a grant (and now in some universities potentially for purposes of a degree, appointment, and/or promotion), yet claim it is not a publication for the purposes of submission to a peer-reviewed journal that does not allow prior publication. More importantly, the peer review imperative in clinical research and the role it plays in research quality, the evidence base, and patient care, constitutes an obligation to patient safety that cannot and should not be abrogated.

Peer review, clinical research quality, and the public trust in clinical research all now face an unprecedented assault. Quality peer review is a foundational tenet of Anesthesiology and underlies the Trusted Evidence we publish. Quality, timely, and unpressured peer review will continue to be a hallmark of Anesthesiology , in service to readers, patients, and the public trust.

Acknowledgments

We thank Ryan Walther, Managing Editor, and Vicki Tedeschi, Director of Digital Communications, for their valuable insights.

Competing Interests

Dr. Clark has a consulting agreement with Teikoku Pharma USA (San Jose, California). Dr. Levy reports being on Advisory and Steering Committees for Instrumentation Laboratory (Bedford, Massachusetts), Merck & Co. (Kenilworth, New Jersey), and Octapharma (Lachen, Switzerland). Dr. London reports financial relationships with Wolters Kluwer UptoDate (Philadelphia, Pennsylvania) and Springer (journal honorarium; New York, New York). The remaining authors declare no competing interests.


Figure 1. Linear regression model of mean time in days to return review and variables associated with mean time in days to return reviews, by manuscript.

Figure 2. Changes in aspects of peer reviewer behavior over time, before and during the COVID-19 pandemic, with number of global daily deaths from COVID-19 as an indicator of pandemic intensity.


Perlis RH , Kendall-Taylor J , Hart K, et al. Peer Review in a General Medical Research Journal Before and During the COVID-19 Pandemic. JAMA Netw Open. 2023;6(1):e2253296. doi:10.1001/jamanetworkopen.2022.53296


Peer Review in a General Medical Research Journal Before and During the COVID-19 Pandemic

  • 1 Massachusetts General Hospital, Harvard Medical School, Boston
  • 2 JAMA Network, Chicago, Illinois
  • 3 Harvard Medical School, Boston, Massachusetts
  • 4 Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
  • 5 Rutgers School of Public Health, Piscataway, New Jersey
  • 6 Minneapolis Heart Institute, Minneapolis Heart Institute Foundation, Minneapolis, Minnesota
  • 7 Harvard T.H. Chan School of Public Health, Boston, Massachusetts
  • 8 Hebrew SeniorLife and Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
  • 9 MaineHealth and Maine Medical Center Research Institute, Scarborough
  • 10 Stanford University School of Medicine, Stanford, California
  • 11 NYU Langone Health, New York, New York
  • 12 Carver College of Medicine, University of Iowa, Iowa City
  • 13 Abramson Cancer Center, University of Pennsylvania, Philadelphia
  • 14 Department of Emergency Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois
  • 15 JAMA Network Open , Chicago, Illinois
  • 16 University of Washington School of Medicine, Seattle

Question   How did peer review change from before to during the first year of the COVID-19 pandemic at a large open-access general medical journal?

Findings   In this cohort study of 5013 manuscripts reviewed and peer reviews invited by an open-access general medical journal, peer review turnaround time was slightly shorter and editor-reported review quality was modestly increased during the first year of the pandemic.

Meaning   This study found that peer review at a large, open-access general medical journal remained resilient during the first 15 months of the pandemic despite shifts in the dynamics of the review process.

Importance   Although peer review is an important component of publication for new research, the viability of this process has been questioned, particularly with the added stressors of the COVID-19 pandemic.

Objective   To characterize rates of peer reviewer acceptance of invitations to review manuscripts, reviewer turnaround times, and editor-assessed quality of reviews before and after the start of the COVID-19 pandemic at a large, open-access general medical journal.

Design, Setting, and Participants   This retrospective, pre-post cohort study examined all research manuscripts submitted to JAMA Network Open between January 1, 2019, and June 29, 2021, either directly or via transfer from other JAMA Network journals, for which at least 1 peer review of manuscript content was solicited. Measures were compared between the period before the World Health Organization declaration of a COVID-19 pandemic on March 11, 2020 (14.3 months), and the period during the pandemic (15.6 months) among all reviewed manuscripts and between pandemic-period manuscripts that did or did not address COVID-19.

Main Outcomes and Measures   For each reviewed manuscript, the number of invitations sent to reviewers, proportions of reviewers accepting invitations, time in days to return reviews, and editor-assessed quality ratings of reviews were determined.

Results   In total, the journal sought review for 5013 manuscripts, including 4295 Original Investigations (85.7%) and 718 Research Letters (14.3%); 1860 manuscripts were submitted during the prepandemic period and 3153 during the pandemic period. Comparing the prepandemic with the pandemic period, the mean (SD) number of reviews rated as high quality (very good or excellent) per manuscript increased slightly from 1.3 (0.7) to 1.5 (0.7) ( P  < .001), and the mean (SD) time for reviewers to return reviews was modestly shorter (from 15.8 [7.6] days to 14.4 [7.0] days; P  < .001), a difference that persisted in linear regression models accounting for manuscript type, study design, and whether the manuscript addressed COVID-19.

Conclusions and Relevance   In this cohort study, the speed and editor-reported quality of peer reviews in an open-access general medical journal improved modestly during the initial year of the pandemic. Additional study will be necessary to understand how the pandemic has affected reviewer burden and fatigue.

The effect of COVID-19 on academic publishing has been the subject of substantial discussion. In particular, the pandemic has reinvigorated conversations about the growing role and variable quality of preprints that do not undergo peer review, 1 the burden of peer review on the academic community, 2 concerns about reviewer fatigue, 3 and how best to ensure the rigor and value of peer-reviewed medical literature. 4 , 5 For medical publishing specifically, increasing volumes of manuscripts related to COVID-19 6 - 8 and expectations for rapid publication and dissemination have further stressed a system that some in medicine already believed was broken. 9

Few empirical observations about the ways in which the peer review process may have changed during the pandemic have been reported. A recent study 7 of manuscripts and reviews submitted to 2329 journals before and during the pandemic found a modest decrease in rates of reviewer acceptance of invitations to review among health and medical journals between February and May 2020, with a more pronounced decrease among potential reviewers who were women, but not men.

To understand how peer review changed with the onset of the COVID-19 pandemic, we examined peer review data from JAMA Network Open , an open-access general medical journal launched in 2018 with a 2020 impact factor of 8.5. 10 Specifically, we aimed to quantify changes in rates of peer reviewer acceptance of invitations to review manuscripts and review quality from the period before to the period during the COVID-19 pandemic.

Manuscripts submitted to JAMA Network Open first undergo technical quality assessment, then close evaluation by an editor. For manuscripts that are deemed of sufficient quality and priority to undergo peer review, editors seek review from 1 or more content reviewers and 1 statistical reviewer. We extracted data from databases used by the JAMA Network to track manuscript submissions and peer reviews. We included all manuscripts received at JAMA Network Open from January 1, 2019, through June 30, 2021, that were categorized as Original Investigations or Research Letters for which at least 1 content review was sought. These manuscripts could be submitted directly to the journal or transferred from other journals within the JAMA Network. The study used deidentified administrative data with no participant contact and followed the journal’s policy for such research, which indicates that information may be systematically collected and analyzed as part of research to improve the quality of the editorial or peer review process. This study was reviewed by the Massachusetts General–Brigham Institutional Review Board and considered to be exempt from informed consent because it uses deidentified data and posed minimal risk. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline for cohort studies. 11

For each manuscript, we determined the total number of individuals who were invited to review the manuscript and the number and proportion of reviewers accepting, declining, or failing to respond to invitations. For the accepted invitations, we determined the mean time (across reviewers) to return the review and the number and proportion of reviews rated by editors as very good or excellent on a 5-point anchored scale: poor, fair, good, very good, or excellent. This process was completed for all manuscripts as part of routine editorial practice. For purposes of analysis, the last 2 categories (very good and excellent) were aggregated to reflect high-quality reviews. For descriptive purposes and because some individuals served as peer reviewers for multiple manuscripts, we also report acceptance rate for reviewer invitations, time to return review, and quality of review at the level of individual reviews.
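The per-manuscript measures described above can be sketched as a simple aggregation over review records. The field names below are hypothetical (the journal's actual database schema is not described); this is only an illustration of the computation, in Python rather than the authors' R:

```python
from statistics import mean

# Hypothetical records for the reviewers invited for one manuscript:
# whether each accepted, days to return the review (if returned), and
# the editor's rating on the journal's 5-point anchored scale.
invitations = [
    {"accepted": True,  "days": 12, "rating": "excellent"},
    {"accepted": True,  "days": 18, "rating": "good"},
    {"accepted": False, "days": None, "rating": None},
]

# The top two rating categories are aggregated as "high quality".
HIGH_QUALITY = {"very good", "excellent"}

acceptance_rate = mean(r["accepted"] for r in invitations)           # 2 of 3
returned = [r for r in invitations if r["accepted"]]
mean_days_to_return = mean(r["days"] for r in returned)              # 15
n_high_quality = sum(r["rating"] in HIGH_QUALITY for r in returned)  # 1
```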

Additional manuscript features collected at submission included study design 12 (eg, randomized trial, cohort study, or economic evaluation), whether a study was submitted directly or transferred from another JAMA Network journal, and whether a study referenced or addressed COVID-19. The last of these is determined via an automated process implemented to identify any manuscript with any of the following terms in the manuscript text: acute respiratory virus, personal protective equipment, N95, COVID, COVID-19, coronavirus, SARS, or novel virus.
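That automated keyword screen can be sketched as a simple text search. The exact matching rules the journal used (case handling, word boundaries) are not described, so case-insensitive substring matching here is an assumption:

```python
import re

# Keyword list given in the text; case-insensitive substring matching
# is an assumption, not the journal's documented rule.
COVID_TERMS = [
    "acute respiratory virus", "personal protective equipment",
    "N95", "COVID", "COVID-19", "coronavirus", "SARS", "novel virus",
]
_pattern = re.compile("|".join(re.escape(t) for t in COVID_TERMS), re.IGNORECASE)

def addresses_covid(manuscript_text: str) -> bool:
    """Flag a manuscript if any listed term appears anywhere in its text."""
    return _pattern.search(manuscript_text) is not None

print(addresses_covid("We assessed SARS-CoV-2 seroprevalence."))  # True
print(addresses_covid("A trial of statins in heart failure."))    # False
```

Note that, as the authors acknowledge in their limitations, such a screen can both miss relevant manuscripts (no keyword present) and over-flag manuscripts that merely mention the pandemic in passing.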

We compared peer review characteristics for submitted manuscripts with an index date before or after March 11, 2020, the date that the World Health Organization declared a COVID-19 pandemic. 13 Thus, data were divided into 2 groups: the prepandemic period from January 1, 2019, to March 10, 2020 (14.3 months), and the pandemic period from March 11, 2020, to June 29, 2021 (15.6 months). We similarly compared characteristics of reviews provided for manuscripts submitted after March 11, 2020, that did or did not address COVID-19. We used multivariable linear regression to calculate effect sizes and 95% CIs for the association between prepandemic and pandemic status for reviewer turnaround time, adjusted for other manuscript characteristics (study design, direct submission vs transfer from other JAMA Network journals, article type, and whether the manuscript addressed COVID-19). (Incorporating clustering by subject area, as a proxy for a handling editor, did not meaningfully change results.)
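The adjusted mean difference reported by such a model is the coefficient on the period indicator in an ordinary least-squares fit with covariates. A minimal sketch on simulated data (the authors fit their models in R on the actual manuscript data; the numbers below are invented for illustration):

```python
import numpy as np

# Simulated (not the study's) data: review turnaround in days, a
# pandemic-period indicator, and one binary covariate standing in for
# the manuscript features the authors adjusted for.
rng = np.random.default_rng(0)
n = 200
pandemic = rng.integers(0, 2, n)       # 0 = prepandemic, 1 = pandemic
covid_related = rng.integers(0, 2, n)  # stand-in covariate
days = 16 - 1.2 * pandemic - 0.9 * covid_related + rng.normal(0, 3, n)

# Design matrix: intercept, period indicator, covariate.
X = np.column_stack([np.ones(n), pandemic, covid_related])
beta, *_ = np.linalg.lstsq(X, days, rcond=None)

# beta[1] is the adjusted mean difference in turnaround (pandemic minus
# prepandemic), analogous to the -1.2 days reported in the article.
print(round(beta[1], 2))
```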

To visualize changes in peer review characteristics over time—and, in particular, whether secular trends in these characteristics might have preceded the pandemic—we also described manuscript and review features on a weekly basis, with each manuscript assigned to the date of first reviewer invitation. For graphic presentation, we applied a 3-week rolling mean using the rollmean function in R’s zoo library, version 1.8-9.
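The 3-week rolling mean computed with R's `rollmean` (which by default is centered and drops the partial windows at each end) can be reproduced directly; a small sketch with invented weekly counts:

```python
def rolling_mean(values, k=3):
    """Rolling k-point mean over full windows only, as in R's
    zoo::rollmean; returns len(values) - k + 1 smoothed points."""
    return [sum(values[i:i + k]) / k for i in range(len(values) - k + 1)]

weekly_counts = [30, 28, 35, 40, 46, 44]  # hypothetical manuscripts per week
print([round(x, 2) for x in rolling_mean(weekly_counts)])
# [31.0, 34.33, 40.33, 43.33]
```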

All analyses used R software, version 4.1.2 14 ; the threshold for statistical significance was considered to be a 2-tailed P  < .05, without adjustment for multiple comparisons.

Between January 1, 2019, and June 30, 2021, the journal sought reviews for 5013 manuscripts (mean [SD], 38.3 [13.3] per week), including 4295 Original Investigations (85.7%) and 718 Research Letters (14.3%). Of these manuscripts, 1860 and 3153 manuscripts were submitted during the prepandemic and pandemic periods, respectively.

Characteristics of manuscripts received before March 11, 2020, or on or after that date are summarized in Table 1 . These manuscripts included 376 clinical trials (7.5%), 1939 cohort studies (38.7%), and 1148 cross-sectional studies (22.9%); among the 5013 manuscripts reviewed, 932 (18.6%) addressed COVID-19. Table 1 also includes univariate comparisons of these and other characteristics between the prepandemic and pandemic periods. The overall mean (SD) volume of manuscripts reviewed per week increased from 30.3 (8.6) to 46.4 (12.2) ( P  < .001), and the mean (SD) number of reviewers invited per manuscript to achieve the minimum number of required reviews increased from 6.0 (3.6) to 7.0 (4.5) ( P  < .001). The mean (SD) proportion of reviewers per manuscript who accepted invitations did not change significantly (39.5% [28.6%] vs 38.4% [28.3%]; P  = .21). However, the mean (SD) number of reviews returned per manuscript also increased from 1.6 (0.6) to 1.7 (0.5) ( P  < .001), as did the mean (SD) number of reviews rated as high quality (ie, very good or excellent) per manuscript (from 1.3 [0.7] to 1.5 [0.7]; P  < .001). Mean (SD) time to return reviews also decreased from 15.8 (7.6) to 14.4 (7.0) days ( P  < .001).

In multivariable linear regression adjusting for baseline manuscript features (article type, study design, and direct submission vs transfer), differences in mean time to return review persisted. Time to return of review was decreased in the pandemic period compared with the prepandemic period (adjusted mean difference of −1.2 days; 95% CI, −0.7 to −1.6) ( Figure 1 ).

In a complementary analysis of 33 615 reviewers invited before (n = 13 208 [39.3%]) or after (n = 20 407 [60.7%]) pandemic onset who returned reviews, the proportion of reviewers accepting invitations was similar (3337 [25.3%] vs 5229 [25.6%]; P  = .46), whereas the mean (SD) time to return reviews decreased from 15.2 (9.2) in the prepandemic period to 14.8 (8.6) during the pandemic ( P  = .02).

A similar pattern of differences to those observed comparing the prepandemic vs pandemic periods was identified in comparisons of manuscripts that did not address COVID-19 (n = 2238) and those that did address COVID-19 (n = 915) ( Table 2 ). COVID-19–related manuscripts required fewer reviewer invitations (mean [SD], 6.4 [4.4] vs 7.2 [4.6]; P  < .001), and the proportion of reviewers who declined invitations to review was lower (mean [SD], 32.8% [22.6%] vs 35.2% [22.3%]; P  = .006). The mean (SD) number of very good or excellent reviews was greater for COVID-19–related manuscripts (1.5 [0.7]) than for those not related to COVID-19 (1.4 [0.7]; P  < .001). Mean (SD) time to return reviews was also lower: 14.6 (7.0) days for those not COVID related vs 13.7 (6.8) days for those that were COVID related ( P  = .002).

Figure 2 illustrates changes in peer reviews over time, with the global number of COVID-19 deaths at the bottom for reference. 15 In general, the volume of manuscript submissions increased over time, as did the number of reviewers invited, both continuing qualitative trends beginning before the pandemic. Conversely, a decrease in time to return reviews appears to have followed pandemic onset.

In this cohort study of 5013 manuscripts with reviews solicited by an open-access general medical journal before and during the COVID-19 pandemic, we did not identify evidence of deterioration in the peer review process. The overall rate of reviewer acceptance of review requests remained stable after pandemic onset. However, the time required for reviewers to complete reviews was modestly shorter during the pandemic, and the mean number of high-quality reviews per manuscript was greater compared with the period before the pandemic. Although the association with time to return reviews persisted after adjustment for potential confounding features, such as study design and manuscript type, we cannot exclude unobserved changes in how the editors invited reviewers—for example, making a greater effort to identify interested reviewers or sending personal notes—that may have coincided with the pandemic.

Similar patterns were observed for COVID-19–focused manuscripts. In this case, rates of reviewer acceptance were significantly greater for such manuscripts. Time to return reviews was slightly but statistically significantly shorter, whereas the number of high-quality reviews received per manuscript was slightly greater.

Our results complement those of a recent investigation 7 of peer review during the pandemic. That study found lower rates of review invitation acceptance from health and medical journals among women but not men. 7

This study has multiple limitations. First, we did not have access to reviewer-level characteristics, such as gender, age, race and ethnicity, or academic seniority, that would allow us to quantify the differential effect of the pandemic on some reviewers or compare across demographic characteristics, because these measures are not collected by the journal. Second, we cannot necessarily attribute the changes we observed to the pandemic itself. While JAMA Network Open was publishing for 2 years before the pandemic, submissions increased rapidly, particularly in the first year, as journal recognition and reputation increased. Likewise, reviewers’ willingness to participate in peer review may have changed over time as they became more aware of the journal. As such, there are likely secular trends that may explain some of the changes we observed; the generalizability of our results to other journals will require further study, and we hope this work will encourage such efforts. Third, editors’ estimates of review quality are entirely subjective, so it is possible that editors took extenuating circumstances into account; for example, editors may have “graded on a curve” during the pandemic, and in reality quality could have remained unchanged or diminished. Finally, the automated flagging of submissions as COVID-19 related may have missed relevant submissions that did not include specific keywords and, likewise, may have flagged submissions that included pandemic-related keywords in the introduction or discussion but were not truly related to COVID-19.

In contextualizing these findings, it is also important to consider other changes in the peer review process occurring during the pandemic. For example, reviewers may have been more or less likely to accept invitations to review manuscripts based on eagerness to review new science about COVID-19, availability due to reduced clinical commitments, unavailability due to intensified clinical commitments, concerns about COVID-related misinformation, or reviewer fatigue associated with increase in volume of manuscripts submitted during the pandemic. Moreover, the editors’ behavior may have changed with the increase in number of submissions, such as making more personal requests or opting to proceed with a decision with fewer reviewers for a given manuscript.

Despite these limitations, our results may help inform ongoing conversations about quality and burden of peer review during the COVID-19 pandemic. The findings suggest that the pandemic modestly affected the review process in terms of turnaround time and review quality. However, this apparent stability does not address the extent to which reviewer sentiment toward the peer review process may have shifted or the pandemic’s effect on the ability of invited reviewers to complete other tasks. Other lines of investigation, including surveys, suggest that the pandemic has negatively affected researchers’ quality of life, more so for women than men. 16 , 17

The findings of this pre-post cohort study suggest that the peer review process at a large, open-access journal has continued to function during the COVID-19 pandemic despite changes in both the volume of submissions and the work and home environments of many peer reviewers. Most encouragingly, during the pandemic, review quality did not appear to have diminished. Still, in light of abundant evidence that COVID-19 has negatively impacted researchers, 16 , 17 continued efforts to study and improve the peer review process are needed.

Accepted for Publication: December 9, 2022.

Published: January 27, 2023. doi:10.1001/jamanetworkopen.2022.53296

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2023 Perlis RH et al. JAMA Network Open.

Corresponding Author: Roy H. Perlis, MD, MSc, Massachusetts General Hospital, Harvard Medical School, 185 Cambridge St, Simches Research Bldg, Sixth Floor, Boston, MA 02114 ([email protected]).

Author Contributions: Dr Perlis had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Perlis, Berlin, Inouye, Jacobs, Morris, Ogedegbe, Perencevich, Shulman, Fihn, Rivara, Flanagin.

Acquisition, analysis, or interpretation of data: Perlis, Kendall-Taylor, Ganguli, Hart, Berlin, Bradley, Haneuse, Inouye, Trueger, Flanagin.

Drafting of the manuscript: Perlis, Hart, Berlin, Haneuse, Shulman.

Critical revision of the manuscript for important intellectual content: Perlis, Kendall-Taylor, Ganguli, Berlin, Bradley, Haneuse, Inouye, Jacobs, Morris, Ogedegbe, Perencevich, Shulman, Trueger, Fihn, Rivara, Flanagin.

Statistical analysis: Perlis, Hart, Haneuse.

Administrative, technical, or material support: Perlis, Kendall-Taylor, Hart.

Supervision: Perlis.

Conflict of Interest Disclosures: Dr Perlis reported receiving personal fees for scientific advisory board service from Belle Artificial Intelligence, Burrage Capital, Psy Therapeutics, Genomind, Circular Genomics, and RID Ventures outside the submitted work. Ms Flanagin reported being affiliated with the JAMA Network and employed by the American Medical Association, which publishes JAMA Network Open , during the conduct of the study. No other disclosures were reported.

Disclaimer: Dr Rivara is editor in chief; Dr Fihn is deputy editor; Drs Perlis, Ganguli, Bradley, Inouye, Jacobs, Morris, Ogedegbe, Perencevich, and Shulman are associate editors; Drs Berlin and Haneuse are statistical editors; and Dr Trueger is digital media editor of JAMA Network Open , but they were not involved in any of the decisions regarding review of the manuscript or its acceptance.

Data Sharing Statement: See the Supplement.

What to know about peer review

Peer review is a quality control measure for medical research. It is a process in which professionals review each other’s work to make sure that it is accurate, relevant, and significant.

Scientific researchers aim to improve medical knowledge and find better ways to treat disease. By publishing their study findings in medical journals, they enable other scientists to share their developments, test the results, and take the investigation further.

Peer review is a central part of the publication process for medical journals. The medical community considers it to be the best way of ensuring that published research is trustworthy and that any medical treatments that it advocates are safe and effective for people.

In this article, we look at the reasons for peer review and how scientists carry it out, as well as the flaws of the process.

Reasons for peer review

Peer review helps prevent the publication of flawed medical research papers.

Flawed research includes:

  • made-up findings and hoax results that do not have a proven scientific basis.
  • dangerous conclusions, recommendations, and findings that could harm people.
  • plagiarized work, meaning that an author has taken ideas or results from other researchers.

Peer review also has other functions. For example, it can guide decisions about grants for medical research funding.

For medical journals, peer review means asking experts from the same field as the authors to help editors decide whether to publish or reject a manuscript by providing a critique of the work.

There is no industry standard to dictate the details of a peer review process, but most major medical journals follow guidance from the International Committee of Medical Journal Editors.

The ICMJE guidance offers basic rules, such as: “Reviewers’ comments should be constructive, honest, and polite.”

The Committee on Publication Ethics (COPE) are another association that offer ethical guidelines for medical peer reviewers. Many journals are members of COPE.

These associations do not set out rules for individual journals to follow, and they regularly remind reviewers to consult journal editors.

These guidelines summarize the role of a peer reviewer as follows:

“The editor is looking to them for subject knowledge, good judgment, and an honest and fair assessment of the strengths and weaknesses of the work and the manuscript.”

The peer review process is usually “blind,” which means that the reviewers do not receive any information about the identity of the authors. In most cases, the authors also do not know who carries out the peer review.

Making the review anonymous can help reduce bias. The reviewer will evaluate the paper, not the author.

For the sake of transparency, some journals, including the BMJ, have an open system, but they discourage direct contact between reviewers and authors.

Peer review helps editors decide whether to reject a paper outright or to ask for various levels of revision before publication. Most medical journals ask authors for at least minor changes.

Quality, relevance, and importance

The exact tasks of a peer reviewer vary widely, depending on the journal in question.

All peer reviewers help editors decide whether or not to publish a paper, but each journal may have different criteria.

A peer review generally addresses three common areas:

  • Quality: How well did the researchers conduct their study, and how reliable are its conclusions? These points test the credibility and accuracy of the science under evaluation.
  • Relevance: Is the paper of interest to readers of this journal and appropriate to this field of work?
  • Importance: What clinical impact could the research have? Do the findings add a new element to existing knowledge or practice?

The editor will need to decide whether a paper is relevant, whether they have space for it, and if it might be more suitable for a different journal.

If the editor decides that it is relevant, they may seek peer reviewers’ opinions on the finer points of scientific interest.

The journal editors make the final decision when it comes to publishing a study. Peer-review processes exist to inform the editor’s decision, but the editor is not under any obligation to accept the recommendations of peer reviewers.

Different methods

Different journals have different aims, and it is possible to see individual titles as “brands.”

The editorial position and best practices of the journal influence its criteria for publishing a paper.

The BMJ, for example, focus on relevant findings that are important to current disease management. They say, “nearly all of the issues we research have relevance for journal editors, authors, peer reviewers and publishers working across biomedical science.”

The Lancet state that they prioritize “reports of original research that are likely to change clinical practice or thinking about a disease.” However, they also place some emphasis on papers that are easy to understand for the “general reader” outside the medical specialty of the author.

The editors of medical journals may publish detailed information about the particular form of review that they use. This information usually appears in guidelines for authors. These policies are another way of setting standards for research quality.

What do reviewers look for?

JAMA, for example, outline the qualities that their medical editors evaluate before sending papers to peer reviewers.

This “initial pass” checks for the following points:

  • timely and original material
  • clear writing
  • appropriate study methods
  • reasonable conclusions that the data support

The information must be important, and the topic needs to be of general medical interest.

How do journals respond?

Journals can respond to submissions in a few different ways.

The editors at the New England Journal of Medicine, for instance, either reject the paper outright or use one of three responses after using the peer review process to guide their decision.

These responses are:

  • Major revision: The editor expresses interest in the manuscript, but the authors need to make a revision because the report is “not acceptable” for publication in its current form.
  • Minor revision: “Some revisions” are necessary before the editor can accept the submission for publication.
  • Willing rejection: The authors need to “conduct further research or collect additional data” to make the manuscript suitable for publication.

Other publications might take different actions after completing a peer review.

Flaws of the peer review process

Although peer review can help a publication retain integrity and publish content that advances the field of science, it is by no means a perfect system.

The number of journals worldwide is increasing, which makes it difficult to find enough experienced reviewers. Peer reviewers also rarely receive financial compensation, even though the process can be time-consuming and stressful, which may discourage thorough, impartial reviews.

Personal bias may also filter into the process, reducing its accuracy. For example, some conservative doctors, who prefer traditional methods, might reject a more innovative report, even if it is scientifically sound.

Reviewers might also form negative or positive preconceptions depending on their age, gender, nationality, and prestige.

Despite these flaws, journals use peer review to make sure that material is accurate. The editor can always reject reviews that they feel show a form of bias.

Is peer review the most reliable method of checking a research report?

Peer review is not perfect, but it does provide the editor with the opinion of multiple experts in the field or area of focus of the review. As a result, it helps ensure that the topic of study is relevant, current, and useful to the reader.

Generally, reviewers are researchers or experts in their field, and they are able to gauge the accuracy and any potential bias of a research study.

Last medically reviewed on March 29, 2019

  • Medical Innovation
  • Clinical Trials / Drug Trials

How we reviewed this article:

  • Bohannon, J. (2013). Who’s afraid of peer review? http://www.sciencemag.org/content/342/6154/60.full
  • Carter, B. (2017). Peer review: A good but flawed system? https://journals.sagepub.com/doi/pdf/10.1177/1367493517727320
  • Hames, I. (2013). COPE ethical guidelines for peer reviewers. https://publicationethics.org/files/u7140/Peer%20review%20guidelines.pdf
  • Instructions for authors. (2019). http://jama.jamanetwork.com/public/instructionsForAuthors.aspx#EditorialReviewandPublication
  • Kumar, R. (2013). The Science hoax: Poor journalology reflects poor training in peer review. http://www.bmj.com/content/347/bmj.f7465.full
  • Marincola, E. (2013). Science communication: Power of community. http://www.sciencemag.org/content/342/6163/1168.2.full
  • Publishing model. (n.d.). https://www.bmj.com/about-bmj/publishing-model
  • Publication process. (n.d.). http://www.nejm.org/page/media-center/publication-process
  • Responsibilities in the submission and peer-review process. (n.d.). http://www.icmje.org/recommendations/browse/roles-and-responsibilities/responsibilities-in-the-submission-and-peer-peview-process.html
  • The Lancet: Information for authors. (n.d.). http://www.thelancet.com/lancet-information-for-authors/article-types-manuscript-requirements


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on the Health of Select Populations; Committee on the Evaluation of Research Management by DoD Congressionally Directed Medical Research Programs (CDMRP). Evaluation of the Congressionally Directed Medical Research Programs Review Process. Washington (DC): National Academies Press (US); 2016 Dec 19.


5 Peer Review

A robust peer review process is critical to the Congressionally Directed Medical Research Programs' (CDMRP's) mission of funding innovative health-related research. Peer review is used throughout the scientific research community, both for determining funding decisions for research applications and for publications on research outcomes. The purpose of peer review is for the proposed research to be assessed for scientific and technical merit by experts in the same field who are free from conflicts of interest (COIs) or bias and, in some instances, by lay people who might be affected by the results of the proposed study. The process should be transparent to the applicant as well as to the research and stakeholder communities in order to ensure that each application receives a competent, thorough, timely, and fair review. The peer review process is the standard by which the most meritorious science is developed, assessed, and distributed.

This chapter describes the process of assigning peer reviewers to panels and applications; the criteria and scoring used by peer reviewers; activities that occur prior to, during, and after the peer review meeting; and the quality assurance procedures used by CDMRP to ensure that the applications are scored and critiqued correctly. Peer review occurs after CDMRP has received full applications in response to the programmatic panel activities described in Chapter 4 (see Figure 5-1 ). Peer review panels are responsible for conducting both the scientific and technical merit review and an impact assessment of all of the full applications submitted to each CDMRP research program.

Figure 5-1. The CDMRP review cycle. The yellow circle and box show the steps in the cycle that are discussed in this chapter. (*As needed.)

  • PRE-MEETING ACTIVITIES

As described in Chapter 3 , the selection of peer reviewers begins shortly after the award mechanisms are chosen by the programmatic panel during the vision setting meeting. The peer review support contractor works with the CDMRP program manager to determine the number of peer review panels that will be required and the types of expertise that will be necessary to review the anticipated applications.

Panel and Application Assignments

Depending on the program and the number of award mechanisms and anticipated applications, multiple peer review panels—tailored to the specific expertise required by the research program and award mechanism—may be needed for a single funding opportunity or topic area (for example, if several applications propose to investigate a specific protein group or metabolic pathway). In general, a peer review panel reviews applications for a single award mechanism (or a group of similar types of mechanisms). Because the review criteria differ to some extent by mechanism, having multiple panels allows peer reviewers to focus on the specific review criteria and programmatic intent of an award mechanism ( Kaime et al., 2010 ). For example, in 2014 there were 249 separate review panels (and 3,195 peer reviewers) across 23 programs. Large programs, such as the Peer Reviewed Medical Research Program and the Breast Cancer Research Program, had the most peer review panels (85 and 38, respectively), and small programs, such as the Duchenne Muscular Dystrophy Research Program and the Multiple Sclerosis Research Program, had only one panel each ( Salzer, 2016c ).

The number of peer review panels and the mechanism(s) or topic(s) they cover are discussed by the integrated program team and inform the CDMRP program manager's task order assumptions regarding the types and number of reviewers to be recruited ( Salzer, 2016a ). As mentioned in Chapter 3 , the peer review contractor begins recruiting potential reviewers when the program announcement is publicly released. The committee notes that in its scanning of program announcements for 2016, there appear to have been about 5–6 months between the release of the program announcement and the peer review meeting.

In general, a single panel reviews a maximum of 60 applications; there is no minimum number of applications a panel may review ( CDMRP, 2011 ). Scientific review officers (SROs) and consumer review administrators assign, respectively, scientist and consumer reviewers to specific panels ( Kaime et al., 2010 ).

Each application is assigned to at least two scientist reviewers (a primary reviewer and a secondary reviewer) and one consumer reviewer, but reviewers are responsible for being familiar with all applications to be discussed and reviewed by their panel. Each scientist reviewer is generally assigned six to eight applications, but this may vary depending on the program and the guidance provided by the program manager.

One to four consumer reviewers serve on each panel, depending on the number of applications and the number of panels for the program. Each consumer reviewer is assigned at most 20 applications as these reviewers are required to review only selected sections of an application, such as the impact statement and lay abstract. Any consumer who has a scientific background is assigned to a panel that is reviewing applications outside his or her area of expertise to avoid confusion between the “science” and “advocacy” roles of the panel member ( CDMRP, 2016f ). A specialty reviewer may also be assigned to review and assess specific sections of applications, in addition to the two scientist reviewers ( CDMRP, 2011 ). For example, for a clinical trial application, a biostatistician specialty reviewer would review the statistical analysis plan.
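
The workload rules described above (a 60-application cap per panel, at least a primary and a secondary scientist reviewer plus one consumer reviewer per application, roughly six to eight applications per scientist reviewer, and at most 20 per consumer reviewer) can be expressed as a simple validation pass. This is only an illustrative sketch under those stated limits; the function name and data shapes are assumptions, not CDMRP's actual assignment software.

```python
from collections import Counter

# Limits taken from the text above; the code structure itself is hypothetical.
MAX_APPS_PER_PANEL = 60
MAX_APPS_PER_CONSUMER = 20
TYPICAL_SCIENTIST_LOAD = range(6, 9)  # six to eight applications

def validate_panel(app_to_scientists, app_to_consumer):
    """Return a list of human-readable violations for one peer review panel.

    app_to_scientists: {app_id: [scientist reviewer ids]}, primary listed first
    app_to_consumer:   {app_id: consumer reviewer id}
    """
    problems = []
    if len(app_to_scientists) > MAX_APPS_PER_PANEL:
        problems.append(f"panel has {len(app_to_scientists)} applications "
                        f"(max {MAX_APPS_PER_PANEL})")
    for app, scientists in app_to_scientists.items():
        if len(scientists) < 2:
            problems.append(f"{app}: needs at least a primary and a secondary "
                            "scientist reviewer")
    for consumer, load in Counter(app_to_consumer.values()).items():
        if load > MAX_APPS_PER_CONSUMER:
            problems.append(f"{consumer}: {load} applications "
                            f"(max {MAX_APPS_PER_CONSUMER})")
    scientist_load = Counter(s for members in app_to_scientists.values()
                             for s in members)
    for scientist, load in scientist_load.items():
        if load not in TYPICAL_SCIENTIST_LOAD:
            problems.append(f"{scientist}: load of {load} is outside the "
                            "typical 6-8 range")
    return problems
```

A panel where, say, one application lists only a single scientist reviewer would come back with that application flagged.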

The criteria used to assign specific reviewers to an application include COIs (discussed in Chapter 3 ) and expertise. Applicants also assign primary and secondary research classification codes to their applications; the codes can be used to assign applications to peer review panels, inform recruitment of peer reviewers, and balance panel workloads. Panel membership is confidential until the end of the review process for that funding year, when the names and affiliations of all peer reviewers for each research program are posted to the CDMRP website ( CDMRP, 2016g ).

Panel Expertise

Primary peer reviewers are assigned to applications on the basis of their expertise ( CDMRP, 2011 ). Once applications are assigned to a panel, all scientist reviewers indicate their level of expertise for reviewing individual applications using an adjectival rating scale of high, medium, low, or none, based on review of the title and abstract. The scale and description are as follows ( Salzer, 2016c ):

  • High: You are able to review the application with little or no need to make use of background material or the relevant literature. You have likely published in areas closely related to the science/topics presented in the application.
  • Medium: You have most of the knowledge to review the application although it would require some review of relevant literature to fill in details or increase familiarity with the system employed. You may employ similar methodologies in your own work, or study similar molecules, processes, and/or topics, but you may need to review the literature for recent advances pertinent to the application.
  • Low: You understand the broad concepts but are unfamiliar with the specific system or other details, and reviewing the application would require considerable preparation.
  • None: You have only superficial or no familiarity with the concepts and systems described in the application.

The committee finds these definitions of expertise and experience to be very helpful and likely to be quite informative in assigning applications for review. There does not appear to be a similar process at the National Institutes of Health (NIH).

The committee assumes that the SRO uses the reviewers' responses to confirm their expertise and to assign primary, secondary, and specialty reviewers (if applicable). In general, only reviewers who have indicated high or medium expertise for an application are assigned to be primary or secondary reviewers for that application; the primary reviewer for any application must have high expertise ( Salzer, 2016d ). The peer review contractor also ensures that all reviewers on a panel have a minimum level of expertise.
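
The eligibility rule just described (primary reviewers must self-rate their expertise as high; assigned reviewers must rate at least medium) amounts to a small mapping from the adjectival scale to assignable roles. The sketch below is an illustration of that stated rule only; the function and names are hypothetical.

```python
# Adjectival self-ratings from the text, ranked; "primary" requires "high",
# "secondary" requires "medium" or better. Illustrative, not CDMRP software.
EXPERTISE_RANK = {"none": 0, "low": 1, "medium": 2, "high": 3}

def eligible_roles(self_rating: str) -> set:
    """Roles a reviewer may take on an application, given their self-rating."""
    rank = EXPERTISE_RANK[self_rating.lower()]
    roles = set()
    if rank >= EXPERTISE_RANK["medium"]:
        roles.add("secondary")  # high or medium expertise required
    if rank >= EXPERTISE_RANK["high"]:
        roles.add("primary")    # primary reviewers must rate themselves high
    return roles
```

Under this rule, a reviewer reporting "low" or "none" would not be assigned to the application at all, matching the text above.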

Preliminary Critiques and Initial Scores

Timing of the preliminary reviews.

In general, it appears that the time between the deadline for the submission of a full application and the peer review meeting is about 2 months, although for some programs or awards this may extend to about 3 months. CDMRP states that approximately 40 hours of pre-meeting preparation over 4–6 weeks is required for peer review. This time includes registration, training, reviewing assigned applications, and writing critiques and comments for assigned applications. CDMRP states that peer reviewers typically have 3–4 weeks to review their assigned applications, although review time may vary by program or when additional reviewers are added to the panel at a later time. In response to the committee's solicitation of input, several peer reviewers stated that they would like more time to review and critique their assigned applications.

Preliminary critiques and scores are submitted to the electronic biomedical research application portal (eBRAP), after which the contractor SRO and the CDMRP program manager review them. Applications, preliminary critiques, and preliminary scores are also available electronically to all other panel members (not just assigned reviewers) except in the case of those who have COI with a particular application.

CDMRP reports that peer reviewers then have 4–5 weeks to review all preliminary critiques and scores assigned to their panel before the panel meeting ( Salzer, 2016c ). This length of time is intended to allow the reviewers to become familiar with all the critiques before the meeting, so that the discussion at the meeting can be focused and the reviewers who were not assigned a specific application will have enough time to be informed about it and contribute to the discussion.

Peer Review Criteria

Each application is evaluated according to the peer review criteria published in the program announcement. Usually, two sets of scores are given during peer review. The first set consists of scores on evaluation criteria such as impact, research strategy and feasibility, and the transition plan; each of these criteria receives numeric scores from the primary and secondary reviewers. Other criteria, such as environment, budget, and application presentation, are also evaluated but do not receive numeric scores. The individual criteria are not given different weights, but they are generally presented in order of decreasing importance ( Kaime et al., 2010 ). The scale uses whole numbers from 1 (deficient) to 10 (outstanding) (see Table 5-1 ).

TABLE 5-1. CDMRP Evaluation Criteria Scoring Scale.

CDMRP Evaluation Criteria Scoring Scale.

The second score given is the overall score, which represents an overall assessment of the application's merit. Overall scores range from 1.0 (outstanding) to 5.0 (deficient) (see Table 5-2 ). Some award mechanisms use only an overall score, whereas others only use adjectival scores instead of numeric ones ( Salzer, 2016b ). Reviewers are instructed to base the overall score on the evaluation criterion scores, but they may also consider additional criteria that are not individually scored, such as budget and application presentation ( Kaime et al., 2010 ).

TABLE 5-2. CDMRP Overall Scoring Scale.

CDMRP Overall Scoring Scale.

As can be seen from Tables 5-1 and 5-2 , the scale used to score evaluation criteria is different from the scale for the overall score. The overall and criterion scores are not combined or otherwise mathematically manipulated to connect them, but the overall score is expected to correspond to the individual criterion scores (for example, if the individual criteria receive scores in the excellent range, the overall score should also be in the excellent range) ( MOMRP, 2014 ). The use of two different scales to score the application is deliberate and intended to discourage averaging evaluation criterion scores into an overall score ( Kaime et al., 2010 ). However, several peer and programmatic reviewers who responded to the committee's solicitation of input noted that the use of the two scales was confusing. The committee finds that because CDMRP funding announcements dictate the importance of each criterion for the overall score, it would be easier for reviewers to appropriately consider each criterion if the same scale were used for both overall and criterion scores.
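
The committee's point can be made concrete: the two scales run in opposite directions (criterion 10 = outstanding, overall 1.0 = outstanding), so even comparing them mechanically requires an inversion. The linear rescaling below exists only to illustrate that mismatch; as the text notes, CDMRP deliberately instructs reviewers not to compute the overall score by averaging criteria.

```python
def criterion_mean_on_overall_scale(criterion_scores):
    """Map the mean of 1-10 criterion scores (10 = outstanding) onto the
    1.0-5.0 overall scale (1.0 = outstanding), purely for comparison.

    This is NOT how CDMRP derives overall scores; it is a hypothetical
    inversion showing the scales' opposite orientations:
    criterion 10 -> overall 1.0, criterion 1 -> overall 5.0.
    """
    mean = sum(criterion_scores) / len(criterion_scores)
    return 5.0 - (mean - 1.0) * (4.0 / 9.0)
```

With a single shared scale, as the committee suggests, this conversion step (and the confusion it reflects) would disappear.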

The committee notes that in 2009 the NIH, which reviews tens of thousands of applications per year, moved to a 9-point, whole-number scale (see Table 5-3 ), which may be used for both criterion scoring and the overall score ( NIH, 2015b ). Several other agencies that conduct peer review have adopted NIH's revised scoring system. The new scoring includes additional guidance for the category descriptors. For example, an application that is given a score of 1 has a descriptor of “exceptional,” which corresponds to “exceptionally strong with essentially no weaknesses.” Major, moderate, and minor weaknesses are further defined ( NIH, 2015b ). The NIH 9-point scoring system is based on the specific psychometric properties of that scale. CDMRP's use of such a 9-point scale would better reflect current peer review practices and help experts who review for multiple funding agencies be more comfortable with the CDMRP scoring system.

TABLE 5-3. NIH Peer Review Scoring Scale.

NIH Peer Review Scoring Scale.

Reviewer Roles

Scientist reviewers evaluate an entire application; consumer reviewers are required to critique only specific sections—the most important of which is the “impact statement”—but they may read and critique other sections if they choose. The impact statement describes the potential of the proposed research to address the goal of the program and the potential effects it will have on the scientific research community or on people who have, or are survivors of, the health condition. Directions for preparing an impact statement are included in each program announcement, but the specific content varies by award mechanism ( CDMRP, 2016f ). Specialty scientist reviewers are responsible for critiquing and scoring only designated sections of the applications, such as the research plan or statistical design, but, as with consumer reviewers, they may critique additional sections of the application if they choose.

Each peer reviewer writes a preliminary critique on the strengths and weaknesses of his or her assigned applications and scores each criterion; the data are entered directly into eBRAP. Specialty or ad hoc peer reviewers provide criterion scores for those specific areas that they are charged with reviewing (for example, the statistical plan for a biostatistician specialty reviewer). Each assigned reviewer, including specialty reviewers, also provides an overall score ( Salzer, 2016b ). The committee is concerned about having reviewers who may have read only a portion of an application provide an overall score for it. However, the committee recognizes that the preliminary peer review scores, including the overall scores, can be revised by a reviewer during the full panel discussion of an application.

  • THE PEER REVIEW MEETING

There are five possible formats for conducting scientific peer review: onsite, in-person peer review panels; online/virtual peer review panels; teleconference peer review panels; video teleconference peer review panels; and online/electronic individual peer reviews. The peer review format to be used is decided by the program manager and included in the task order assumptions. An onsite peer review meeting lasts approximately 2–3 days, whereas teleconference or videoconference panels generally meet over 1–3 afternoons. In 2014, across 23 CDMRP research programs, there were 124 in-person panels, 23 videoconferences, 41 teleconferences, and 61 online conferences (49 of the online conferences were held by the Peer Reviewed Medical Research Program alone) ( Salzer, 2016c ). The majority of the programs had only in-person meetings or in-person meetings plus another meeting format, a few programs had only teleconferences, and none of the programs used only videoconferences or online meetings. Among peer reviewers who responded to the committee's solicitation of input, many stated that the in-person panel meetings were far superior to meetings held by teleconference or another meeting format.

When possible and depending on program size, most or all peer review panels for a given program are scheduled to take place in consecutive sessions. At the start of peer review panel meetings, the program manager presents a plenary briefing for all reviewers which includes an overview of CDMRP, the history and background of the program, the award mechanisms and their intent, the goals and expectations of the review process, and a summary of how the peer review deliverables inform the programmatic review panel. Program managers observe the performance of the panel as a whole as well as the performance of the individual peer reviewers, the panel chair, and the scientific review officers. Program managers also ensure that the panel discussions are consistent with the program announcement. For large programs with multiple peer review panels, the program manager may request that the program's science officer(s) attend to provide additional oversight ( Salzer, 2016b ).

During the meeting, the chair calls each individual application for discussion. Chairs are responsible for being familiar with all applications assigned to their panel and their preliminary critiques and scores. Each peer review panel must maintain a quorum of at least 80% of all panel members. The discussion of an application begins with the primary reviewer summarizing its goals, strengths, and weaknesses, followed by additional comments from the secondary and specialty peer reviewers, if applicable, and consumer reviewers. For in-person meetings, the chair facilitates further discussion of the application between the assigned reviewers and the other panel members ( Kaime et al., 2010 ).

All panel members assign a final score to each application following deliberations on it ( Kaime et al., 2010 ). Specialty peer reviewers are considered equal voting members of the panel for applications that they reviewed, but their input is limited to their assigned applications and not the panel's entire portfolio ( Salzer, 2016b ). The overall score provided by each voting panel member is averaged to produce the overall score for the application, which is provided to the applicant along with the summary statement ( MOMRP, 2014 ).
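The aggregation described above is simple arithmetic. A minimal sketch of the computation (the panel scores below are hypothetical, and the actual CDMRP scoring scale is not specified in this chapter):

```python
import statistics

def aggregate_overall_score(panel_scores):
    """Average the overall scores from all voting panel members and report
    the standard deviation, both of which appear in the summary statement."""
    mean = statistics.mean(panel_scores)
    sd = statistics.stdev(panel_scores) if len(panel_scores) > 1 else 0.0
    return round(mean, 2), round(sd, 2)

# Hypothetical overall scores from five voting panel members.
mean, sd = aggregate_overall_score([2.0, 3.0, 2.5, 2.0, 3.5])
print(mean, sd)  # 2.6 0.65
```

A larger standard deviation signals more disagreement among panel members, which is why it is reported to the applicant alongside the mean.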

Reviewers may revise their preliminary critiques, if necessary, and complete the standardized application evaluation forms following the general discussion of the application. The panel chair then gives an oral summary of the discussion, including any reviewer's revisions. Panel members must finalize their scores immediately after the discussion of each application, and critiques may be modified at any time during the meeting and up to 1 hour after the meeting is concluded ( Salzer, 2016d ). The discussion is incorporated into the final summary statement for the application (discussed under Post-Meeting Activities and Deliverables). All panel reviewers are given an opportunity to comment on the chair's oral summary ( Salzer, 2016b ).

Programs that handle large volumes of applications may use an expedited review process ( Kaime et al., 2010 ). This process is a form of review triage and was instituted to decrease the cost and increase the efficiency of the peer review processes ( Salzer, 2016c ). In this process, all applications are reviewed in the pre-meeting phase by assigned peer reviewers as previously described. Pre-meeting scores are collated by the peer review contractor, who then sends the reviewers' scores (overall and criteria scores) for all applications to the CDMRP program manager. The program manager then reviews the scores and selects the range of scores (and thus applications) that will be assigned for discussion as well as those that will not be discussed at the plenary peer review meeting. Typically, the scoring threshold for discussion is the top 10% (although this could be as much as 40%) of applications. Applications designated for expedited review are not discussed at the plenary peer review meeting unless the application is championed. An expedited application may be championed by any member of the peer review panel and will immediately be added to the docket for full panel discussion ( Salzer, 2016c ).
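The triage logic can be sketched as follows. This is an illustrative assumption of how the cutoff might be applied, not CDMRP's actual implementation; in particular it assumes that higher scores are better, which the chapter does not specify:

```python
def select_for_discussion(applications, threshold_fraction=0.10, championed=frozenset()):
    """Expedited review triage: send the top-scoring fraction of applications
    to full panel discussion, plus any application championed by a panel
    member (a champion overrides the score cutoff)."""
    ranked = sorted(applications, key=lambda app: app["score"], reverse=True)
    cutoff = max(1, round(len(ranked) * threshold_fraction))
    discuss = {app["id"] for app in ranked[:cutoff]}
    discuss |= set(championed)
    return discuss

# Ten hypothetical applications scored 1.0 (worst) through 10.0 (best).
apps = [{"id": f"A{i}", "score": float(i)} for i in range(1, 11)]
print(select_for_discussion(apps, championed={"A3"}))  # A10 by score, A3 by champion
```

Raising `threshold_fraction` to 0.40 reproduces the upper bound mentioned above, where as much as 40% of applications may be discussed.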

  • POST-MEETING ACTIVITIES AND DELIVERABLES

The main outcome of a peer review meeting is a summary statement for each application. Summary statements are used to help inform the deliberations of the programmatic panel when making funding recommendations (see Chapter 6 ). After the review process is complete and funding decisions have been made, the summary statements are provided as feedback to the respective principal investigators.

Summary Statements

Following a peer review panel meeting, the peer review contractor generates an overall debriefing report summarizing all comments made by the reviewers during the panel discussion. For each application that was reviewed by a panel, the SRO prepares a multi-page summary statement that consists of

  • identifying information for the application;
  • an overview of the proposed research which may include the specific aims;
  • the average overall score from all panel members who participated in the peer review meeting (with standard deviation so that the applicant can see how much variability there was in panel scores);
  • the average criterion-based scores (standard deviation may be provided);
  • for each criterion section, the assigned reviewer's written critiques of the application's strengths and weaknesses, including specialty reviews;
  • any panel discussion notes captured by the SRO during the panel meeting, such as comments from panel members who were not assigned to the application or the chair's oral summary; and
  • for unscored criteria, such as budget and application presentation, a summary of the strengths and weaknesses noted by assigned scientist reviewers and any relevant panel discussion notes ( Salzer, 2016b ).
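The components listed above amount to a simple structured record. A sketch with illustrative field names (the names are our assumptions, not CDMRP's):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CriterionCritique:
    criterion: str                 # a scored criterion, or an unscored item such as "Budget"
    strengths: list[str]
    weaknesses: list[str]
    score: Optional[float] = None  # None for unscored criteria

@dataclass
class SummaryStatement:
    application_id: str
    research_overview: str              # may include the specific aims
    overall_score_mean: float           # averaged across voting panel members
    overall_score_sd: float             # shows the applicant the score spread
    criterion_critiques: list[CriterionCritique] = field(default_factory=list)
    panel_discussion_notes: list[str] = field(default_factory=list)
```

Keeping the panel-discussion notes separate from the assigned reviewers' critiques mirrors the rule, noted below, that the SRO does not alter the written critiques themselves.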

The SRO does not change or alter the reviewers' written critiques except for formatting, spelling, typographical errors, etc. The SRO drafts all summary statements, and the program manager reviews them ( Salzer, 2016b ).

After the meeting, the CDMRP program manager receives a final scoring report as well as administrative and budgetary notes ( Salzer, 2016b ). Program managers review the summary statements and a final scoring report and may request rewrites from reviewers or the contractor if, for example, the critiques and summary do not match the scores or a summary statement is inadequate. If there are any actions that need to be completed prior to programmatic review, such as clarifying eligibility issues, the program manager will act on them before the programmatic review meeting ( Salzer, 2016b ).

Quality Assurance and Control of the Peer Review Process

The CDMRP program manager reviews all deliverables and evaluates contractor performance according to the individual contract's Quality Assurance Surveillance Plan. Contractor performance is reviewed and evaluated on a quarterly basis. Through the evaluations, CDMRP is able to provide feedback on the members of the peer review panels. All comments are sent to the contracting officer's representative, who gathers them and sends them to the contracting officer. Any discrepancies or deficiencies in the Quality Assurance Surveillance Plan are discussed with the contractor, and a resolution is sought ( Salzer, 2016a ).

For each funding cycle, the U.S. Army Medical Research and Materiel Command (USAMRMC) evaluates a random sample of 20 applications across all programs to assess whether reviewer expertise was appropriate for the application and to compare the information with the reviewer's self-assessment of their areas of expertise ( CDMRP, 2015c ). However, this random sample equates to less than one application to be reviewed per program. No additional information was provided on this evaluation process.

In addition to the USAMRMC evaluation, the program manager reviews at least 10% of the draft summary statements from each panel to assess the reviewers' evaluation of applications and to ensure that the summary statements accurately capture the panel's discussions ( Salzer, 2016b , d ). Summary statements are not chosen randomly; factors that may flag a summary for review include large changes in pre- versus post-discussion scores, applications with high scores, applications for which the panel had disagreements, and applications considered to be “high-profile” ( Salzer, 2016d ). Summary statements are reviewed to ensure that there is concordance among the evaluation criteria, overall scores, and the written critiques; that key issues of the panel discussion were captured; and that the critiques are appropriate for each criterion. If an issue that may have affected the peer review outcome is identified, the program manager may request a new peer review for the application. Other assessments that do not directly affect a peer review outcome are given as feedback to the contractor ( Salzer, 2016d ).
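One way to picture this selection: review every flagged summary statement, then top up with a random draw so that at least 10% of the panel's statements are checked. The top-up rule is our illustrative assumption; the chapter states only the flagging factors and the 10% floor:

```python
import math
import random

def select_statements_for_qa(statement_ids, flagged_ids, floor_fraction=0.10, rng=None):
    """Pick summary statements for program-manager review: all flagged ones
    (large pre- vs. post-discussion score changes, high scores, panel
    disagreement, "high-profile" applications), topped up randomly to
    reach the 10% floor."""
    rng = rng or random.Random()
    selected = [sid for sid in statement_ids if sid in flagged_ids]
    floor = max(1, math.ceil(len(statement_ids) * floor_fraction))
    shortfall = floor - len(selected)
    if shortfall > 0:
        pool = [sid for sid in statement_ids if sid not in flagged_ids]
        selected += rng.sample(pool, min(shortfall, len(pool)))
    return selected
```

When more than 10% of statements are flagged, all of them are still reviewed; the floor only forces a random top-up when few statements are flagged.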

A mechanism for the quality assurance of peer reviewers is the post-meeting evaluation by the CDMRP program manager with support from the contractor's scientific review manager and SRO. The goal of this evaluation is to identify reviewers and chairs who should be asked to serve again, should their expertise be needed. Panel members are also evaluated to identify those who might be good chairs in future review cycles. There is no standardized form or criteria for peer reviewer evaluations ( Salzer, 2016c ), and no examples of program-specific evaluations were provided. The performance assessments of individual peer reviewers consider expertise, the ability to communicate ideas and rationale, group interactions, the ability to present and debate an opposing view in a professional manner, strong critique writing skills, and adherence to policies on nondisclosure and confidentiality ( Salzer, 2016c ). As part of the quality assurance check of summary statements, the overall quality of each reviewer's product is evaluated to be sure that there is a match between scores and written critiques, that there is solid reasoning, and that reviewers have demonstrated an understanding of how to judge each criterion ( Salzer, 2016d ). It is unclear to the committee whether the evaluation also includes any feedback from the peer reviewer, such as whether the panel had the appropriate expertise. Because program needs and award mechanisms may change from year to year, there is no guarantee that a particular expertise will be required each year, so even reviewers or chairs who are exemplary may not be invited to participate in subsequent panels if their expertise is not needed ( Salzer, 2016b ).

Post-Peer Review Survey

Following the peer review meetings, scientist and consumer reviewers complete an online survey and evaluation form to provide feedback on the experience, the process, and the areas that could use improvement ( Salzer, 2016b ). CDMRP informed the committee that the survey includes reviewer demographics; satisfaction with the process, including whether consumers are engaged; an evaluation of pre-meeting support (including logistics, webinars, and review guidance) and the technological interfaces used for review; and whether reviewers have recently submitted an application to another CDMRP research program or to another award mechanism in the same research program. Scores and comments are compiled and used by the peer review contractor and CDMRP program managers to assess the peer review process and to evaluate the program announcement for future modifications—for example, to clarify overall intent of the award, focus areas, or peer review criteria. Other aspects may be discussed at the program's subsequent vision setting meeting ( Salzer, 2016a ).

CDMRP stated that program managers have an opportunity to add program-specific or award mechanism–specific questions as needed to the post-review survey. For example, Box 5-1 shows the questions that were added to the post-peer review survey for the Amyotrophic Lateral Sclerosis Research Program at CDMRP's request in 2014 and 2015.

Questions Added by CDMRP to the Post-Peer Review Survey for the Amyotrophic Lateral Sclerosis Research Program.

The committee requested a copy of the post-peer review survey as well as the aggregated responses from a sample program. However, neither the survey nor the aggregate sample results were provided because the post-peer review survey is owned by the peer review support contractor and considered a contractor deliverable.

The consumer reviewers meet separately as a group at the end of the meeting to debrief on their experience and provide feedback to CDMRP through the consumer review administrator on how the experience could be improved ( CDMRP, 2016f ). Questionnaires are used to evaluate the mentor program and other consumer aspects of the CDMRP program. The results of the questionnaires were not available to the committee, but individual testimony from consumers who attended the committee's open sessions as well as consumers and scientists who responded to the committee's solicitation of input reported that consumers' input to the peer review process was helpful.

  • Cite this Page National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on the Health of Select Populations; Committee on the Evaluation of Research Management by DoD Congressionally Directed Medical Research Programs (CDMRP). Evaluation of the Congressionally Directed Medical Research Programs Review Process. Washington (DC): National Academies Press (US); 2016 Dec 19. 5, Peer Review.

DoD Peer Reviewed Medical Research Program

Discovery Award

Amount of funding: Direct Costs budgeted for the entire period of performance will not exceed $200,000.

Purpose: The intent of the PRMRP DA is to support innovative, non-incremental, high-risk/potentially high-reward research that will provide new insights, paradigms, technologies, or applications. Studies supported by this award are expected to lay the groundwork for future avenues of scientific investigation. The proposed research project should include a well-formulated, testable hypothesis based on a sound scientific rationale and study design. This award mechanism may not be used to conduct clinical trials; however, non-interventional clinical research studies are allowed.

PI Eligibility: Independent investigators - Faculty with PI eligibility and CE faculty (with an approved CE faculty PI waiver obtained through their RPM in RMG prior to the pre-application/letter of intent) may be the PI or Partnering PI.

Stanford eligibility clarification - Fellows are not eligible: even though the guidelines indicate that a "postdoctoral fellow or clinical fellow" may serve as PI, the application and guidelines do not require a mentor. Therefore, this award is considered a research grant, for which Stanford PIship policy permits only faculty to apply. Because it is not a mentored career development award, Instructors, Clinical Instructors, Academic Staff-Research (i.e., research associates), Postdocs, and Clinical Fellows are not eligible (they may not submit career development PI waivers).

Timeline:
  • Pre-Application (Letter of Intent) Submission Deadline: March 29, 2023, via eBRAP. Please include your RPM's name as business official in the pre-application.
  • Institutional Representative (RPM/RMG or CGO/OSR) Deadline: April 19, 2023
  • Application Submission Deadline: April 26, 2023, via grants.gov

Guidelines: https://cdmrp.army.mil/funding/prmrp

Focused Program Award

Amount of funding: Direct Costs budgeted for the entire period of performance will not exceed $7.2M.

Purpose: The FY23 PRMRP Focused Program Award (FPA) is intended to optimize research and accelerate solutions to a critical question related to one of the congressionally directed FY23 PRMRP Topic Areas and one of the FY23 PRMRP Strategic Goals through a synergistic, multidisciplinary research program.

Lead PI Eligibility: Stanford Full Professor with PI eligibility. The PI is required to devote a minimum of 20% effort to this award.

Project Leader Eligibility: Must be at or above the level of Assistant Professor with PI eligibility, or a CE Assistant Professor (with an approved CE faculty PI waiver obtained through their RPM in RMG prior to the pre-application/letter of intent).

Not eligible: Instructors, Clinical Instructors, Academic Staff-Research (i.e., research associates), and Postdocs are not eligible for this RFP because Stanford does not consider them to hold independent positions.

Timeline:
  • REQUIRED Pre-Application (Pre-Proposal) Deadline: April 12, 2023, via eBRAP. Please include your RPM's name as business official in the pre-application.
  • Institutional Representative (RPM/RMG or CGO/OSR) Deadline: July 12, 2023
  • Application Submission Deadline: July 19, 2023, via grants.gov

Guidelines: https://cdmrp.army.mil/funding/prmrp

Investigator-Initiated Research Award

Amount of funding: Direct Costs budgeted for the entire period of performance will not exceed $1.6M ($2M with the Partnering PI Option).

Purpose: The PRMRP Investigator-Initiated Research Award (IIRA) is intended to support studies that will make an important contribution toward research and/or patient care for a disease or condition related to one of the FY23 PRMRP Topic Areas and one of the FY23 PRMRP Strategic Goals.

PI Eligibility: Independent investigators - Faculty with PI eligibility and CE faculty (with an approved CE faculty PI waiver obtained through their RPM in RMG prior to the pre-application/letter of intent) may be the PI or Partnering PI.

Not eligible: Instructors, Clinical Instructors, Academic Staff-Research (i.e., research associates), and Postdocs are not eligible for this RFP because Stanford does not consider them to hold independent positions.

Timeline:
  • Pre-Application (Letter of Intent) Submission Deadline: April 19, 2023, via eBRAP. Please include your RPM's name as business official in the pre-application.
  • Institutional Representative (RPM/RMG or CGO/OSR) Deadline: May 23, 2023
  • Application Submission Deadline: May 31, 2023, via grants.gov

Guidelines: https://cdmrp.army.mil/funding/prmrp

Technology/Therapeutic Development Award

Amount of funding: Funding Level 1 Direct Costs will not exceed $2M; Funding Level 2 Direct Costs will not exceed $4M.

Purpose: The PRMRP Technology/Therapeutic Development Award (TTDA) is a product-driven award mechanism intended to provide support for the translation of promising preclinical findings into products for clinical applications, including prevention, detection, diagnosis, treatment, or quality of life, for a disease or condition related to one of the FY23 PRMRP Topic Areas and one of the FY23 PRMRP Strategic Goals. Products in development should be responsive to the health care needs of military Service Members, Veterans, and/or beneficiaries. This award mechanism may not be used to conduct clinical trials.

Eligibility: Independent investigators - Faculty with PI eligibility and CE faculty (with an approved CE faculty PI waiver obtained through their RPM in RMG prior to the pre-application/letter of intent) may be the PI.

Not eligible: Instructors, Clinical Instructors, Postdoctoral Fellows, Clinical Fellows, and Academic Staff-Researchers (i.e., research associates) are not eligible because Stanford does not consider them to hold independent or faculty-level positions.

Timeline:
  • Pre-Application (Letter of Intent) Submission Deadline: April 19, 2023, via eBRAP. Please include your RPM's name as business official in the pre-application.
  • Institutional Representative (RPM/RMG or CGO/OSR) Deadline: May 23, 2023
  • Application Submission Deadline: May 31, 2023, via grants.gov

Guidelines: https://cdmrp.army.mil/funding/prmrp

Clinical Trial Award

Amount of funding: Direct Costs budgeted for the entire period of performance will not exceed $500,000.

Purpose: The FY23 PRMRP Clinical Trial Award (CTA) supports the rapid implementation of clinical trials with the potential to have a significant impact on a disease or condition addressed in one of the congressionally directed FY23 PRMRP Topic Areas and FY23 PRMRP Strategic Goals. Clinical trials may be designed to evaluate promising new products, pharmacologic agents (drugs or biologics), devices, clinical guidance, and/or emerging approaches and technologies. Proposed projects may range from small proof-of-concept trials (e.g., pilot, first-in-human, phase 0) to demonstrate feasibility or inform the design of more advanced trials, through large-scale trials to determine efficacy in relevant patient populations.

PI Eligibility: Independent investigators - Faculty with PI eligibility and CE faculty (with an approved CE faculty PI waiver obtained through their RPM in RMG prior to the pre-application/letter of intent) may be the PI or Partnering PI.

Not eligible: Instructors, Clinical Instructors, Academic Staff-Research (i.e., research associates), and Postdocs are not eligible for this RFP because Stanford does not consider them to hold independent positions.

Timeline:
  • REQUIRED Pre-Application (Pre-Proposal) Deadline: April 12, 2023, via eBRAP. Please include your RPM's name as business official in the pre-application.
  • Institutional Representative (RPM/RMG or CGO/OSR) Deadline: July 12, 2023
  • Application Submission Deadline: July 19, 2023, via grants.gov

Guidelines: https://cdmrp.army.mil/funding/prmrp

Lifestyle and Behavioral Health Interventions Research Award

Amount of funding: Direct Costs for the entire period of performance will not exceed $3M.

Purpose: The FY23 PRMRP Lifestyle and Behavioral Health Interventions Research Award (LBIRA) supports clinical research and/or clinical trials using a combination of scientific disciplines, including behavioral health, psychology, psychometrics, biostatistics and epidemiology, surveillance, and public health. Applications are required to address and provide a solution to one of the congressionally directed FY23 PRMRP Topic Areas and FY23 PRMRP Strategic Goals.

Eligibility: Independent investigators - Faculty with PI eligibility and CE faculty (with an approved CE faculty PI waiver obtained through their RPM in RMG prior to the pre-application/letter of intent) may be the PI.

Not eligible: Instructors, Clinical Instructors, Postdoctoral Fellows, Clinical Fellows, and Academic Staff-Researchers (i.e., research associates) are not eligible because Stanford does not consider them to hold independent or faculty-level positions.

Timeline:
  • Pre-Application (Letter of Intent) Submission Deadline: April 19, 2023, via eBRAP. Please include your RPM's name as business official in the pre-application.
  • Institutional Representative (RPM/RMG or CGO/OSR) Deadline: May 23, 2023
  • Application Submission Deadline: May 31, 2023, via grants.gov

Guidelines: https://cdmrp.army.mil/funding/prmrp

  Additional Information

Please include your institutional official's name (RPM/RMG or CGO/OSR) as business official in the pre-application/LOI.

eBRAP Funding Opportunities and Forms  (including the General Application Instructions).


Peer Reviewed Cancer Research Program

Vision – To advance mission readiness of U.S. military members affected by cancer.


The PRCRP has developed a strategy to address multiple issues in cancer research over the spectrum of different cancer topics considered for funding. These Overarching Challenges are critical gaps in cancer research, care, and/or patient outcomes that will advance mission readiness of U.S. military members affected by cancer and improve quality of life by decreasing the burden of cancer on Service Members, their families, Veterans, and the American public.

  • Investigate primary, secondary, and tertiary prevention interventions/strategies to decrease cancer burden.
  • Determine the risk factors, etiology, or mechanisms underlying cancer development to improve prevention interventions.
  • Diagnostics/Prognostics
      • Identify approaches to predict treatment resistance, recurrence, and the development of advanced disease.
      • Distinguish unique features driving cancer occurrence across the spectrum of ages.
      • Develop and improve minimally invasive methods for the detection of neoplasia initiation, progression, and recurrence.
  • Therapeutics
      • Transform cancer treatment, especially for advanced, recurrent, and metastatic disease.
      • Improve current therapies, including systemic and local treatments.
      • Evaluate disease progression and/or treatment response over time.
      • Leverage the mechanisms of cancer development to improve treatment methods for all communities.
  • Patient Well-Being and Survivorship
      • Study methods to address survivorship issues, including quality of life, wellness, mental health, psychological impact of recurrence, reproductive/sexual health, and/or disability.
      • Reduce short- and long-term treatment toxicities, including neurocognitive and physical effects.
      • Investigate ways to bridge gaps between treatment and survivorship, including alternative medicine, nutrition and lifestyle factors, and supportive care.
      • Understand and address the immediate and enduring burdens on caregivers, families, and communities.
  • Improve prevention strategies, diagnosis, treatment, and outcomes for patients in underserved or underrecognized populations.
  • Study methods to improve accessibility to care and address survivorship.
  • Advance health equity and reduce disparities in cancer care, including telehealth.
  • Develop strategies to understand barriers to and improve communication among provider, patient, and care network.

Congressional Appropriations


  • $784.8 million FY09-22
  • $130 million FY23
  • Topic Areas Offered by Year (FY09-24)
  • Annual Reports to Congress

Funding Summary


  • 1,021 Awards in FY09-22
  • Recent Applications Recommended for Funding
  • Program Portfolio

Programmatic Panels


  • FY24 Programmatic Panel
  • Previous Years' Programmatic Panels

Peer Review Participants


  • FY23 Peer Review Participants
  • Previous Years' Peer Review Participants

Related Videos

  • The Peer Reviewed Cancer Research Program (PRCRP) Vision
  • The Convergent Science Virtual Cancer Center (CSVCC)
  • Convergent Science Virtual Cancer Center Supporting Military Health

News & Highlights

  • FY24 PRCRP Funding Opportunities Now Available!
  • Department of Defense Peer Reviewed Cancer Research Program Funding Opportunities for Fiscal Year 2024 (FY24)
  • FY23 PRCRP Recommended for Funding List
  • Chaotic protein that fuels 75% of all cancers can be controlled with new therapy (external link)
  • Convergent Science Virtual Cancer Center Supporting Military Health Video
  • New Behavioral Health Science Awards Aim to Improve Quality of Life in Pediatric, Adolescent, and Young Adult Cancer Survivors
  • Gene engineered cell therapy developed to target brain metastatic melanoma (external link)
  • Researchers discover molecular fail-safe that keeps bladder tissues from turning cancerous (external link)
  • Novel Targets for the Treatment of Metastatic Colorectal Cancer
  • IN FOCUS: Patient Well-Being and Survivorship in Cancer Care and Research
  • Hard To Lose Mutations In Tumors Predict Response To Immunotherapy (external link)
  • Novel CAR-T Therapy Targeting BAFF-R Against B-Cell Lymphomas
  • In Her Own Words: Melinda Bachini Speaks about Her Journey with a Rare Cancer and Her Experiences with Living Life to the Fullest
  • A Promising Nanotech Approach to Enhance Immunotherapy in Liver Cancer
  • Annie Horner PRCRP Consumer Reviewer
  • PRCRP Infographic
  • PRCRP Program Summary Sheet A
  • PRCRP Program Summary Sheet B
  • More…

Consumer Stories

  • Coming soon!

Research Highlights

  • Utilizing a New Orthotopic Xenograft Model of Diffuse Intrinsic Pontine Glioma (DIPG) to Develop Therapeutic Strategies

Mission – To successfully promote high-impact research in cancer prevention, detection, treatment, quality of life, and survivorship for Service Members, their families, Veterans, and the American public.

Last updated Wednesday, May 8, 2024

FiveThirtyEight

Jul. 8, 2021 , at 6:00 AM

How Science Moved Beyond Peer Review During The Pandemic

And what scientists learned they still needed it for.

By Maggie Koerth

Filed under COVID-19

PHOTO ILLUSTRATION BY EMILY SCHERER / GETTY IMAGES

When papers from China began flooding the websites bioRxiv and medRxiv in the first months of 2020, it was a strange and notable change. Founded as places where scientists could post drafts of research papers before those papers went through a traditional peer review, these sites had never really advertised much in China or gotten many submissions from scientists in that country before, said Richard Sever, the co-founder of bioRxiv and medRxiv. The sudden shift turned out to be a preview of the pandemic to come. “We got a wave of submissions from China and then a wave of submissions from Italy. And I remember being with a colleague, looking at submission numbers, and the chart was so eerily familiar,” Sever said. “It looks just like the progression of pandemic caseloads.”

“Preprint,” or “prepress,” servers have been around for decades, but during the COVID-19 pandemic, they took on a new notoriety and level of importance. Two of these sites alone, bioRxiv and medRxiv, hosted 25 percent of all COVID-19-related scientific research published during the pandemic’s first 10 months — more than 10,000 papers. In contrast, only 78 preprints were uploaded to bioRxiv during the entire 2015-16 Zika epidemic. (The medRxiv site didn’t exist yet.) COVID-19 proved to scientists that preprint servers are crucial resources in a crisis. At the same time, though, experts said the pandemic also made the shortcomings of scientific publishing clear. Preprints share a lot of the same limitations as peer-reviewed research, especially when it comes to how the media and the public use research once it’s out in the world for all to see.

Websites that post scientific research before it has been peer-reviewed and published in a scientific journal have been active since at least 1991, when the classic arXiv site, originally used mainly by physicists, went live. Even before that, scientists have shared drafts and notes among themselves through personal correspondence ever since “science” as a field became a thing. Those lines of communication exist parallel to the traditional process of peer review, which puts gatekeepers between research and publication. First, a paper has to be accepted by the editors of a scientific journal. Then, it goes to a (usually anonymous) panel of other scientists for critique. Their notes will lead to edits, which usually lead to more experiments, and then (hopefully) eventual publication. 

But the peer-review process does not happen quickly. The editors are sorting through thousands of dense submissions, and the reviewers are volunteers reading in their free time. On average, it takes about six months for a life-sciences research paper to go from acceptance to publication. “During the outbreak we maintained the same peer-review and editing process, just on an accelerated schedule … but I don’t think that this is a sustainable model,” said Dr. Eric Rubin, editor in chief of the New England Journal of Medicine.

He and other scientists said the issue of speed was particularly important for clinical research during the pandemic because doctors needed the most up-to-date information to make life-saving decisions for critically ill patients. For example, the large trial of the steroid dexamethasone — which first demonstrated that this cheap, widely available drug could reduce the likelihood of death in COVID-19 patients — was originally posted as a preprint. It would show up, traditionally published, in the New England Journal of Medicine eight months later, but the preprint made it possible to get that knowledge into doctors’ hands faster. “I had one MD who contacted me, and he said, ‘You know, there are probably people who are alive today who would have been dead if not for preprints,’” Sever said.

Successes like that have contributed to making preprints more widely accepted among biological and medical scientists. It also helps that research in preprints isn’t a lot different statistically from how those same papers eventually end up being published in a peer-reviewed scientific journal. A 2020 study of pre-COVID-19 bioscience that compared preprint papers and their later, peer-reviewed versions, for example, found that the biggest differences were in details like how clearly the title reflected the conclusions or how easy it was to find relevant information in the article.


But that should be understood as a critique of the peer-review process rather than a glowing endorsement of preprint paper accuracy, said Alice Fleerackers, a graduate student and researcher at the Scholarly Communications Lab, a joint project of the Simon Fraser University and University of Ottawa. “There’s really a perception that peer review is a trustworthy quality-control mechanism,” Fleerackers said. But research hasn’t been able to support that idea. All the jokes and paranoia about COVID-19 being spread by 5G cellphone technology, for instance, trace their origin to a paper published in — and later retracted by — a scientific journal that claims to peer-review its content.

“We’ve seen with lots of the misinformation and many of the retractions that came out during COVID … ‘high-quality’ — and I’m using air quotes here — research in peer-reviewed publications can be just as flawed as research in preprint,” Fleerackers said. 

The real trouble with preprints — which is, funnily enough, also the real trouble with peer-reviewed research — is how those studies are promoted and written about on social media and by the press, experts told me. 

“Prior to the pandemic, I had fewer concerns about poor-quality science being preprinted and then widely disseminated,” said Maia Majumder, a computational epidemiologist at Harvard Medical School. “But now, everyday people are reading them too — and the media is covering them at a rate that far outpaces pre-2020. This means that when bad science is preprinted and happens to be sensational enough to get traction, it can quickly shape the discourse around a given topic.”

One of the top-tweeted and top-reported-on preprints in the first 10 months of the pandemic was a study that tried to estimate the share of the population in Santa Clara County, California, who had already been infected with COVID-19. The results, which found evidence for a higher rate of infection in the general population than anyone had previously calculated, were used to support the idea that COVID-19 would cause few deaths and that no lockdowns or distancing measures would be necessary. The study was heavily critiqued and its estimates were revised downward before BuzzFeed News revealed the research had been funded in part by an airline magnate who was critical of COVID-19-related social restrictions.

It’s unclear how much of the study’s coverage in the media was actually reporting the findings as opposed to the criticism and controversy it caused, but Fleerackers’s research suggests digital media outlets aren’t routinely providing readers with the context they need to understand the limitations of preprint studies. In a study published in January, Fleerackers found that, about half the time, digital publications made no mention that a COVID-19 preprint study being discussed even was a preprint.

Reporting on preprints without explaining their limitations risks misleading the public the same way that reporting on peer-reviewed research as inherently accurate does, Fleerackers said. Though this reality was certainly known before the pandemic, the importance of preprint servers over the past year and a half has really driven the lesson home. Servers like bioRxiv and medRxiv have added prominent disclaimers to their pages. University press offices have begun to institute new rules that control the way preprint articles are promoted to the media. There’s even a new journal specifically dedicated to reviewing hyped COVID-19 preprints — and debunking them, if necessary. 

But the pandemic has also made clear how much the information in a preprint slips out of scientists’ control the moment it is posted online. “I hope they’re developing an awareness,” said Fleerackers, “that if you put something in public it is fair game for journalists, and for the public … and for conspiracy theorists.”

Maggie Koerth was a senior reporter for FiveThirtyEight. @maggiekb1

