
Understanding Peer Review in Science

Peer Review Process

Peer review is an essential element of the scientific publishing process that helps ensure research articles are evaluated, critiqued, and improved before release into the academic community. Here we look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other's work in educational settings, in professional settings, and in the publishing world. The goal of peer review is to improve quality, define and maintain standards, and help people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

There are advantages and disadvantages to each method. Anonymous reviews reduce bias but limit collaboration, while open reviews increase transparency at the risk of introducing bias.
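The visibility rules in the three models above can be captured in a small lookup table (a sketch for illustration only; the key names are invented, not taken from any real system):

```python
# Who knows whose identity under each peer review model,
# per the three types listed above. Key names are invented
# for this illustration.
REVIEW_MODELS = {
    "single_blind": {"reviewer_knows_author": True,  "author_knows_reviewer": False},
    "double_blind": {"reviewer_knows_author": False, "author_knows_reviewer": False},
    "open":         {"reviewer_knows_author": True,  "author_knows_reviewer": True},
}

# Example: in a double-blind review, neither side knows the other.
assert not REVIEW_MODELS["double_blind"]["reviewer_knows_author"]
assert not REVIEW_MODELS["double_blind"]["author_knows_reviewer"]
```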

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise: Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity: Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality: The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness: Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission: Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment: The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review: If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback: Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission: Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision: The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication: If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.
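The steps above amount to a simple linear workflow with early exits for rejection. A minimal sketch (stage names are invented for illustration, not taken from any journal's system):

```python
# Illustrative model of the manuscript workflow described above.
# Stage names are invented for this sketch.
WORKFLOW = [
    "submission",
    "editorial_assessment",
    "peer_review",
    "reviewer_feedback",
    "revision_and_resubmission",
    "final_decision",
    "publication",
]

def next_stage(stage, passed=True):
    """Advance the manuscript one stage, or stop on rejection."""
    if not passed:
        return "rejected"
    i = WORKFLOW.index(stage)
    return WORKFLOW[i + 1] if i + 1 < len(WORKFLOW) else "published"

# A manuscript that clears every stage ends up published.
stage = "submission"
while stage in WORKFLOW:
    stage = next_stage(stage)
print(stage)  # published
```

In practice the peer review and revision stages often loop until the editor reaches a final decision, rather than running straight through as in this sketch.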

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.
  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: Reviewers' personal biases can affect their evaluation of a manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: Some reviewers take an idea from a submission and gain publication before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.

Related Posts

What is peer review?

From a publisher’s perspective, peer review functions as a filter for content, directing better quality articles to better quality journals and so creating journal brands.

Running articles through the process of peer review adds value to them. For this reason publishers need to make sure that peer review is robust.

Editor Feedback

“Pointing out the specifics about flaws in the paper’s structure is paramount. Are methods valid, is data clearly presented, and are conclusions supported by data?” (Editor feedback)

“If an editor can read your comments and understand clearly the basis for your recommendation, then you have written a helpful review.” (Editor feedback)

Principles of Peer Review

Peer Review at Its Best

What peer review does best is improve the quality of published papers by motivating authors to submit good quality work – and helping to improve that work through the peer review process. 

In fact, 90% of researchers feel that peer review improves the quality of their published paper (University of Tennessee and CIBER Research Ltd, 2013).

What the Critics Say

The peer review system is not without criticism. Studies show that even after peer review, some articles still contain inaccuracies and demonstrate that most rejected papers will go on to be published somewhere else.

However, these criticisms should be understood within the context of peer review as a human activity. The occasional errors of peer review are not reasons for abandoning the process altogether – the mistakes would be worse without it.

Improving Effectiveness

Some of the ways in which Wiley is seeking to improve the efficiency of the process include:

  • Reducing the amount of repeat reviewing by innovating around transferable peer review
  • Providing training and best practice guidance to peer reviewers
  • Improving recognition of the contribution made by reviewers

Visit our Peer Review Process and Types of Peer Review pages for additional detailed information on peer review.

Transparency in Peer Review

Wiley is committed to increasing transparency in peer review, increasing accountability for the peer review process and giving recognition to the work of peer reviewers and editors. We are also actively exploring other peer review models to give researchers the options that suit them and their communities.

What is peer review?

Peer review is ‘a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already know.’ You can learn more in this explainer from the Social Science Space.  

A picture showing a manuscript with annotations, a notebook, and a journal.

Peer review brings academic research to publication in the following ways:

  • Evaluation – Peer review is an effective form of research evaluation to help select the highest quality articles for publication.
  • Integrity – Peer review ensures the integrity of the publishing process and the scholarly record. Reviewers are independent of journal publications and the research being conducted.
  • Quality – The filtering process and revision advice improve the quality of the final research article as well as offering the author new insights into their research methods and the results that they have compiled. Peer review gives authors access to the opinions of experts in the field who can provide support and insight.

Types of peer review

  • Single-anonymized – the name of the reviewer is hidden from the author.
  • Double-anonymized – names are hidden from both the reviewers and the authors.
  • Triple-anonymized – names are hidden from authors, reviewers, and the editor.
  • Open peer review comes in many forms. At Sage we offer a form of open peer review on some journals via our Transparent Peer Review program, whereby the reviews are published alongside the article. The names of the reviewers may also be published, depending on the reviewers’ preference.
  • Post-publication peer review can offer useful interaction and a discussion forum for the research community. This form of peer review is not usual or appropriate in all fields.

To learn more about the different types of peer review, see page 14 of ‘The Nuts and Bolts of Peer Review’ from Sense about Science.

Please double check the manuscript submission guidelines of the journal you are reviewing in order to ensure that you understand the method of peer review being used.


All About Peer Review


So you need to use scholarly, peer-reviewed articles for an assignment...what does that mean? 

Peer review  is a process for evaluating research studies before they are published by an academic journal. These studies typically communicate  original research  or analysis for other researchers. 

The Peer Review Process at a Glance:

1. Researchers conduct a study and write a draft.

Looking for peer-reviewed articles?  Try searching in OneSearch or a library database  and look for options to limit your results to scholarly/peer-reviewed or academic journals. Check out this brief tutorial to show you how:   How to Locate a Scholarly (Peer Reviewed) Article

Part 1: Watch the video All About Peer Review (3 min.) and reflect on discussion questions.

After watching the video, reflect on the following questions:

  • According to the video, what are some of the pros and cons of the peer review process?
  • Why is the peer review process important to scholarship?
  • Do you think peer reviewers should be paid for their work? Why or why not?

Part 2: Take an interactive tutorial on reading a research article for your major.

Includes a certificate of completion to download and upload to Canvas.

  • Social Sciences (e.g., Psychology, Sociology)
  • Sciences (e.g., Health Science, Biology)
  • Arts & Humanities (e.g., Visual & Media Arts, Cultural Studies, Literature, History)

Click on the handout to view in a new tab, download, or print.

Anatomy of a Research Article

  • Teaching Peer Review for Instructors

In class or for homework, watch the video “All About Peer Review” (3 min.) .

Video discussion questions:

  • According to the video, what are some of the pros and cons of the peer review process?

Assignment Ideas

  • Ask students to conduct their own peer review of an important journal article in your field. Ask them to reflect on the process. What was hard to critique?
  • Have students examine a journal's web page with information for authors. What information is given to the author about the peer review process for this journal?
  • Assign this reading by CSUDH faculty member Terry McGlynn, "Should journals pay for manuscript reviews?" What is the author's argument? Who profits the most from published research? You could also hold a debate with one side for paying reviewers and the other side against.
  • Search a database like Cabell’s for information on the journal submission process for a particular title or subject. How long does peer review take for a particular title? Is it a blind review? How many reviewers are solicited? What is the acceptance rate?
  • Assign short readings that address peer review models. We recommend this issue of Nature on peer review debate and open review and this Chronicle of Higher Education article on open review in Shakespeare Quarterly .

Proof of Completion

Mix and match this suite of instructional materials for your course needs!

If you have questions about integrating a graded online component into your class, contact the Online Learning Librarian, Rebecca Nowicki ( [email protected] ).

Example of a certificate of completion:

Sample certificate of completion for a SDSU Library tutorial.

  • Last Updated: Sep 20, 2023 2:41 PM
  • URL: https://libguides.sdsu.edu/UnderstandingPeerReview

What is peer review?

Prepublication peer review has been part of science for a long time. Philosophical Transactions, the first peer-reviewed journal, published its first paper in 1665. But peer review may be even older still, because there are records of physicians in the Arab world reviewing the effectiveness of each other’s treatments in the 9th century.

Peer review is a critical part of the modern scientific process. For science to progress, research methods and findings need to be closely examined to decide on the best direction for future research. After a study has gone through peer review and is accepted for publication, scientists and the public can be confident that the study has met certain standards, and that the results can be trusted.

After an editor receives a manuscript, their first step is to check that the manuscript meets the journal’s rules for content and format. If it does, then the editor moves to the next step, which is peer review. The editor will send the manuscript to one or more experts in the field to get their opinion. The experts – called peer reviewers – will then prepare a report that assesses the manuscript, and return it to the editor. After reading the peer reviewer's report, the editor will decide to do one of three things: reject the manuscript, accept the manuscript, or ask the authors to revise and resubmit the manuscript after responding to the peer reviewer feedback. If the authors resubmit the manuscript, editors will sometimes ask the same peer reviewers to look over the manuscript again to see if their concerns have been addressed.

Some of the problems that peer reviewers may find in a manuscript include errors in the study’s methods or analysis that raise questions about the findings, or sections that need clearer explanations so that the manuscript is easily understood. From a journal editor’s point of view, comments on the importance and novelty of a manuscript, and if it will interest the journal’s audience, are particularly useful in helping them to decide which manuscripts are likely to be highly read and cited, and thus which are worth publishing.


Original URL: http://www.springer.com/authors/journal+authors/peer-review-academy?SGWID=0-1741413-12-959404-0


Peer Reviewed Literature



Research Librarian

For more help on this topic, please contact our Research Help Desk: [email protected] or 781-768-7303. Stay up-to-date on our current hours . Note: all hours are EST.


This Guide was created by Carolyn Swidrak (retired).

Research findings are communicated in many ways.  One of the most important ways is through publication in scholarly, peer-reviewed journals.

Research published in scholarly journals is held to a high standard.  It must make a credible and significant contribution to the discipline.  To ensure a very high level of quality, articles that are submitted to scholarly journals undergo a process called peer-review.

Once an article has been submitted for publication, it is reviewed by other independent, academic experts (at least two) in the same field as the authors.  These are the peers.  The peers evaluate the research and decide if it is good enough and important enough to publish.  Usually there is a back-and-forth exchange between the reviewers and the authors, including requests for revisions, before an article is published. 

Peer review is a rigorous process but the intensity varies by journal.  Some journals are very prestigious and receive many submissions for publication.  They publish only the very best, most highly regarded research. 

The terms scholarly, academic, peer-reviewed and refereed are sometimes used interchangeably, although there are slight differences.

Scholarly and academic may refer to peer-reviewed articles, but not all scholarly and academic journals are peer-reviewed (although most are). For example, the Harvard Business Review is an academic journal, but it is editorially reviewed, not peer-reviewed.

Peer-reviewed and refereed are identical terms.

From  Peer Review in 3 Minutes  [Video], by the North Carolina State University Library, 2014, YouTube (https://youtu.be/rOCQZ7QnoN0).

Peer reviewed articles can include:

  • Original research (empirical studies)
  • Review articles
  • Systematic reviews
  • Meta-analyses

There is much excellent, credible information in existence that is NOT peer-reviewed.  Peer-review is simply ONE MEASURE of quality. 

Much of this information is referred to as "gray literature."

Government Agencies

Government websites such as the Centers for Disease Control (CDC) publish high level, trustworthy information.  However, most of it is not peer-reviewed.  (Some of their publications are peer-reviewed, however. The journal Emerging Infectious Diseases, published by the CDC is one example.)

Conference Proceedings

Papers from conference proceedings are not usually peer-reviewed.  They may go on to become published articles in a peer-reviewed journal. 

Dissertations

Dissertations are written by doctoral candidates, and while they are academic they are not peer-reviewed.

Many students like Google Scholar because it is easy to use.  While the results from Google Scholar are generally academic they are not necessarily peer-reviewed.  Typically, you will find:

  • Peer reviewed journal articles (although they are not identified as peer-reviewed)
  • Unpublished scholarly articles (not peer-reviewed)
  • Masters theses, doctoral dissertations and other degree publications (not peer-reviewed)
  • Book citations and links to some books (not necessarily peer-reviewed)
  • Last Updated: Feb 12, 2024 9:39 AM
  • URL: https://libguides.regiscollege.edu/peer_review

Explainer: what is peer review?


Professor of Organisational Behaviour, Cass Business School, City, University of London


Novak Druce Research Fellow, University of Oxford

Disclosure statement

Thomas Roulet does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

Andre Spicer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

City, University of London provides funding as a founding partner of The Conversation UK.

University of Oxford provides funding as a member of The Conversation UK.

View all partners


We’ve all heard the phrase “peer review” as giving credence to research and scholarly papers, but what does it actually mean? How does it work?

Peer review is one of the gold standards of science. It’s a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already knew.

Most scientific journals, conferences and grant applications have some sort of peer review system. In most cases it is “double blind” peer review. This means evaluators do not know the author(s), and the author(s) do not know the identity of the evaluators. The intention behind this system is to ensure evaluation is not biased.

The more prestigious the journal, conference, or grant, the more demanding will be the review process, and the more likely the rejection. This prestige is why these papers tend to be more read and more cited.

The process in details

The peer review process for journals involves at least three stages.

1. The desk evaluation stage

When a paper is submitted to a journal, it receives an initial evaluation by the chief editor, or an associate editor with relevant expertise.

At this stage, either can “desk reject” the paper: that is, reject the paper without sending it to blind referees. Generally, papers are desk rejected if the paper doesn’t fit the scope of the journal or there is a fundamental flaw which makes it unfit for publication.

In this case, the rejecting editor might write a letter summarising his or her concerns. Some journals, such as the British Medical Journal, desk reject up to two-thirds or more of the papers.

2. The blind review

If the editorial team judges there are no fundamental flaws, they send it for review to blind referees. The number of reviewers depends on the field: in finance there might be only one reviewer, while journals in other fields of the social sciences might ask up to four reviewers. Those reviewers are selected by the editor on the basis of their expert knowledge and the absence of any link with the authors.

Reviewers will decide whether to reject the paper, to accept it as it is (which rarely happens) or to ask for the paper to be revised. This means the author needs to change the paper in line with the reviewers’ concerns.

Usually the reviews deal with the validity and rigour of the empirical method, and the importance and originality of the findings (what is called the “contribution” to the existing literature). The editor collects those comments, weights them, takes a decision, and writes a letter summarising the reviewers’ and his or her own concerns.

It can therefore happen that despite hostility on the part of the reviewers, the editor could offer the paper a subsequent round of revision. In the best journals in the social sciences, 10% to 20% of the papers are offered a “revise-and-resubmit” after the first round.

3. The revisions – if you are lucky enough

If the paper has not been rejected after this first round of review, it is sent back to the author(s) for a revision. The process is repeated as many times as necessary for the editor to reach a consensus point on whether to accept or reject the paper. In some cases this can last for several years.

Ultimately, less than 10% of the submitted papers are accepted in the best journals in the social sciences. The renowned journal Nature publishes around 7% of the submitted papers.
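These figures compose multiplicatively down the funnel. A back-of-the-envelope check, using illustrative rates picked from the ranges quoted in this article (these are assumptions, not measured data):

```python
# Rough arithmetic on the acceptance funnel described in this article.
# The three rates are illustrative values chosen from the quoted ranges,
# not measured figures.
desk_reject_rate = 0.60       # "up to two-thirds" desk rejected
revise_resubmit_rate = 0.20   # "10% to 20%" offered revise-and-resubmit
eventual_accept_rate = 0.80   # assumed share of R&R papers finally accepted

overall = (1 - desk_reject_rate) * revise_resubmit_rate * eventual_accept_rate
print(f"overall acceptance ≈ {overall:.1%}")
```

With these assumed rates, the overall acceptance comes out near 6%, consistent with the "less than 10%" figure quoted above.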

Strengths and weaknesses of the peer review process

The peer review process is seen as the gold standard in science because it ensures the rigour, novelty, and consistency of academic outputs. Typically, through rounds of review, flawed ideas are eliminated and good ideas are strengthened and improved. Peer reviewing also ensures that science is relatively independent.

Because scientific ideas are judged by other scientists, the crucial yardstick is scientific standards. If other people from outside of the field were involved in judging ideas, other criteria such as political or economic gain might be used to select ideas. Peer reviewing is also seen as a crucial way of removing personalities and bias from the process of judging knowledge.

Despite its undoubted strengths, the peer review process as we know it has been criticised. It involves a number of social interactions that might create biases – for example, authors might be identified by reviewers if they are in the same field, and desk rejections are not blind.

It might also favour incremental (adding to past research) rather than innovative (new) research. Finally, reviewers are human after all and can make mistakes, misunderstand elements, or miss errors.

Are there any alternatives?

Defenders of the peer review system say although there are flaws, we’re yet to find a better system to evaluate research. However, a number of innovations have been introduced in the academic review system to improve its objectivity and efficiency.

Some new open-access journals (such as PLOS ONE) publish papers with very little evaluation (they check the work is not deeply flawed methodologically). The focus there is on the post-publication peer review system: all readers can comment on and criticise the paper.

Some journals, such as Nature, have made part of the review process public (“open” review), offering a hybrid system in which peer review plays the role of primary gatekeeper, while the public community of scholars judges the value of the research in parallel (or, at some other journals, afterwards).

Another idea is to have a set of reviewers rating the paper each time it is revised. In this case, authors will be able to choose whether they want to invest more time in a revision to obtain a better rating, and get their work publicly recognised.


what is meant by peer review in research publication

Senior Lecturer - Earth System Science

what is meant by peer review in research publication

Strategy Implementation Manager

what is meant by peer review in research publication

Sydney Horizon Educators (Identified)

what is meant by peer review in research publication

Deputy Social Media Producer

what is meant by peer review in research publication

Associate Professor, Occupational Therapy

Libraries & Cultural Resources

Research guides, academic publishing demystified.


What is peer review?


Until you've been through it at least once, the review process can be confusing and opaque. Although there is a great deal of variation in how peer review works at different publication venues, the resources on this page can help you "see inside" the process before you submit. Resources on this page focus on scholarly journal publishing; other venues (e.g. university presses) will use slightly different processes.

Review process:

[Infographic: the peer review process]

This image is an adaptation of Types of Review by Jessica Lange from McGill Library and is used under a Creative Commons CC BY 4.0 International license.


Editorial Review

When you submit an article for publication, the first person to look at it is typically a member of the journal's editorial staff. These staff assess whether the article is within the scope of the journal (topic, length, format, etc.) and whether it is of sufficient quality to warrant peer review.

Sometimes, certain sections of publications are subject only to editorial review. This is more common for non-research articles such as reviews, commentary, letters, etc.

Peer Review

Experts in the subject area of your article will review your article and provide feedback on it. Depending on the journal and the availability of reviewers, it is typical for one to three external experts to review your paper.

There are a number of different types of peer review. It's good to know what type your target publication uses; this information should be on the publication's website. The list below outlines some of the types.

  • Single anonymous*: the reviewers know who the authors are, but the authors do not know who reviewed their work.
  • Double anonymous*: both the authors and the reviewers remain anonymous to each other.
  • Open: the identities of both the authors and the reviewers are disclosed.

* These terms were formerly referred to as single-blind and double-blind. Anonymous is now the preferred term.

Source: PKP School, Different Types of Peer Review.

Peer Review: Your Questions Answered

Can I submit a work to more than one publication at a time? No. This rule protects the labour involved in the review process. Only if you are rejected by (or withdraw from) one publication can you submit the work to another.

How do I ensure that my work is anonymized prior to submitting? If you're submitting to a venue that uses double anonymous peer review, you will be directed to anonymize your manuscript. Remove any references to yourself or to things that could identify you:

  • References to your own works in the text: (Author, 2021).
  • Bibliography: Author. 2021.
  • Any roles, collaborators, institutions, etc. 

You will also have to anonymize the 'hidden' metadata of your paper in the word processor you use.
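The guide doesn't prescribe a tool for scrubbing that hidden metadata, but because a .docx file is a ZIP package whose author fields live in the docProps/core.xml entry, the idea can be sketched in Python using only the standard library. The function name and the exact fields touched here are illustrative assumptions; your word processor's built-in document inspector performs the same job.

```python
import re
import zipfile

def anonymize_docx_metadata(src_path: str, dst_path: str) -> None:
    """Copy a .docx file, blanking the author-related fields.

    A .docx is a ZIP package; the creator and last-modified-by names
    are stored in the docProps/core.xml entry.
    """
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "docProps/core.xml":
                text = data.decode("utf-8")
                # Empty the <dc:creator> and <cp:lastModifiedBy> elements.
                for tag in ("dc:creator", "cp:lastModifiedBy"):
                    text = re.sub(
                        rf"(<{tag}[^>]*>).*?(</{tag}>)", r"\1\2", text, flags=re.S
                    )
                data = text.encode("utf-8")
            dst.writestr(item, data)
```

Note that this only blanks two well-known fields; check the resulting file with your submission system's preview, since comments and tracked changes can also carry your name.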

Is there any guarantee that my work will be accepted? No. Even for experienced authors, it is the peer review process that decides whether or not a piece is accepted. It is normal for papers to be rejected multiple times, and for the whole process to take months or even years. This is why many authors will have more than one work in the publication "pipeline" at any one time.

How long does the peer review process take? It can be a long time! One study found that the average time from submission to acceptance is five months, with delays of over a year being common. If the process seems very slow, feel free to follow up with an editor. You are also free to withdraw your item and submit elsewhere if the timeline is too long.

How do I address the feedback of peer reviewers? You will be asked to make revisions. This is normal, and it's OK to have feelings (sad, grumpy, outraged) about this. Many experienced authors suggest waiting several hours or days before addressing revisions. You should address every revision made by reviewers, but you don't have to adopt every revision. Some authors find it very helpful to use a chart to respond to revisions; the example below shows the difference between addressing and adopting suggestions.

Access a Google doc version of this table that you can adapt for your own use.

The reviewers have asked for so many changes! Is this normal?! The volume, nature, and style of reviewer comments can vary hugely, but it is absolutely normal for authors, even very experienced ones, to receive a seemingly overwhelming amount of feedback. Using the techniques outlined above, including parsing each piece of feedback into a table, can help you manage the feedback.

I don't agree with the feedback I received. What do I do? Sometimes you may receive contradictory feedback, or a response that does not fit the aims of your paper. This may happen for a variety of reasons: perhaps that section of your work was unclear and the reviewer misunderstood you. Other times, the reviewer may not have accurate topic knowledge, or they were looking for a different type of paper altogether. Remember, you know your topic, and you can push back against inappropriate feedback as long as this is done respectfully. If feedback is very contradictory or just plain rude, you may wish to raise this issue with the editor. Unacceptable reviewer comments include use of swear words or profanity, discriminatory language or comments, and personal attacks on character or ability. You should contact the editor if you observe any of these kinds of remarks.

Acknowledgements

Some of the content on this page has been adapted from:

Scholarly Journal Publishing Guide by Jessica Lange, licensed under a Creative Commons Attribution 4.0 International License.

Introduction to Peer Review: Authors by Kate Cawthorn, licensed under a Creative Commons Attribution Share Alike 4.0 International License.

Unless otherwise noted, content in this guide is licensed under a Creative Commons Attribution 4.0 International License.

  • Last Updated: Apr 4, 2024 10:11 AM
  • URL: https://libguides.ucalgary.ca/publishing



What Is Peer Review? | Types & Examples

Published on 6 May 2022 by Tegan George. Revised on 2 September 2022.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Frequently asked questions about peer review

What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymised) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymised comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymised) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymised) review – where the identities of the author, reviewers, and editors are all anonymised – does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimises potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymise everyone involved in the process.

Collaborative peer review

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimise back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open peer review

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor can then either reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarise the argument in your own words

Summarising the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organised. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticised, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the ‘compliment sandwich’, where you ‘sandwich’ your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief annotated research example of the kind of manuscript excerpt a peer reviewer might comment on.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference ( p > .05) between the number of hours for Group 1 ( M = 7.8, SD = 0.6) and Group 2 ( M = 7.0, SD = 0.8). The second t test showed a significant difference ( p < .01) between the average difference for Group 1 ( M = 7.8, SD = 0.6) and Group 3 ( M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
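The independent t tests described in this example can be sketched in plain Python using the pooled two-sample t statistic computed from the reported summary statistics. The equal group sizes of 100 are an assumption (the example reports 300 teens split into 3 groups but not the exact split), and this sketch illustrates the form of the test rather than reproducing the example's exact p-values.

```python
from math import sqrt

def pooled_t_statistic(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic with a pooled variance estimate.

    m: group mean, s: group standard deviation, n: group size.
    Returns the t statistic and its degrees of freedom.
    """
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Group 1 (no phone use, M = 7.8, SD = 0.6) vs Group 3 (3 h of phone
# use, M = 6.1, SD = 1.5), assuming 100 teens per group.
t, df = pooled_t_statistic(7.8, 0.6, 100, 6.1, 1.5, 100)
print(f"t = {t:.2f}, df = {df}")
```

The resulting t statistic would then be compared against the t distribution with the given degrees of freedom to obtain the p-value; in practice a library routine such as SciPy's `ttest_ind_from_stats` handles that step.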

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarised or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Peer review is a process of evaluating submissions to an academic journal. Using rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the reviewers provide feedback and advise on the edits that should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

George, T. (2022, September 02). What Is Peer Review? | Types & Examples. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/research-methods/peer-reviews/



Holman Library


Research Guide: Scholarly Journals


Attribution & Thanks!

Permission to reuse content in this guide, namely regarding the peer-review process, was generously given by the librarians at the Lloyd Sealy Library at the John Jay College of Criminal Justice.

What is Peer-Review?

  • What is peer-review?
  • Features of a peer-reviewed article
  • Video: How do you find a peer-reviewed article?

What is the peer review process?

In academic publishing, the goal of peer review is to  assess the quality  of articles submitted for publication in a scholarly journal. Before an article is deemed appropriate to be published in a peer-reviewed journal, it must undergo the following process:

  • The author of the article must submit it to the journal editor who forwards the article to experts in the field. Because the reviewers specialize in the same scholarly area as the author, they are considered the author’s peers (hence “peer review”).
  • These impartial reviewers are charged with carefully evaluating the quality of the submitted manuscript.
  • The peer reviewers check the manuscript for accuracy and assess the validity of the research methodology and procedures.
  • If appropriate, they suggest revisions. If they find the article lacking in scholarly validity and rigor, they reject it.

Because a peer-reviewed journal will not publish articles that fail to meet the standards established for a given discipline, peer-reviewed articles that are accepted for publication exemplify the best research practices in a field.


Common elements of a peer-reviewed article

When you are determining whether or not the article you found is a peer-reviewed article, you should consider the following.

[Image: parts of a scholarly journal article highlighted, such as the authors and their credentials, the serious tone and in-depth coverage of the topic, and sections like an abstract, data, methods, discussion, and references]

Also consider...

  • Is the journal in which you found the article published or sponsored by a professional scholarly society, professional association, or university academic department? Does it describe itself as a peer-reviewed publication? (To know that, check the journal's website.)
  • Did you find a citation for it in one of the databases that include scholarly publications (Academic Search Complete, PsycINFO, etc.)? Read the database description to see if it includes scholarly publications.
  • In the database, did you limit your search to scholarly or peer-reviewed publications? (See video tutorial below for a demonstration.)
  • Is the topic of the article narrowly focused and explored in depth?
  • Is the article based on either original research or authorities in the field (as opposed to personal opinion)?
  • Is the article written for readers with some prior knowledge of the subject?
  • Does the article include the typical sections of a scholarly paper, such as an introduction, theory or background, and a literature review?

The easiest and fastest way to find peer-reviewed articles is to  search the online library databases , many of which include peer-reviewed journals. To make sure your results come from peer-reviewed (also called "scholarly" or "academic") journals, do the following:

Read the database description to determine if it features peer-reviewed articles. All of the GRC databases have short descriptions about what kinds of topics they cover and what types of articles they house. Many, if not most, of the databases house journal articles.

When you search for articles, choose the Advanced Search option. On the search screen, look for a check-box that allows you to limit your results to peer-reviewed only. Often, you can see the option to limit to peer-review as well as "full-text" in the advanced settings, or on the left-hand side of the database's results page.

Video: Peer Review in 3 Minutes

Source: "Peer Review in 3 Minutes" by libncsu, licensed under a Standard YouTube License.

  • Last Updated: Mar 15, 2024 1:18 PM
  • URL: https://libguides.greenriver.edu/scholarlyjournals

Computer Science Library Research Guide

What is peer review?

  • Last Updated: Feb 27, 2024 1:52 PM
  • URL: https://guides.library.harvard.edu/cs


University of Texas Libraries
Finding Journal Articles 101


What Does "Peer-reviewed" or "Refereed" Mean?

Peer review is a process that journals use to ensure the articles they publish represent the best scholarship currently available. When an article is submitted to a peer reviewed journal, the editors send it out to other scholars in the same field (the author's peers) to get their opinion on the quality of the scholarship, its relevance to the field, its appropriateness for the journal, etc.

Publications that don't use peer review (Time, Cosmo, Salon) just rely on the judgment of the editors whether an article is up to snuff or not. That's why you can't count on them for solid, scientific scholarship.

Note: This is an entirely different concept from "Review Articles."

How do I know if a journal publishes peer-reviewed articles?

Usually, you can tell just by looking. A scholarly journal is visibly different from other magazines, but occasionally it can be hard to tell, or you just want to be extra-certain. In that case, you turn to Ulrich's Periodical Directory Online . Just type the journal's title into the text box, hit "submit," and you'll get back a report that will tell you (among other things) whether the journal contains articles that are peer reviewed, or, as Ulrich's calls it, Refereed.

Remember, even journals that use peer review may have some content that does not undergo peer review. The ultimate determination must be made on an article-by-article basis.

For example, the journal Science publishes a mix of peer-reviewed and non-peer-reviewed content. Here are two articles from the same issue of Science.

This one is not peer-reviewed: https://science-sciencemag-org.ezproxy.lib.utexas.edu/content/303/5655/154.1

This one is a peer-reviewed research article: https://science-sciencemag-org.ezproxy.lib.utexas.edu/content/303/5655/226

That is consistent with the Ulrichsweb description of Science, which states, "Provides news of recent international developments and research in all fields of science. Publishes original research results, reviews and short features."

Test these periodicals in Ulrich's:

  • Advances in Dental Research
  • Clinical Anatomy
  • Molecular Cancer Research
  • Journal of Clinical Electrophysiology
  • Last Updated: Aug 28, 2023 9:25 AM
  • URL: https://guides.lib.utexas.edu/journalarticles101



What are Peer-Reviewed Journals?



Additional Resources

  • What are Peer-reviewed Articles and How Do I Find Them? From Capella University Libraries

Introduction

Peer-reviewed journals (also called scholarly or refereed journals) are a key information source for your college papers and projects. They are written by scholars for scholars and are a reliable source of information on a topic or discipline. These journals can be found either in the library's online databases or in the library's local holdings. This guide will help you identify whether a journal is peer-reviewed and show you tips on finding them.


What is Peer-Review?

Peer-review is a process where an article is verified by a group of scholars before it is published.

When an author submits an article to a peer-reviewed journal, the editor sends the article out to a group of scholars in the related field (the author's peers). They review the article, making sure that its sources are reliable, that the information it presents is consistent with the research, etc. Only after they give the article their "okay" is it published.

The peer-review process makes sure that only quality research is published: research that will further the scholarly work in the field.

When you use articles from peer-reviewed journals, someone has already reviewed the article and said that it is reliable, so you don't have to take the steps to evaluate the author or his/her sources. The hard work is already done for you!

Identifying Peer-Review Journals

If you have the physical journal, you can look for the following features to identify if it is peer-reviewed.

Masthead (the first few pages): includes information on the submission process, the editorial board, and maybe even a phrase stating that the journal is "peer-reviewed."

Publisher: Peer-reviewed journals are typically published by professional organizations or associations (like the American Chemical Society). They also may be affiliated with colleges/universities.

Graphics:  Typically there either won't be any images at all, or the few charts/graphs are only there to supplement the text information. They are usually in black and white.

Authors: The authors are listed at the beginning of the article, usually with information on their affiliated institutions, or contact information like email addresses.

Abstracts: At the beginning of the article the authors provide an extensive abstract detailing their research and any conclusions they were able to draw.

Terminology:  Since the articles are written by scholars for scholars, they use uncommon terminology specific to their field and typically do not define the words used.

Citations: At the end of each article is a list of citations/references. These are provided so scholars can double-check the work, and to help scholars who are researching in the same general area.

Advertisements: Peer-reviewed journals rarely have advertisements. If they do the ads are for professional organizations or conferences, not for national products.

Identifying Articles from Databases

When you are looking at an article in an online database, identifying that it comes from a peer-reviewed journal can be more difficult. You do not have access to the physical journal to check areas like the masthead or advertisements, but you can use some of the same basic principles.

Points you may want to keep in mind when you are evaluating an article from a database:

  • A lot of databases provide you with the option to limit your results to only those from peer-reviewed or refereed journals. Choosing this option means all of your results will be from those types of sources.  
  • When possible, choose the PDF version of the article's full text. Since this is exactly as if you photocopied from the journal, you can get a better idea of its layout, graphics, advertisements, etc.  
  • Even in an online database you still should be able to check for author information, abstracts, terminology, and citations.
  • Last Updated: Dec 12, 2023 4:06 PM
  • URL: https://libguides.bridgewater.edu/c.php?g=945314


  • 16 April 2024

Structure peer review to make it more robust


Mario Malički

Mario Malički is associate director of the Stanford Program on Research Rigor and Reproducibility (SPORR) and co-editor-in-chief of the Research Integrity and Peer Review journal.


In February, I received two peer-review reports for a manuscript I’d submitted to a journal. One report contained 3 comments, the other 11. Apart from one point, all the feedback was different. It focused on expanding the discussion and some methodological details — there were no remarks about the study’s objectives, analyses or limitations.

My co-authors and I duly replied, working under two assumptions that are common in scholarly publishing: first, that anything the reviewers didn’t comment on they had found acceptable for publication; second, that they had the expertise to assess all aspects of our manuscript. But, as history has shown, those assumptions are not always accurate (see Lancet 396, 1056; 2020). And through the cracks, inaccurate, sloppy and falsified research can slip.

As co-editor-in-chief of the journal Research Integrity and Peer Review (an open-access journal published by BMC, which is part of Springer Nature), I’m invested in ensuring that the scholarly peer-review system is as trustworthy as possible. And I think that to be robust, peer review needs to be more structured. By that, I mean that journals should provide reviewers with a transparent set of questions to answer that focus on methodological, analytical and interpretative aspects of a paper.

For example, editors might ask peer reviewers to consider whether the methods are described in sufficient detail to allow another researcher to reproduce the work, whether extra statistical analyses are needed, and whether the authors’ interpretation of the results is supported by the data and the study methods. Should a reviewer find anything unsatisfactory, they should provide constructive criticism to the authors. And if reviewers lack the expertise to assess any part of the manuscript, they should be asked to declare this.
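To make the idea concrete, here is a minimal sketch of how a journal platform might represent such a structured question set and a reviewer's answers. The question wording is adapted from the examples above; the function and field names (`collect_review`, `satisfactory`, `can_assess`) are hypothetical, not any publisher's actual schema.

```python
# Illustrative sketch only: structured peer-review questions as data.
# Question wording adapted from the article; all names are hypothetical.

STRUCTURED_QUESTIONS = [
    "Are the methods described in sufficient detail for another researcher to reproduce the work?",
    "Are extra statistical analyses needed?",
    "Is the authors' interpretation of the results supported by the data and the study methods?",
]

def collect_review(answers):
    """Pair each structured question with the reviewer's answer.

    `answers` maps a question index to a dict like
    {"satisfactory": bool, "comment": str, "can_assess": bool}.
    An unanswered question defaults to a declared lack of expertise,
    mirroring the suggestion that reviewers state what they could not assess.
    """
    report = []
    for i, question in enumerate(STRUCTURED_QUESTIONS):
        answer = answers.get(i, {"satisfactory": None, "comment": "", "can_assess": False})
        report.append({"question": question, **answer})
    return report
```

The design point is that every question appears in every report, so silence is recorded as "could not assess" rather than read as tacit approval.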


Other aspects of a study, such as novelty, potential impact, language and formatting, should be handled by editors, journal staff or even machines, reducing the workload for reviewers.

The list of questions reviewers will be asked should be published on the journal’s website, allowing authors to prepare their manuscripts with this process in mind. And, as others have argued before, review reports should be published in full. This would allow readers to judge for themselves how a paper was assessed, and would enable researchers to study peer-review practices.

To see how this works in practice, since 2022 I’ve been working with the publisher Elsevier on a pilot study of structured peer review in 23 of its journals, covering the health, life, physical and social sciences. The preliminary results indicate that, when guided by the same questions, reviewers made the same initial recommendation about whether to accept, revise or reject a paper 41% of the time, compared with 31% before these journals implemented structured peer review. Moreover, reviewers’ comments were in agreement about specific parts of a manuscript up to 72% of the time (M. Malički and B. Mehmani, preprint at bioRxiv, https://doi.org/mrdv; 2024). In my opinion, reaching such agreement is important for science, which proceeds mainly through consensus.
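The headline figure here is a simple percent-agreement statistic: out of all manuscripts reviewed by two referees, the share where both gave the same initial recommendation. A minimal sketch of that computation, using invented sample data rather than the study's:

```python
# Sketch of percent agreement between paired reviewer recommendations.
# The sample pairs below are invented for illustration only.

def percent_agreement(pairs):
    """Share (in %) of manuscripts whose two reviewers gave the same recommendation."""
    if not pairs:
        return 0.0
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

pairs = [
    ("accept", "accept"),   # agree
    ("revise", "reject"),   # disagree
    ("revise", "revise"),   # agree
    ("reject", "accept"),   # disagree
]
percent_agreement(pairs)  # 50.0 for this toy sample
```

Note that raw percent agreement does not correct for chance agreement; studies of inter-rater reliability often also report a chance-corrected statistic such as Cohen's kappa.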


I invite editors and publishers to follow in our footsteps and experiment with structured peer reviews. Anyone can trial our template questions (see go.nature.com/4ab2ppc ), or tailor them to suit specific fields or study types. For instance, mathematics journals might also ask whether referees agree with the logic or completeness of a proof. Some journals might ask reviewers if they have checked the raw data or the study code. Publications that employ editors who are less embedded in the research they handle than are academics might need to include questions about a paper’s novelty or impact.

Scientists can also use these questions, either as a checklist when writing papers or when they are reviewing for journals that don’t apply structured peer review.

Some journals — including Proceedings of the National Academy of Sciences, the PLOS family of journals, F1000 journals and some Springer Nature journals — already have their own sets of structured questions for peer reviewers. But, in general, these journals do not disclose the questions they ask, and do not make their questions consistent. This means that core peer-review checks are still not standardized, and reviewers are tasked with different questions when working for different journals.

Some might argue that, because different journals have different thresholds for publication, they should adhere to different standards of quality control. I disagree. Not every study is groundbreaking, but scientists should view quality control of the scientific literature in the same way as quality control in other sectors: as a way to ensure that a product is safe for use by the public. People should be able to see what types of check were done, and when, before an aeroplane was approved as safe for flying. We should apply the same rigour to scientific research.

Ultimately, I hope for a future in which all journals use the same core set of questions for specific study types and make all of their review reports public. I fear that a lack of standard practice in this area is delaying the progress of science.

Nature 628, 476 (2024)

doi: https://doi.org/10.1038/d41586-024-01101-9


Competing Interests

M.M. is co-editor-in-chief of the Research Integrity and Peer Review journal that publishes signed peer review reports alongside published articles. He is also the chair of the European Association of Science Editors Peer Review Committee.



Psychology Research: What is Peer Review?


Peer Review

"Peer review" (or "refereed") means that an article is reviewed by experts in that field before the article gets published. This means that if a scientist writes an article on stem cells, other experts on stem cells will review the article to make sure it’s of high enough quality to be published.  The peer review or referee process ensures that the research described in a journal's articles is sound and of high quality.

Types of Peer Review 

Single-blind

  • Reviewers know who wrote the article. 

Double-blind

  • Reviewers do not know who wrote the article. 
  • Designed to increase objectivity in the review process. 

Searching for Peer-Reviewed Articles

Many databases available through GALILEO give you the option to search for only peer reviewed items.  

For example: 

  • Enter your search terms (keywords)
  • Click on the box beside Scholarly (Peer Reviewed) Journals
  • You can also select the option for peer reviewed from your results screen
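Conceptually, the Scholarly (Peer Reviewed) checkbox just filters the result set on a peer-reviewed flag attached to each record. A minimal sketch of that idea, using an invented record structure (the field names are hypothetical, not a real database schema):

```python
# Illustration of what a "peer reviewed only" limiter does conceptually:
# keep only records whose source journal is flagged as peer-reviewed.
# The records and their fields are invented for this example.

records = [
    {"title": "Stem cell differentiation study", "peer_reviewed": True},
    {"title": "Popular magazine feature", "peer_reviewed": False},
    {"title": "Cohort study of sleep and memory", "peer_reviewed": True},
]

peer_reviewed_only = [r for r in records if r["peer_reviewed"]]
```

The filter narrows the result list but, as the guide notes, the flag describes the journal, not the individual item, so non-article content can still slip through.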

[Screenshot: the Advanced Search screen in Academic Search Complete, with the Scholarly (Peer Reviewed) Journals option, circled in red, near the bottom of the screen.]

Using Ulrich's to Check Peer Review Status

  • Ulrich's Periodicals Directory (Ulrichsweb): search Ulrich's to see whether a journal is refereed (peer-reviewed).
  • Search for the journal title, not the article title. If you are unsure which title is the journal title, ask a librarian.
  • Click the magnifying glass icon to run the search.
  • Some journals have very similar names; librarians can help you find the correct journal.
  • Click on the journal title to see more information. Look for Refereed under the Basic Description; if it says Yes, the journal is peer-reviewed.
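Ulrich's requires a library subscription. As a free programmatic complement (not a replacement: this is a swapped-in tool, not part of the guide above), journal metadata can be looked up by ISSN through the public Crossref REST API's journals endpoint; note that Crossref does not record refereed status, so it helps you confirm a journal's identity and publisher rather than its peer-review status. The helper function below is hypothetical.

```python
# Sketch: build the Crossref REST API journals-endpoint URL for an ISSN.
# Crossref's endpoint is https://api.crossref.org/journals/{ISSN}; fetching
# it returns JSON metadata (title, publisher, subjects) but NOT whether the
# journal is refereed. `crossref_journal_url` is a hypothetical helper name.

def crossref_journal_url(issn: str) -> str:
    """Return the Crossref journals endpoint URL for the given ISSN."""
    return f"https://api.crossref.org/journals/{issn}"

crossref_journal_url("0028-0836")  # Nature's print ISSN
```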

[Screenshot from Ulrich's: a referee-shirt icon, circled in red, marks a title that is refereed (peer-reviewed); a corresponding blank space marks a title that is not refereed.]

  • Refereed is the same as peer-reviewed. 
  • When a journal is peer-reviewed, it means that most of the articles published in it are peer-reviewed. Other content, such as editorials, letters to the editor, and responses to previously published articles, is not peer-reviewed.

Peer Review and Other Journal Information From Database

Some databases will tell you if the article is in a peer-reviewed journal. 

  • Click on the article title to see more information about the article.
  • In some databases you will see more information about the journal, including if it is peer reviewed.
  • In other databases, clicking on the journal title runs a search for all articles published in that title.

Journal Title Information:

[Screenshot: publication details for Sport Psychologist, listing ISSN, publisher information, bibliographic records, publication type (Academic Journal), subjects, description, publisher URL, frequency, and Peer Reviewed: Yes, circled in red at the bottom.]

  • Last Updated: Jan 11, 2024 3:37 PM
  • URL: https://libguides.valdosta.edu/psychologyresearch

Scholarly Publishing


Is your article scholarly?

What is "peer review", and is it the same as "scholarly"?

People often use "peer review" and "scholarly" interchangeably, but they aren't the same.

Peer review happens at the article level. A journal is peer reviewed if its articles* are all peer reviewed.

An article has been "peer reviewed" if it has been reviewed by a group of the article author's peers prior to that article being published. Articles need to pass this peer review process before they are published, and sometimes articles have to undergo multiple rounds of review, with the author being required to edit anything from their grammar, to tables portraying data, to the structure of the article.

*Most journals only peer-review the research articles they publish, and not the editorials, commentaries, and reviews that they also publish as part of each journal issue. These other types of publications are usually labeled within the journal, though not always in databases, and they are structurally different from research articles, usually lacking the introduction, methods, results, and discussion sections that research articles contain.

[Diagram: the manuscript submission and review process.]

How do you know if an article has been peer reviewed?

There is one way to check for sure:

  • Look up the peer review process of the journal that published this article. The journal website should have a section discussing its peer-review process and should also list the members of its editorial board (those "peers" who do the reviewing). It's a sign of a bad journal if it doesn't provide this info.

What databases have peer-reviewed articles?

In practice, most people assume that research articles listed in library databases are peer-reviewed.

This is usually a safe assumption; however, some databases, such as CINAHL, contain resources that are not peer-reviewed, like theses and news articles, though you can often filter non-articles out of your results.


But even then, the only way to be really sure is to look into the peer review process of each article's journal.


Scholarly-ness and levels of evidence.

Sometimes, in the health sciences and biomedical disciplines, "scholarly" means a certain level of evidence. Different types of research are considered to be higher or lower levels of evidence, and are sometimes arranged in a pyramid, called "the Pyramid of Evidence":

Image: by CFCF, licensed under Creative Commons CC BY-SA 4.0.

In the above pyramid, Meta-Analyses are considered the highest level of evidence, with Case Reports being the lowest. Some other pyramids place animal research on a level below that.

What is the catch to "is your article scholarly?"

Sometimes when instructors say "Find a scholarly article", what they mean is "Find primary or secondary research carried out by a qualified researcher".

Primary Research

Primary research (also known as original research) is a direct or first-hand account of research or an experience. In primary research, the author is usually the one who carried out and is reporting about their research. Randomized controlled trials, cohort studies, and case-control studies are all examples of primary research.

Secondary Research

Secondary research is a second-hand account. Usually, in secondary research, someone other than the original researcher is writing about the research. Meta-analyses and systematic reviews are secondary research because the authors collect existing research, summarize the findings, and report on them. It is a good idea to include both primary and secondary research in your study, and beginning with secondary research can give you a quick bird's-eye view of the current state of a field.

  • Last Updated: Apr 9, 2024 4:41 PM
  • URL: https://libguides.und.edu/scholarly_publishing



Peer Review and Scientific Publication at a Crossroads: Call for Research for the 10th International Congress on Peer Review and Scientific Publication

  • 1 Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California
  • 2 Department of Medicine, Stanford University School of Medicine, Stanford, California
  • 3 JAMA and the JAMA Network, Chicago, Illinois
  • 4 The BMJ , London, England

The way science is assessed, published, and disseminated has markedly changed since 1986, when the launch of a new Congress focused on the science of peer review was first announced. There have been 9 International Peer Review Congresses since 1989, typically running on an every-4-year cycle, and most recently in 2022 after a 1-year delay due to the COVID-19 pandemic.1 Here, we announce that the 10th International Congress on Peer Review and Scientific Publication will be held in Chicago, Illinois, on September 3-5, 2025.

The congresses have been enormously productive, incentivizing and publicizing important empirical work into how science is produced, evaluated, published, and disseminated.2-4 However, peer review and scientific publication are currently at a crossroads, and their future is more difficult than ever to predict. After decades of experience and research in these fields, we have learned a lot about a wide range of aspects of peer review and scientific publication.2-5 We have accumulated a large body of empirical evidence on how systems function and how they can malfunction. There is also growing evidence on how to make peer review, publication, and dissemination processes more efficient, fair, open, transparent, reliable, and equitable.6-15 Experimental randomized evaluations of peer review practices are only a small part of the literature, but their numbers have been growing since the early trials of anonymized peer review.16-22 Research has revealed a rapidly growing list of biases, inefficiencies, and threats to the trustworthiness of published research, some now well recognized, others deserving of more attention.2,3 Moreover, practices continue to change and diversify in response to new needs, tools, and technologies as well as the persistent “publish or perish” pressures on scientists-as-authors.

With the continued evolution of electronic platforms and tools—most recently the emergence and use of large language models and artificial intelligence (AI)—peer review and scientific publication are rapidly evolving to address new opportunities and threats.23,24 Moreover, a lot of money is at stake; scientific publishing is a huge market with one of the highest profit margins among all business enterprises, and it supports a massive biomedical and broader science economy. Many stakeholders try to profit from or influence the scientific literature in ways that do not necessarily serve science or enhance its benefits to society. The number of science journal titles and articles is steadily increasing25; many millions of scientists coauthor scientific papers, and perverse reward systems do not help improve the quality of this burgeoning corpus. Furthermore, principled mandates for immediate and open access to research and data may not be fully understood, accepted, or funded. Many other new, often disruptive, ideas abound on how to improve dissemination of and access to science, some more speculative, utopian, or self-serving than others. In addition, deceptive, rogue actors, such as predatory and pirate publishers, fake reviewers, and paper mills, continue to threaten the integrity of peer review and scientific publication. Careful testing of the many proposals to improve peer review and publication, and of interventions and processes to address threats to their integrity, in a rigorous and timely manner is essential to the future of science and the scholarly publishing enterprise.

Proposed remedies for several of the problems and biases have been evaluated,4 but many are untested or have inconclusive evidence for or against their use. New biases continue to appear (or at least to be recognized). In addition, there is tension about how exactly to correct the scientific literature, where a large share of what is published may not be replicable or is obviously false.26 Even outright fraud may be becoming more common—or may simply be recognized and reported more frequently than before.27,28

By their very nature, peer review and scientific publication practices are in a state of flux and may be unstable as they struggle to serve rapidly changing circumstances, technologies, and stakeholder needs and goals. Therefore, some unease would exist even in the absence of major perturbations, even if all the main stakeholders (authors, journals, publishers, funders) simply wanted to continue business as usual. However, the emergence of additional rapid changes further exacerbates the challenges, while also providing opportunities to improve the system at large. The COVID-19 crisis was one major quake that shook the way research is designed, conducted, evaluated, published, disseminated, and accessed.29,30 Advances in AI and large language models may be another, potentially even larger, seismic force, with some viewing the challenge posed by these new developments as another hyped tempest in a teapot and others believing them to be an existential threat to truth and all of humanity. Scientific publication should fruitfully absorb this energy.23,24 Research has never been needed more urgently to properly examine, test, and correct (in essence: peer review) scientific and nonscientific claims for the sake of humanity’s best interests. The premise of all Peer Review Congresses is that peer review and scientific publication must be properly examined, tested, and corrected in the same way the scientific method and its products are applied, vetted, weighted, and interpreted.2

The range of topics on which we encourage research to be conducted, presented, and discussed at the 10th International Congress on Peer Review and Scientific Publication expands what was covered by the 9 previous iterations of the congress (Box).1,2,4 We understand that new topics may yet emerge; 2 years until September 2025 is a relatively long period, during which major changes are possible, and even likely. Therefore, we encourage research in any area of work that may be relevant to peer review and scientific publication, including novel empirical investigations of processes, biases, policies, and innovations. The congress has the ambitious goal of covering all branches and disciplines of science. It is increasingly recognized that much can be learned by comparing experiences in research and review practices across different disciplines. While biomedical sciences have had the lion’s share in empirical contributions to research on peer review in the past, we want to help correct this imbalance. Therefore, we strongly encourage the contribution of work from all scientific disciplines, including the natural and physical sciences, social sciences, psychological sciences, economics, computer science, mathematics, and new emerging disciplines. Interdisciplinary work is particularly welcome.

Topics of Interest for the 10th International Congress on Peer Review and Scientific Publication

Efforts to avoid, manage, or account for bias in research methods, design, conduct, reporting, and interpretation

  • Publication and reporting bias
  • Bias on the part of researchers, authors, reviewers, editors, funders, commentators, influencers, disseminators, and consumers of scientific information
  • Interventions to address gender, race and ethnicity, geographic location, career stage, and discipline biases in peer review, publication, research dissemination, and impact
  • Improving and measuring diversity, equity, and inclusion of authors, reviewers, editors, and editorial board members
  • Motivational factors for bias related to rewards and incentives
  • New forms of bias introduced by wider use of large language models and other forms of artificial intelligence (AI)

Editorial and Peer Review Decision-Making

  • Assessment and testing of models of peer review and editorial decision-making and workflows used by journals, publishers, funders, and research disseminators
  • Evaluations of the quality, validity, and practicality of peer review and editorial decision-making
  • Challenges, new biases, and opportunities with mega-journals
  • Assessment of practices related to publication of special issues with guest editors
  • Economic and systemic evaluations of the peer review machinery and the related publishing business sector
  • Methods for ascertaining use of large language models and other forms of AI in authoring and peer review of scientific papers
  • AI in peer review and editorial decision-making
  • Quality assurance for reviewers, editors, and funders
  • Editorial policies and responsibilities
  • Editorial freedom and integrity
  • Peer review of grant proposals
  • Peer review of content for meetings
  • Editorial handling of science journalism
  • Role of journals as publishing venues vs peer review venues
  • COVID-19 pandemic and postpandemic effects

Research and Publication Ethics

  • Ethical concerns for researchers, authors, reviewers, editors, publishers, and funders
  • Authorship, contributorship, accountability, and responsibility for published material
  • Conflicts of interest (financial and nonfinancial)
  • Research and publication misconduct
  • Editorial nepotism or favoritism
  • Paper mills
  • Citation cartels, citejacking, and other manipulation of citations
  • Conflicts of interest among those who critique or criticize published research and researchers
  • Ethical review and approval of studies
  • Confidentiality considerations
  • Rights of research participants in scientific publication
  • Effects of funding and sponsorship on research and publication
  • Influence of external stakeholders: funders, journal owners, advertisers/sponsors, libraries, legal representatives, news media, social media, fact-checkers, technology companies, and others
  • Tools and software to detect wrongdoing, such as duplication, fraudulent manuscripts and reviewers, image manipulation, and submissions from paper mills
  • Corrections and retractions
  • Legal issues in peer review and correction of the literature
  • Evaluations of censorship in science
  • Intrusion of political and ideological agendas in scientific publishing
  • Science and scientific publication under authoritarian regimes

Improving Research Design, Conduct, and Reporting

  • Effectiveness of guidelines and standards designed to improve the design, conduct, and reporting of scientific studies
  • Evaluations of the methodological rigor of published information
  • Data sharing, transparency, reliability, and access
  • Research reanalysis, reproducibility, and replicability
  • Approaches for efficient and effective correction of errors
  • Curtailing citation and continued spread of retracted science
  • Innovations in best, fit-for-purpose methods and statistics, and ways to improve their appropriate use
  • Implementations of AI and related tools to improve research design, conduct, and reporting
  • Innovations to improve data and scientific display
  • Quality and reliability of data presentation and scientific images
  • Standards for multimedia and new content models for dissemination of science
  • Quality and effectiveness of new formats for scientific articles
  • Fixed articles vs evolving versions and innovations to support updating of scientific articles and reviews

Models for Peer Review and Scientific Publication

  • Single-anonymous, double-anonymous, collaborative, and open peer review
  • Pre–study conduct peer review
  • Open and public access
  • Preprints and prepublication posting and release of information
  • Prospective registration of research
  • Postpublication review, communications, and influence
  • Engaging statistical and other technical expertise in peer review
  • Evaluations of reward systems for authors, reviewers, and editors
  • Approaches to improve diversity, equity, and inclusion in peer review and publication
  • Innovations to address reviewer fatigue
  • Scientific information in multimedia and new media
  • Publication and performance metrics and usage statistics
  • Financial and economic models of peer-reviewed publication
  • Quality and influence of advertising and sponsored publication
  • Quality and effectiveness of content tagging, markup, and linking
  • Use of AI and software to improve peer review, decision-making, and dissemination of science
  • Practices of opportunistic, predatory, and pirate operators
  • Threats to scientific publication
  • The future of scientific publication

Dissemination of Scientific and Scholarly Information

  • New technologies and methods for improving the quality and efficiency of, and equitable access to, scientific information
  • Novel mechanisms, formats, and platforms to disseminate science
  • Funding and reward systems for science and scientific publication
  • Use of bibliometrics and alternative metrics to evaluate the quality and equitable dissemination of published science
  • Best practices for corrections and retracting fraudulent articles
  • Comparisons of and lessons from various scientific disciplines
  • Mapping of scientific methods and reporting practices and of meta-research across disciplines
  • Use and effects of social media
  • Misinformation and disinformation
  • Reporting, publishing, disseminating, and accessing science in emergency situations (pandemics, natural disasters, political turmoil, wars)

The congress is organized under the auspices of JAMA and the JAMA Network, The BMJ, and the Meta-research Innovation Center at Stanford (METRICS) and is guided by an international panel of advisors who represent diverse areas of science and of activities relevant to peer review and scientific publication.4 The abstract submission site is expected to open on December 1, 2024, with an anticipated deadline for abstract submission by January 31, 2025. Announcements will appear on the congress website (https://peerreviewcongress.org/).4

Corresponding Author: John P. A. Ioannidis, MD, DSc, Stanford Prevention Research Center, Stanford University, 1265 Welch Rd, MSOB X306, Stanford, CA 94305 ( [email protected] ).

Published Online: September 22, 2023. doi:10.1001/jama.2023.17607

Conflict of Interest Disclosures: All authors serve as directors or coordinators of the Peer Review Congress. Ms Flanagin reports serving as an unpaid board member for STM: International Association of Scientific, Technical, and Medical Publishers. Dr Bloom reports being a founder of medRxiv and a member of the Board of Managers of American Institute of Physics Publishing.

Additional Information: Drs Ioannidis and Berkwits are directors; Ms Flanagin, executive director; and Dr Bloom, European director and coordinator for the International Congress on Peer Review and Scientific Publication.

Note: This article is being published simultaneously in The BMJ and JAMA .

Ioannidis JPA, Berkwits M, Flanagin A, Bloom T. Peer Review and Scientific Publication at a Crossroads: Call for Research for the 10th International Congress on Peer Review and Scientific Publication. JAMA. 2023;330(13):1232–1235. doi:10.1001/jama.2023.17607


  • Research article
  • Open access
  • Published: 22 April 2024

What does it mean to be good at peer reviewing? A multidimensional scaling and cluster analysis study of behavioral indicators of peer feedback literacy

  • Yi Zhang (ORCID: orcid.org/0000-0001-7153-0955),
  • Christian D. Schunn &
  • Yong Wu

International Journal of Educational Technology in Higher Education, volume 21, Article number: 26 (2024)

Peer feedback literacy is becoming increasingly important in higher education as peer feedback has substantially grown as a pedagogical approach. However, the quality of produced feedback, a key behavioral aspect of peer feedback literacy, lacks a systematic and evidence-based conceptualization to guide research, instruction, and system design. We introduce a novel framework involving six conceptual dimensions of peer feedback quality that can be measured and supported in online peer feedback contexts: reviewing process, rating accuracy, feedback amount, perceived comment quality, actual comment quality, and feedback content. We then test the underlying dimensionality of student competencies through correlational analysis, multidimensional scaling, and cluster analysis, using data from 844 students engaged in online peer feedback in a university-level course. The separability of the conceptual dimensions is largely supported in the cluster analysis. However, the cluster analysis also suggests restructuring perceived and actual comment quality in terms of initial impact and ultimate impact. The multidimensional scaling suggests the dimensions of peer feedback can be conceptualized in terms of relative emphasis on expertise vs. effort and on overall review quality vs. individual comment quality. The findings provide a new road map for meta-analyses, empirical studies, and system design work focused on peer feedback literacy.

Introduction

Peer review, as a student-centered pedagogical approach, has become widely used in higher education (Gao et al., 2023; Kerman et al., 2024). In recent years, higher education research has begun to investigate peer feedback literacy (Dawson et al., 2023; Little et al., 2024; Nieminen & Carless, 2023). Peer feedback literacy refers to the capacity to comprehend, interpret, provide, and effectively utilize feedback in a peer review context (Dong et al., 2023; Man et al., 2022; Sutton, 2012). It supports learning processes by fostering critical thinking, enhancing interpersonal skills, and promoting active engagement in course groupwork (Hattie & Timperley, 2007). To date, conceptualizations of peer feedback literacy have primarily been informed by interview and survey data (e.g., Dong et al., 2023; Woitt et al., 2023; Zhan, 2022). These methods have provided valuable insights into learners' knowledge of and attitudes towards peer feedback. However, they have not generally examined the behavioral aspect of peer feedback literacy, especially the quality of the feedback that students with high feedback literacy produce (Gielen et al., 2010). Knowledge and attitudes do not always translate into effective action (Becheikh et al., 2010; Huberman, 1990), and the quality of feedback that students actually produce plays an important role in their learning from the process (Lu et al., 2023; Topping, 2023; Zheng et al., 2020; Zong et al., 2021a, b).

In order to make progress on behavioral indicators of peer feedback literacy, it is important to recognize a lack of agreement in the literature in defining the key aspects of "quality" of peer feedback. In fact, collectively, a large number of different conceptualizations and measures have been explored (Jin et al., 2022; Noroozi et al., 2022; Patchan et al., 2018; Tan & Chen, 2022), and their interrelationships have not been examined. Further, much of the literature to date has investigated peer feedback quality at the level of individual comments and ratings. Individual comments and ratings can be driven by characteristics of the object being studied and moment-to-moment fluctuations in attention and motivation, as well as by the feedback literacy of the reviewer. To understand the dimensionality of feedback literacy, investigations of reviewing quality must be conducted at the level of reviewers, not individual comments. For example, specific comment choices may have weak or even negative relationships because of alternative structures (i.e., a reviewer might choose between two commenting strategies in a given comment), but at the reviewer level the same elements might be positively correlated, reflecting more general attitudes or skills.

Integrating across many prior conceptualizations and empirical investigations, we propose a new conceptual framework that broadly encompasses many dimensions of reviewing quality. We then present an empirical investigation using multidimensional scaling and cluster analysis of the dimensionality of peer reviewing quality at the reviewer level (i.e., the behavioral component of peer feedback literacy), utilizing a large peer review dataset in a university-level course.

Literature review

While most studies of peer reviewing quality have tended to focus on one or two specific measures, a few authors considered peer reviewing quality more broadly. In building a tool for university computer science courses that automatically evaluates peer feedback quality, Ramachandran et al. ( 2017 ) proposed conceptualizing peer feedback quality in terms of six specific measures such as whether the feedback is aligned to the rubric dimensions, whether the feedback has a balanced tone, and whether the feedback was copied from another review. Since their focus was on tool building, they did not consider the dimensionality of the specific measures.

More recently, Zhang and Schunn ( 2023 ) proposed a five-dimensional conceptual framework for assessing the quality of peer reviews: accuracy, amount, impact, features, and content. The larger framework was not tested, and only a few specific measures were studied in university biology courses. Using a broader literature review, here we expand and refine this framework to include six dimensions: reviewing process, rating accuracy, amount, perceived comment quality, actual comment quality, and feedback content (see Table  1 ).

The first dimension, reviewing process , pertains to the varying methods students use while reviewing, which significantly affect feedback quality. This includes aspects like time devoted to reviewing or the drafting of comments. Studies conducted in a lab and on MOOCs found a positive correlation between efficient time management and improved review accuracy (Piech et al., 2013; Smith & Ratcliff, 2004). However, such easily-collected process measures may not accurately represent effective processes. For instance, time logged in an online system may not reflect actual working time. Indeed, another study found that spending slightly below-average time reviewing correlated with higher reliability (Piech et al., 2013). To address this concern, Xiong and Schunn (2021) focused on whether reviews were completed in extremely short durations (< 10 min) instead of measuring the total time spent on a review. Similarly, numerous revisions while completing a review could signify confusion rather than good process. Methods like eye-tracking (Bolzer et al., 2015) or think-aloud techniques (Wolfe, 2005) could provide additional measures related to peer reviewing processes.

The second dimension, rating accuracy , focuses on peer assessment and the alignment between a reviewer's ratings and a document's true quality. True document quality is ideally determined by expert ratings, but sometimes more indirect measures, like instructor or mean multi-peer ratings, are used. Across varied terms like error, validity, or accuracy, the alignment of peer ratings with document quality is typically quantified either by measuring agreement (i.e., distance from expert ratings—Li et al., 2016; Xiong & Schunn, 2021) or by measuring evaluator consistency (i.e., having similar rating patterns across documents and dimensions—Schunn et al., 2016; Tong et al., 2023; Zhang et al., 2020). Past studies typically focused on specific indicators without examining their interrelations or their relationship with other dimensions of peer reviewing quality.

The third dimension, amount , can pertain to one peer feedback component (i.e., the number or length of comments in a review) or broadly to peer review (i.e., the number of reviews completed). Conceptually, this dimension may be especially driven by motivation levels and attitudes towards peer feedback, but the amount produced can also reflect understanding and expertise (Zong et al., 2022 ). Within amount, a distinction has been made between frequency—defined by the number of provided comments or completed reviews as a kind of behavioral engagement (Zong et al., 2021b ; Zou et al., 2018 )—and comment length, indicating cognitive engagement and learning value (Zong et al., 2021a ). While comment length logically correlates with quality dimensions focused on the contents of a comment (i.e., adding explanations or potential solutions increases length), its associations with many other dimensions, like accuracy in ratings, reviewing process, or feedback content, remain unexplored.

The fourth dimension, perceived comment quality , focuses on various aspects of comments from the feedback recipient’s perspective; peer feedback is a form of communication, and recipients are well positioned to judge communication quality. This dimension may focus on the initial processing of the comment (e.g., was it understandable?; Nelson & Schunn, 2009 ) or its ultimate impact (e.g., was it accepted? was it helpful for revision? did the recipient learn something?; Huisman et al., 2018 ), typically measured using Likert scales. Modern online peer feedback systems used in university contexts often incorporate a step where feedback recipients rate the received feedback’s helpfulness (Misiejuk & Wasson, 2021 ). However, little research has explored the relation between perceived comment quality and other reviewing quality dimensions, especially at the grain size of a reviewer (e.g., do reviewers whose comments are seen as helpful tend to put more effort into reviewing, produce more accurate ratings, or focus on critical aspects of the document?).

The fifth dimension, actual comment quality , revolves around the comment's objective impact (e.g., is it implementable, and how is it processed by the recipient?) or concrete, structural elements influencing its impact (e.g., does it provide a solution, is the tone balanced, does it explain the problem?). This impact, or feedback uptake (Wichmann et al., 2018), typically pertains to the comment's utilization in revisions (Wu & Schunn, 2021b). However, as comments might be ignored for reasons unrelated to their content (Wichmann et al., 2018), some studies focus upon potential impact (Cui et al., 2021; Leijen, 2017; Liu & Sadler, 2003; Wu & Schunn, 2023). Another approach examines comment features likely to influence their impact, like the inclusion of explanations, suggestions, or praise (Lu et al., 2023; Tan & Chen, 2022; Tan et al., 2023; Wu & Schunn, 2021a). Most studies on actual comment quality have explored how students utilize received feedback (van den Bos & Tan, 2019; Wichmann et al., 2018; Wu & Schunn, 2023), with much less attention given to how actual comment quality is related to other dimensions of feedback quality, particularly at the level of feedback providers (e.g., do reviewers who provide more explanations give more accurate ratings?).

The last dimension, feedback content , shifts from the structure of the comment (e.g., was it said in a useful way?) to the semantic topic of the content (i.e., was the comment about the right content?). Content dimensions explored thus far include whether the review comments were aligned with the rubric provided by the instructor (Ramachandran et al., 2017 ), whether they covered the whole object being reviewed (Ramachandran et al., 2017 ), whether they attend to the most problematic issues in the document from an expert perspective (e.g., Gao et al., 2019 ), whether they focused on pervasive/global issues (Patchan et al., 2018 ) or higher-order writing issues (van den Bos & Tan, 2019 ) rather than sentence level issues, whether the comments were self-plagiarized or copied from other reviewers (Ramachandran et al., 2017 ), or whether multiple peers also referred to these same issues (Leijen, 2017 ), which indicates that many readers find it problematic. It is entirely possible that reviewers give many well-structured comments but generally avoid addressing the most central or challenging issues in a document perhaps because those require more work or intellectual risk (Gao et al., 2019 ). It could be argued that high peer feedback literacy involves staying focused on critical issues. However, it is unknown whether reviewers who tend to give well-structured comments when provided a focused rubric tend to give more accurate ratings or address critical issues in the documents they are reviewing.

The present study

In the current study, we seek to expand upon existing research on peer reviewing quality by examining its multidimensional structure, at the reviewer level, in essence developing behavioral dimensions of peer review literacy. This exploration is critical for theoretical and practical reasons: the dimensionality of peer reviewing quality is foundational to conceptualizations of peer feedback literacy, sampling plans for studies of peer feedback literacy, and interventions designed to improve peer feedback literacy.

To make it possible to study many dimensions and specific measures of peer feedback quality at once, we leverage an existing dataset involving a university-level course for which different studies have collectively developed measures and data for a wide range of reviewing quality constructs. We further add a few measures that can be efficiently computed using mathematical formulas. As a result, we are able to study five of the six dimensions (all but feedback content) through 18 specific measures. Our primary research question is: What is the interrelationship among different dimensions and measures of peer reviewing quality at the reviewer level? Specifically, we postulate that measures within each of the five dimensions—reviewing process, rating accuracy, amount of feedback, perceived comment quality, and actual comment quality—are strongly interconnected, and relatively more weakly connected across dimensions.

Participants

Participants were 844 students enrolled in an Advanced Placement course in writing at nine secondary schools distributed across the United States. Participants were predominantly female (59%; 4% did not report gender) and Caucasian (55%), followed by Asian (12%), African American (7%), and Hispanic/Latino (7%; 19% did not report their ethnicity). The mean age was 17 years ( SD  = 1.8).

The Advanced Placement (AP) course is a higher-education-level course aimed at advanced high school students who are ready for instruction at the higher education level, similar to cases in which advanced high school students attend a course at a local university. The course is typically taken by students only 1 year younger than first-year university students (the point at which this specific course is normally taken), and by students who are especially likely to go on to university and who want to earn credit for university-level courses to reduce their university degree time and costs. Since research on higher education enrollment and student behavior focuses on general level of proficiency rather than age, students in this course should be thought of as more similar to entry-level university students than to general high school students. Further, the course is designed and regulated by a national organization, the College Board, to be entirely equivalent to a university course in content and grading.

The AP English Language and Composition course focuses on argument and rhetorical elements of writing, equivalent to the first-year writing course that is required at most universities in the US (College Board, 2021). For this study of peer feedback within the course context, students at each school were taught by the same teacher and interacted online for peer feedback activities. Nine eligible teachers with experience teaching this AP course were recruited. The selected teachers met the following eligibility criteria: 1) they had previously taught the course; 2) they were teaching at least two sections of the course during the study period; 3) they agreed to participate in training on effective use of the online peer feedback approach and study requirements; 4) they were willing to assign a specific writing assignment to students and require peer feedback on that assignment using the online system; and 5) they collectively represented a diverse range of regions in the US and student demographics.

All data were collected via an online peer-reviewing system, Peerceptiv ( https://peerceptiv.com ; Schunn, 2016), a system predominantly used at the university level (Yu & Schunn, 2023). The system provided access to data organized by research IDs to protect student privacy, and the Human Research Protection Office at the University of Pittsburgh approved research on this data.

The task involved analyzing rhetorical strategies in a provided persuasive essay, with the specific prompt from a prior year’s end-of-year test. Students needed to: 1) submit their own document using a pseudonym; 2) review at least four randomly-assigned peer documents and rate document quality using seven 7-point rubrics, along with providing comments supported by seven corresponding comment prompts; 3) back-evaluate the helpfulness of received comments using a 5-point scale; and 4) submit a revised document. Half the students used an experimental version of the system that necessitated the use of a revision planning tool to indicate which received comments would be implemented in the revision and their priority, on a 3-point scale.

Measures of reviewing quality

This study examined 18 measures of peer reviewing quality in five categories (see Table 2), utilizing both simple mathematical calculations (like mean rating and word count) and labor-intensive hand-coding for comment content analysis. The hand-coding was aggregated from four prior studies (Wu & Schunn, 2020a, b, 2021a, b). This analysis introduces new elements: novel measures (priority, agreement measures, number of features), integration of measures not previously examined together, and an analysis of the data aggregated to the reviewer level. The detailed hand-coding processes are described in the prior publications. Here we give brief summaries of the measures and their coding reliabilities.

The amount and mean perceived comment quality measures were directly calculated by computer from the raw data. All the remaining measures involved data coded by a trained pool of four undergraduate research assistants and six writing experts (all with years of experience teaching writing and familiarity with the specific writing assignment and associated reviewing rubrics used in the study). A given measure was coded by either undergraduate assistants or experts, depending upon the level of expertise required. Artifacts were coded by two individuals to assess reliability; discrepancies were resolved through discussion to improve data quality. Coding on each dimension for both research assistants and experts involved a training phase in which coders iteratively coded a subset of artifacts and discussed discrepancies/revised coding manuals until acceptable levels of reliability were obtained.
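The reliability statistics reported for these hand-coded measures are Cohen's kappa, which corrects raw two-coder agreement for the agreement expected by chance. A minimal stdlib sketch of that statistic (the function name and example labels are illustrative, not drawn from the study's data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders: (observed - chance) / (1 - chance)."""
    n = len(codes_a)
    # Observed agreement: share of artifacts where the two coders match.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: expected matches given each coder's label frequencies.
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(count_a[label] * count_b[label] for label in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Because chance agreement is subtracted out, two coders who often match only because one label dominates the data earn a much lower kappa than their raw percentage agreement would suggest.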

Before all hand-coding procedures, comments were segmented into idea units by a research assistant if a given textbox included comments about two or more different issues, resulting in 24,816 comments. Then, given the focus of the writing assignment on learning complex elements of writing, comments about low-level writing issues (i.e., typos, spelling, grammar) were excluded from further coding and data analysis, resulting in 20,912 high-level comments.

Reviewing process

The duration of the review process was determined by the recorded time interval between the point at which a document assigned for review was downloaded and the point at which the completed review was submitted. Reviews completed in less than 10 min were likely rushed, given the need to attend to seven dimensions, which requires substantial time even for expert evaluators (Xiong & Schunn, 2021). Here we used the converse, Not speeded, so that higher values refer to positive feedback quality.
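The Not speeded flag follows directly from the two recorded timestamps; a minimal sketch, assuming a simple two-timestamp data layout (the function and variable names are illustrative; the 10-minute cutoff is the one attributed to Xiong & Schunn, 2021):

```python
from datetime import datetime, timedelta

THRESHOLD = timedelta(minutes=10)  # reviews shorter than this count as speeded

def not_speeded(downloaded_at: datetime, submitted_at: datetime) -> bool:
    """True when at least 10 minutes elapsed between downloading the
    assigned document and submitting the completed review."""
    return (submitted_at - downloaded_at) >= THRESHOLD
```
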

Rating accuracy

As a reminder, both students and experts rated the quality of the documents submitted for peer review on seven 1-to-7 scales. Accuracy was separately defined in terms of both rating agreement and rating consistency (Tong et al., 2023; Xiong & Schunn, 2021) and with respect to two standards: expert judgments and mean peer judgments. Expert judgments are considered the gold standard of validity, but mean peer judgments are often the only available standard in studies with very large datasets. In practice, expert ratings and mean peer ratings are often highly correlated (Li et al., 2016).

Expert agreement was calculated as the negated sum of the absolute differences between true document quality (assessed by the trained experts; kappa = 0.73) and each reviewer's judgment of document quality across the seven dimensions and documents. Peer agreement was calculated in the same way but used the mean ratings across the peers rather than the expert judgments. The negation was applied to the absolute error to create an accuracy measurement in which higher values indicated higher accuracy. A constant of 42 (the maximum possible error: a difference of 6 points across 7 dimensions) was then added to the negated absolute error so that most values sit between 0 and 42, with 42 reflecting perfect agreement.

The expert consistency was calculated as the linear correlation between true document quality (assessed by the trained experts) and each reviewer's judgment of document quality across the seven dimensions. The peer consistency was calculated in the same way, but again using mean ratings across the peers instead of expert ratings. Values logically could vary between -1 and 1 (though they were rarely negative), with higher values indicating higher accuracy.
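Both accuracy measures reduce to short formulas over the seven paired ratings: agreement is 42 minus the summed absolute error, and consistency is the Pearson correlation. A minimal sketch (variable names are illustrative; either expert ratings or mean peer ratings can be passed as the reference):

```python
def rating_agreement(reviewer, reference):
    """42 minus the summed absolute rating error across the seven
    1-to-7 dimensions; 42 = perfect match, 0 = maximal disagreement."""
    return 42 - sum(abs(r - t) for r, t in zip(reviewer, reference))

def rating_consistency(reviewer, reference):
    """Pearson correlation between a reviewer's ratings and the
    reference ratings across dimensions (-1 to 1)."""
    n = len(reviewer)
    mean_r = sum(reviewer) / n
    mean_t = sum(reference) / n
    cov = sum((r - mean_r) * (t - mean_t) for r, t in zip(reviewer, reference))
    sd_r = sum((r - mean_r) ** 2 for r in reviewer) ** 0.5
    sd_t = sum((t - mean_t) ** 2 for t in reference) ** 0.5
    return cov / (sd_r * sd_t)
```

The two measures can dissociate: a reviewer who rates every dimension exactly one point below the expert loses 7 points of agreement yet remains perfectly consistent.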

Amount

Students were assigned a fixed number of documents to review but sometimes did not complete all the required reviews and sometimes completed extra reviews. Within a review, students had to give at least one comment for each of the seven dimensions, but they could give more than one comment per dimension, and there was no required minimum or maximum length for a given comment. As a result, students could provide one or several comments, each consisting of a single word or several paragraphs. Prior research on peer feedback has found that comments involving more than 50 words typically include useful information for receivers (Wu & Schunn, 2020a) and tend to produce more learning for comment providers (Zong et al., 2022). Also, there may be a tradeoff in that students could submit fewer longer comments or more total comments. Thus, we also calculated the percentage of long comments: the total number of long comments (i.e., having more than 50 words) divided by the total number of comments. To capture the three main ways in which amount varied, we included the number of reviews completed for the peer assessment task ( #Reviews ), the mean number of comments ( #Comments ), and the percentage of long comments ( %Long comments ).
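At the reviewer level, the three amount measures are simple counts and ratios. A minimal sketch, assuming each reviewer's data arrives as a list of reviews that are themselves lists of comment strings (an illustrative layout, not the system's actual schema):

```python
def amount_measures(reviews, long_cutoff=50):
    """Return (#Reviews, mean #Comments per review, %Long comments)
    for one reviewer; a comment is 'long' above 50 words."""
    n_reviews = len(reviews)
    comments = [c for review in reviews for c in review]
    mean_comments = len(comments) / n_reviews
    pct_long = sum(len(c.split()) > long_cutoff for c in comments) / len(comments)
    return n_reviews, mean_comments, pct_long
```
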

Perceived comment quality

All students were required to judge the helpfulness of the comments they received on a 1-to-5 scale, and students using the experimental revision planning interface also had to select the priority with which they would implement each comment on a 1-to-3 scale. Both sources of data address perceived comment quality, with one involving a mixture of the value of comments for revision and for learning, and the other focusing exclusively on whether comments were useful for revision. Thus, two measures were created, one based on mean comment helpfulness and the other based on mean comment implementation priority.

Actual comment quality

The measures of actual comment quality were based upon hand-coding by the experts and trained research assistants. The first approach to actual comment quality focused on the usefulness of the comments. The experts coded feedback in terms of implementation in three ways: implementable (kappa = 0.92), implemented (kappa = 0.76), and improvement (kappa = 0.69). Implementable (N = 14,793) refers to whether a comment could be addressed in a revision (i.e., it was not pure praise or just a summary of the author's work). By contrast, implemented refers to whether the comment was incorporated in the submitted document revision: a change in the document was made that could be related to the provided comment (N = 11,252). Non-implementable comments were coded, by definition, as not implemented.

The improvement value of comments was coded by the experts for how much the comment could improve document quality (N = 1,758; kappa = 0.69). Two points were given when addressing a comment would measurably improve the document's quality on the given rubrics (e.g., moving from a 5 to a 7 on a scale). One point was awarded when addressing a comment could improve document quality in terms of the underlying rubric dimensions, but not by enough to produce a measurable change on the 7-point rubric scale. No points were given when addressing a comment would not improve document quality, would make the document worse, or would involve both improvements and declines (Wu & Schunn, 2020b). Improvement was only coded for implementable comments.

Another approach to actual comment quality focused on specific feedback features that typically are helpful for revision or learning (Jin et al., 2022; Tan & Chen, 2022; Wu & Schunn, 2020a). Research assistants coded the comments for whether they provided a specific solution (kappa = 0.76), gave a more general suggestion for how to address the problem but not an exact solution (kappa = 0.79), explicitly identified the problem (kappa = 0.81), and explained the problem (kappa = 0.80). Separate measures were created for each feature, calculated as the percentage of comments having each feature. There was also an aggregate features measure, calculated as the mean number of features contained in each comment (#Features).

Data analysis

Table 4 in the Appendix shows descriptive information for all the measures of peer reviewing quality at the reviewer level. Because of the different data sources, Ns varied substantially across measures. In addition, some of the measures tended to have relatively high means with negative skews, such as #Reviews, the rating agreement and rating accuracy measures, and helpfulness. Other measures had low means and positive skews, such as the specific comment features, %Implemented, and mean improvement.

The peer reviewing measures were first analyzed for reliability across reviews. Conceptually, this analysis examines whether reviewers tended to give reviews of similar quality on a given measure across the reviews they completed on an assignment. It is possible that the reviewing quality was heavily influenced by characteristics of the object being reviewed (e.g., it is easier to include solutions for weaker documents), and thus not a measure of peer feedback literacy. Other incidental factors such as order of the reviews or presence of a distraction could also have mattered, but those factors likely would influence the reliability of all the measures rather than just isolated measures.

Reliability was measured via an Intraclass Correlation Coefficient (ICC). There are many forms of ICC. In terms of the McGraw and Wong (1996) framework, we used ICC(k), which represents the agreement reliability (i.e., the level of deviation from the exact same rating) across k ratings (typically 4 in our data) using a one-way random analysis, because each reviewer was given different documents to review from a larger population of possible documents (Koo & Li, 2016). We used the Landis and Koch (1977) guidelines for interpreting the ICC values for the reliability of the measures: almost perfect for values above 0.80; substantial for values from 0.61 to 0.80; moderate for values from 0.41 to 0.60; fair for values from 0.21 to 0.40; slight for values from 0.01 to 0.20; and poor for values less than 0.
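Under the one-way random model, ICC(k) follows from a one-way ANOVA decomposition. A minimal sketch, assuming a complete reviewers-by-reviews score matrix (in practice, unequal numbers of completed reviews would need additional handling):

```python
import numpy as np

def icc_1k(scores):
    """ICC(1,k): one-way random, average-measures intraclass correlation
    in the McGraw and Wong (1996) framework.

    scores: array of shape (n_reviewers, k) holding one reviewing-quality
    measure, with one row per reviewer and one column per completed review.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)
    # One-way ANOVA mean squares: between reviewers and within reviewers.
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / ms_between
```

When every reviewer's reviews are identical (no within-reviewer variance), the within mean square is zero and the ICC is 1; greater within-reviewer variation pulls the value toward 0.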

Finally, to show the interrelationships among the variables, we conducted a three-step process: 1) pairwise correlation among all measures, with pairwise rather than listwise deletion given the high variability in measure Ns (see Figure 3 in the Appendix for sample sizes); 2) multidimensional scaling (MDS) applied to the correlation data to visualize the relative proximity of the measures; and 3) hierarchical cluster analysis applied to the correlation matrix to extract conceptual clusters of measures. We conducted the analyses in R: pairwise correlations using the "GGally" package, multidimensional scaling using the "magrittr" package, and hierarchical clustering using the "stats" package. For the correlational analysis, we applied both linear and rank correlations since some of the measures were strongly skewed. The two approaches produced similar results.
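Step 1 with pairwise deletion can be sketched in a few lines. This is an illustration in Python rather than the R packages named above; NaN encodes a missing value:

```python
import numpy as np

def pairwise_correlations(X):
    """Pearson correlation matrix with pairwise deletion: each pair of
    measures uses only the reviewers observed on both (NaN = missing).

    X: array of shape (n_reviewers, n_measures).
    """
    X = np.asarray(X, dtype=float)
    p = X.shape[1]
    R = np.full((p, p), np.nan)
    for i in range(p):
        for j in range(i, p):
            # Keep only rows where both measures are observed.
            both = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            if both.sum() > 1:
                R[i, j] = R[j, i] = np.corrcoef(X[both, i], X[both, j])[0, 1]
    return R
```

Pairwise deletion lets each correlation use all available data for that pair, which matters here because the Ns differ so much across measures; rank (Spearman) correlations can be obtained by ranking each pair's retained values first.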

Multidimensional scaling (MDS) is a statistical technique for visualizing and analyzing similarities or dissimilarities among variables in a dataset (Carroll & Arabie, 1998). While factor analysis is typically used to test or identify separable dimensions among many specific measures, MDS provides a useful visualization of the interrelationships of items, particularly when some items inherently straddle multiple dimensions. It also provides a useful visualization of the interrelationships of the dimensions rather than just of the items (Ding, 2006). The outcome of MDS is a "map" that represents these variables as points within a lower-dimensional space, typically two or three dimensions, while preserving the original distances between them as much as possible (Hout et al., 2013). In the current study, we chose two dimensions based on a scree plot of the eigenvalues associated with each MDS dimension (see Figure 4 in the Appendix): two dimensions offered a relatively good fit and are much easier to visualize. We expected measures within each conceptual dimension to sit close together on the MDS map.

Hierarchical cluster analysis, a general family of algorithms, is the dominant approach to grouping similar variables or data points based on their attributes or features (Murtagh & Contreras, 2017). It can accurately identify patterns within even small datasets (e.g., an 18 × 18 correlation matrix) because it leverages pairwise distances between all contributing measures. Further, it requires no assumptions about cluster shape, whereas other common algorithms like K-means assume that clusters are spherical and of similar size. However, we note that a K-means clustering algorithm produced similar clusters, so the findings are not heavily dependent upon the algorithm used. We expected to obtain the five clusters of dimensions proposed in Table 2.
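Steps 2 and 3 can be sketched together as classical (Torgerson) MDS plus average-linkage hierarchical clustering. The paper's analyses were run in R; this Python version, and the use of 1 − r as the distance between measures, are our assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def correlation_map(corr, labels, n_clusters=5):
    """Map a measure inter-correlation matrix to 2-D MDS coordinates and
    hierarchical cluster labels. corr: symmetric (p, p) correlation matrix.
    """
    corr = np.asarray(corr, dtype=float)
    dist = 1.0 - corr                     # highly correlated -> close together
    np.fill_diagonal(dist, 0.0)
    # Classical MDS: double-center the squared distances, then eigendecompose.
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    w, v = np.linalg.eigh(B)
    order = np.argsort(w)[::-1]           # largest eigenvalues first
    coords = v[:, order[:2]] * np.sqrt(np.maximum(w[order[:2]], 0))
    # Average-linkage hierarchical clustering on the same distances.
    z = linkage(squareform(dist, checks=False), method="average")
    clusters = fcluster(z, t=n_clusters, criterion="maxclust")
    return coords, dict(zip(labels, clusters))
```

Measures within a hypothesized dimension should then land both close together on the 2-D map and inside the same cluster.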

We first focus on the reliability of each peer reviewing quality measure (defined by agreement in values across completed reviews). As shown by the blue cells along the main diagonal in Fig. 1, the measures #Comments, %Long comments, and %Suggestions showed almost perfect reliability (0.81 to 0.95), and the rest of the measures of peer reviewing quality, except Improvement, showed moderate to substantial reliability (0.48 to 0.79). The Improvement measure showed only a slight level of reliability across reviews. It is possible that Improvement is primarily driven by the document, perhaps because some documents have limited potential for improvement, or because the scope for improvement relies heavily on the match between what the reviewer can perceive and the specific needs of the document. Taken together, all but one measure fell within the required range to be considered reliable, and results involving the Improvement measure may be inconsistent due to measurement noise.

figure 1

Measure reliability (diagonal cells and white font; / = NA) and linear inter-correlations (bold values for p < .05, italic values for non-significant values), organized by proposed peer feedback literacy dimension

The linear measure intercorrelations shown in Fig. 1 revealed that, except for Peer agreement, almost all measures were significantly and positively correlated with one another. Based on the patterns, one of the measures, %Long comments, was removed from the amount dimension in the analyses that follow. Focusing on the rating accuracy measures, except for the correlations of Peer agreement with Expert consistency and of Peer consistency with Expert agreement, all the correlations were positive and statistically significant. Further, the correlations with measures in other dimensions were often non-significant and always small: Peer agreement, max out-group = 0.18; Peer consistency, max out-group = 0.18; Expert agreement, max out-group = 0.31; and Expert consistency, max out-group = 0.26. The largest cross-dimension correlations occurred for the two expert accuracy measures with actual comment quality measures such as %Implementable and Improvement. The results supported treating these measures as one dimension, even though the intercorrelations within the dimension are relatively weak.

Turning to the amount dimension, we again note that %Long comments had only weak correlations with #Reviews and #Comments (r = 0.15 and r = 0.10) compared to the relationship between #Reviews and #Comments (r = 0.63). After removing %Long comments from the amount dimension, the in-group correlation (r = 0.63) was much higher than the out-group correlations (#Reviews, max out-group = 0.14; #Comments, max out-group = 0.20). Thus, the support for treating amount, comprising #Reviews and #Comments, as a dimension was strong.

The support for a perceived quality dimension, as originally defined, was weak. The two measures correlated with one another at only r  = 0.22. Correlations with measures in the amount and accuracy dimensions were also weak, but correlations with actual quality measures were often moderate. The results suggest some reorganization of the perceived and actual comment quality dimensions may be required.

Finally, the eight measures in the actual comment quality dimension were generally highly correlated with one another. Compared with out-group correlations, %Implementable (min in-group = 0.32 > max out-group = 0.31), %Implemented (min in-group = 0.41 > max out-group = 0.34), #Features (min in-group = 0.51 > max out-group = 0.39), and %Identifications (min in-group = 0.34 > max out-group = 0.25) were well nested in this group. However, some measures blurred somewhat with measures in the perceived comment quality dimension: Improvement (min in-group = 0.22 < max out-group = 0.28), %Solutions (min in-group = 0.22 < max out-group = 0.28), %Suggestions (min in-group = 0.34 = max out-group = 0.34), and %Explanations (min in-group = 0.34 < max out-group = 0.36). Overall, the correlation results revealed some overlap with perceived comment quality, particularly for %Solutions.

Further, to better understand the similarities among these measures, MDS and hierarchical cluster analysis were conducted based on measure intercorrelation data. The MDS results are shown in Fig.  2 . Conceptually, the y-axis shows reviewing quality measures reflecting effort near the bottom (e.g., #Reviews and #Comments ) and reviewing quality measures reflecting expertise near the top (e.g., the rating accuracy group and Improvement ). By contrast, the x-axis involves review-level measures to the left and comment-level measures to the right. This pattern within the intercorrelations of measures illustrates what can be learned from MDS but would be difficult to obtain from factor analysis.

figure 2

A map of peer feedback literacy based upon MDS and cluster analysis

The clustering algorithm produced five clusters, which are labeled and color-coded in Fig. 2. The five clusters were roughly similar to the originally hypothesized construct groups in Table 2, especially in treating rating accuracy, amount, and reviewing process as distinct from each other and from perceived/actual comment quality. However, perceived and actual comment quality did not separate as expected. In particular, %Long comments and %Solutions clustered together with helpfulness and priority. We call this new dimension Initial Impact, reflecting comment recipients' initial reactions to feedback (without having to consider the feedback in light of the document). The remaining measures, all proposed to be part of the actual comment quality dimension, clustered together. We propose calling this dimension Ultimate Impact, reflecting their closer alignment with actual improvements and the aspects of comments that are most likely to lead to successful revisions.

General discussion

Understanding the fundamental structure of peer feedback literacy from a behavioral/skills perspective, rather than a knowledge-and-attitudes perspective, was a fundamental goal of our study. With the support of online tools, peer feedback is increasingly being implemented across a wide range of educational levels, contexts, disciplines, course types, and student tasks. As a form of student-centered instruction, it has great potential to improve learning outcomes, but it also critically depends upon effective, full participation by students in their reviewing roles. Thus, it is increasingly important to fully conceptualize and develop methods for studying and supporting peer feedback literacy.

Our proposed framework sought to build a coherent understanding of peer reviewing quality in terms of six dimensions—reviewing process, rating accuracy, feedback amount, perceived comment quality, actual comment quality, and feedback content—offering a unified perspective on the scattered and fragmented notions of peer reviewing quality (Ramachandran et al., 2017 ; Yu et al., 2023 ). Consolidating the disparate measures from the literature into dimensions serves many purposes. For example, when university educators understand the intricacies of the reviewing process, they can provide clearer guidance and training to students, improving the quality of feedback provided. Similarly, understanding the dimensional structure can organize investigations of what dimensions are shaped by various kinds of supports/training, and which dimensions influence later learning outcomes, either for the reviewer or the reviewee.

Unlike previous studies that primarily explored relationships among reviewing quality dimensions at the comment level (Leijen, 2017 ; Misiejuk et al., 2021 ; Wu & Schunn, 2021b ), our work focuses on the reviewer level, as an approach to studying the behavioral elements of peer feedback literacy, complementing the predominantly knowledge and attitudes focus of interview and survey studies on peer feedback literacy. This shift in level of analysis is important because reviewing quality measures at the comment level might exhibit weak or even negative relationships due to varied structures or intentions. However, at the reviewer level, these measures may exhibit positive correlations, reflecting overarching strategies, motivations, or skills.

Our findings, as illustrated by the linear intercorrelation analysis, illuminate the interconnectedness of the various factors shaping peer feedback literacy. The overarching theme emerging from the analysis is inherent multidimensionality, a facet of peer feedback literacy that has been previously highlighted in the literature (Winstone & Carless, 2020). The findings from the current study also suggest that peer feedback literacy can be organized by relative emphasis on expertise vs. effort and by relative focus on review-level vs. comment-level aspects. It will be especially interesting to examine the ways in which training and motivational interventions shape those different behavioral indicators.

It is important to note that survey-based measures of peer feedback literacy find that all of the dimensions identified within those studies were strongly correlated with one another (e.g., Dong et al., 2023 ) to the extent that the pragmatic and theoretical value of measuring them separately could be questioned. For example, feedback-related knowledge and willingness to participate in peer feedback were correlated at r  = 0.76, and all the specific indicators on those scales loaded at high levels on their factors. Within our framework, those factors could be framed as representing the expertise vs. effort ends of the literacy continuum, which our findings suggest should be much more distinguishable than r  = 0.76. Indeed, we also found dimensional structure to peer feedback literacy, but the correlations among dimensions are quite low, and even the correlations among different measures within a dimension were modest. If survey measures are going to be used in future studies on peer feedback literacy, it will be important to understand how well they align with students’ actual behaviors. Further, it may be necessary to extend what kinds of behaviors are represented on those surveys.

Our findings also suggest a strong separation of rating accuracy from the impact that comments will have on their recipients. While there is some relationship between the two, particularly when focusing on expert evaluations of rating accuracy and expert judgments of the improvement that comments will produce, the r = 0.26 correlation is quite modest. Both constructs represent a kind of expertise in the reviewer. But rating accuracy represents attending to and successfully diagnosing all the relative strengths and weaknesses in a submission (i.e., a review-level competence), whereas improvements offered in comments can focus on particular problems, not requiring the reviewer to be broadly proficient (i.e., a comment-level competence). In addition, especially useful comments require not only diagnosing a major problem but also offering strategies for addressing that problem.

Our findings also help to situate specific measures of feedback quality that have drawn increasing attention given their pragmatic value in data collection and analysis: comment helpfulness ratings and %Long comments. On the one hand, they are central measures within the larger landscape of peer feedback quality. On the other hand, they represent only one dimension of peer feedback literacy: the initial impact of the comments being produced. Adding rating accuracy measures like peer agreement or peer consistency, and amount measures like #Reviews and #Comments, would provide a broader measurement of peer feedback literacy while still involving measures that are easy to collect and analyze. To capture the ultimate impact dimension, studies would need to invest in the laborious task of hand coding comments (which is still much less laborious than hand coding implementation and less expensive than expert coding of improvement), or perhaps turn to innovations in NLP and generative AI to automatically code large numbers of comments.

Limitations and future directions

We note two key limitations of our current study. First, the exclusion of the feedback content dimension potentially left out a critical element of the peer reviewing process, which future research should aim to incorporate, possibly implemented for large datasets like the current study's through automated techniques like Natural Language Processing (Ramachandran et al., 2017). Such technological advances could reveal hidden patterns and correlations involving feedback content, potentially leading to a more comprehensive understanding of peer reviewing quality.

Furthermore, the geographical and contextual constraints of our study (specifically, an introductory university writing course in the US using one online peer feedback system) may limit the generalizability of our findings. Past meta-analyses and meta-regressions suggest minimal impact of discipline, class size, or system setup on the validity of peer review ratings or the derived learning benefits (Li et al., 2016; Sanchez et al., 2017; Yu & Schunn, 2023). However, it is important to replicate the novel findings of this study across various contexts.

Our investigation examined the dimensionality of peer feedback literacy, a common concern in ongoing research in this domain. In previous studies, the dimensionality of peer feedback literacy has been largely shaped by data from interviews and surveys (e.g., Dong et al., 2023; Zhan, 2022). These approaches offered valuable insights into domains of learners' knowledge of and attitudes towards peer feedback (e.g., willingness to participate in peer feedback is separable from appreciation of its value or knowledge of how to participate). But such studies provided little insight into the ways in which the produced feedback varies in quality, which can be taken as the behavioral dimensions of peer feedback literacy (Gielen et al., 2010). It is important to note that knowledge and attitudes do not always lead to effective action (Becheikh et al., 2010; Huberman, 1990). Further, the actual quality of feedback generated by students is crucial for their learning through the process (Lu et al., 2023; Topping, 2023; Zheng et al., 2020; Zong et al., 2021a, b). In the current study, we have clarified the dimensionality of these behavioral elements, highlighting motivational vs. expertise elements at the review and comment levels. These findings can become new foundations for empirical investigation and theoretical development into the causes and consequences of peer feedback literacy.

The current findings offer actionable recommendations for practitioners (e.g., instructors, teaching assistants, instructional designers, online tool designers) seeking to enhance peer review processes. First, our findings identify four major areas in which practitioners need to scaffold peer reviewing quality: rating accuracy, the volume of feedback, the initial impact of comments, and the ultimate impact of comments. Different approaches are likely required to address these areas given their relative emphasis on effort vs. expertise. For example, motivational scaffolds and considerations (e.g., workload) may be needed to improve the volume of feedback, back-evaluation steps to improve initial impact, training on rubric dimensions to improve rating accuracy, and training on effective feedback structure to improve ultimate impact. Second, when resources are so constrained that assessing the more labor-intensive dimensions of feedback quality is not possible, the multidimensional scaling results suggest that length of comments and helpfulness ratings can serve as efficiently assessed proxies for overall feedback quality, involving a mixture of effort and expertise at the review and comment levels.

Availability of data and materials

The data used to support the findings of this study are available from the corresponding author upon request.

Becheikh, N., Ziam, S., Idrissi, O., Castonguay, Y., & Landry, R. (2010). How to improve knowledge transfer strategies and practices in education? Answers from a systematic literature review. Research in Higher Education Journal , 7 , 1–21.

Bolzer, M., Strijbos, J. W., & Fischer, F. (2015). Inferring mindful cognitive-processing of peer-feedback via eye-tracking: Role of feedback-characteristics, fixation-durations and transitions. Journal of Computer Assisted Learning , 31 (5), 422–434.

Carroll, J. D., & Arabie, P. (1998). Multidimensional scaling. Measurement, Judgment and Decision Making , 179–250.  https://www.sciencedirect.com/science/article/abs/pii/B9780120999750500051

Cheng, K. H., & Hou, H. T. (2015). Exploring students’ behavioural patterns during online peer assessment from the affective, cognitive, and metacognitive perspectives: A progressive sequential analysis. Technology, Pedagogy and Education , 24 (2), 171–188.

College Board. (2021). Program summary report. https://reports.collegeboard.org/media/pdf/2021-ap-program-summary-report_1.pdf

Cui, Y., Schunn, C. D., Gai, X., Jiang, Y., & Wang, Z. (2021). Effects of trained peer vs. Teacher feedback on EFL students’ writing performance, self-efficacy, and internalization of motivation. Frontiers in Psychology , 12, 5569.

Darvishi, A., Khosravi, H., Sadiq, S., & Gašević, D. (2022). Incorporating AI and learning analytics to build trustworthy peer assessment systems. British Journal of Educational Technology , 53 (4), 844–875.

Dawson, P., Yan, Z., Lipnevich, A., Tai, J., Boud, D., & Mahoney, P. (2023). Measuring what learners do in feedback: The feedback literacy behaviour scale. Assessment & Evaluation in Higher Education. Advance online publication. https://doi.org/10.1080/02602938.2023.2240983

Ding, C. S. (2006). Multidimensional scaling modelling approach to latent profile analysis in psychological research. International Journal of Psychology, 41(3), 226–238.

Dong, Z., Gao, Y., & Schunn, C. D. (2023). Assessing students’ peer feedback literacy in writing: Scale development and validation. Assessment & Evaluation in Higher Education , 48(8), 1103–1118.

Gao, Y., Schunn, C. D. D., & Yu, Q. (2019). The alignment of written peer feedback with draft problems and its impact on revision in peer assessment. Assessment & Evaluation in Higher Education , 44 (2), 294–308.

Gao, X., Noroozi, O., Gulikers, J. T. M., Biemans, H. J., & Banihashem, S. K. (2023). A systematic review of the key components of online peer feedback practices in higher education. Educational Research Review , 42, 100588.

Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction , 20 (4), 304–315.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research , 77 (1), 81–112.

Hout, M. C., Papesh, M. H., & Goldinger, S. D. (2013). Multidimensional scaling. Wiley Interdisciplinary Reviews: Cognitive Science , 4 (1), 93–103.

Howard, C. D., Barrett, A. F., & Frick, T. W. (2010). Anonymity to promote peer feedback: Pre-service teachers’ comments in asynchronous computer-mediated communication. Journal of Educational Computing Research , 43 (1), 89–112.

Huberman, M. (1990). Linkage between researchers and practitioners: A qualitative study. American Educational Research Journal , 27 (2), 363–391.

Huisman, B., Saab, N., Van Driel, J., & Van Den Broek, P. (2018). Peer feedback on academic writing: Undergraduate students’ peer feedback role, peer feedback perceptions and essay performance. Assessment & Evaluation in Higher Education , 43 (6), 955–968.

Jin, X., Jiang, Q., Xiong, W., Feng, Y., & Zhao, W. (2022). Effects of student engagement in peer feedback on writing performance in higher education. Interactive Learning Environments, 32(1), 128–143.

Kerman, N. T., Banihashem, S. K., Karami, M., Er, E., Van Ginkel, S., & Noroozi, O. (2024). Online peer feedback in higher education: A synthesis of the literature. Education and Information Technologies, 29(1), 763–813.

Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine , 15 (2), 155–163.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics , 33 (1), 159–174.

Leijen, D. A. (2017). A novel approach to examine the impact of web-based peer review on the revisions of L2 writers. Computers and Composition , 43 , 35–54.

Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, H. K. (2016). Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education , 41 (2), 245–264.

Little, T., Dawson, P., Boud, D., & Tai, J. (2024). Can students’ feedback literacy be improved? A scoping review of interventions. Assessment & Evaluation in Higher Education , 49 (1), 39–52.

Liu, J., & Sadler, R. W. (2003). The effect and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes , 2 (3), 193–227.

Lu, Q., Yao, Y., & Zhu, X. (2023). The relationship between peer feedback features and revision sources mediated by feedback acceptance: The effect on undergraduate students’ writing performance. Assessing Writing , 56 , 100725.

Lu, Q., Zhu, X., & Cheong, C. M. (2021). Understanding the difference between self-feedback and peer feedback: A comparative study of their effects on undergraduate students' writing improvement. Frontiers in Psychology, 12, 739962.

Man, D., Kong, B., & Chau, M. (2022). Developing student feedback literacy through peer review training. RELC Journal. Advance online publication. https://doi.org/10.1177/00336882221078380

McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods , 1 (1), 30–46.

Misiejuk, K., Wasson, B., & Egelandsdal, K. (2021). Using learning analytics to understand student perceptions of peer feedback. Computers in Human Behavior, 117, 106658.


Acknowledgements

Not applicable.

Funding

This study was supported by the Philosophy and Social Sciences Planning Youth Project of Guangdong Province under grant [GD24YJY01] and the National Social Science Fund of China [23BYY154].

Author information

Authors and Affiliations

College of Education for the Future, Beijing Normal University, No. 18 Jingfeng Road, Zhuhai, Guangdong Province, 519087, China

Learning Research and Development Center, University of Pittsburgh, 3420 Forbes Avenue, Pittsburgh, PA, 15260, USA

Christian D. Schunn

School of Humanities, Beijing University of Posts and Telecommunications, No. 10, Xitucheng Road, Haidian District, Beijing, 100876, China


Contributions

Yi Zhang: Conceptualization, Methodology, Data Curation, Writing - Original Draft, Revision. Christian D. Schunn: Conceptualization, Visualization, Methodology, Supervision, Revision. Yong Wu: Data Curation, Conceptualization, Investigation.

Corresponding author

Correspondence to Yi Zhang.

Ethics declarations

Competing Interests

There is no conflict of interest, as we conducted this study only as part of our research program.

The second author is a co-inventor of the peer review system used in the study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

[Figure 3: The sample size for each pairwise correlation]

[Figure 4: Scree plot for MDS]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Zhang, Y., Schunn, C. D., & Wu, Y. What does it mean to be good at peer reviewing? A multidimensional scaling and cluster analysis study of behavioral indicators of peer feedback literacy. Int J Educ Technol High Educ 21, 26 (2024). https://doi.org/10.1186/s41239-024-00458-1


Received: 20 December 2023

Accepted: 18 March 2024

Published: 22 April 2024

DOI: https://doi.org/10.1186/s41239-024-00458-1

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Peer reviewing quality
  • Peer feedback literacy
  • Multidimensional scaling

J Anaesthesiol Clin Pharmacol, 31(4), Oct–Dec 2015

An introduction to peer review

Lakshmi Narayana Yaddanapudi

Department of Anaesthesia and Intensive Care, PGIMER, Chandigarh, India

Sandhya Yaddanapudi

What is Peer Review?

A great many processes go by the name of peer review, with no real operational definition. It is, however, generally understood to be the review of a scientific manuscript by scientists not involved in the study. The reviewers are selected by the journal's editorial staff based on their knowledge of the domain, research methodology and statistics, and willingness to contribute to the scientific process.

It has been shown that peer review delays the publication process, increases the costs, and may possibly be biased and open to abuse. It is very poor at detecting errors and is almost useless at detecting fraud. However, it still forms the mainstay of the scientific process.[ 1 ] A number of modifications of the peer review process have been and are being tried, including but not limited to reviewer education, acknowledgment, monetary compensation (possibly in the form of waiving of publication fees or access to full-text articles), anonymous reviewing, signed reviewing, and open pre- and post-publication reviewing.

Who are Peer Reviewers?

Peer reviewers are fellow scientists and colleagues of the authors. They need to be familiar with the domain of the manuscript being reviewed but do not need to be authorities on the subject. Some reviewers are selected for specific expertise in a methodology or tool used. Being invited to review a manuscript is an honor. According to the Reviewers’ Information Pack of Elsevier, by accepting to review manuscripts, you ensure the continued rigorous standards of the scientific process, uphold the integrity of the journal you are reviewing for, fulfill a sense of scientific obligation, establish relationships with reputable colleagues, reciprocate professional courtesy, establish your expertise in a particular area, stay current in the discipline, and facilitate advancement of your career.

Most scientists learn the art of reviewing on the job. Formal training is rare. This is one of the reasons for the documented inconsistencies in the review process.[ 2 ] The process becomes more consistent and enjoyable if a systematic method is adopted.

General Guidelines

Please adhere to the time limits set by the editor. If you are busy professionally or otherwise and cannot devote time for a review, decline to review the manuscript in the first instance. If you are yourself working in the same niche and thus have a conflict of interest, please let the editor know as soon as possible. The editor may still decide to let you review the manuscript if there are only a few people qualified in the topic.

Remember to be courteous. Your comments should be constructive. Marginal notes such as “so what?” and “what does this even mean?” detract from the quality of the review. Even your valid points may cause hurt and provoke resentment in such a context. The comments should include enough detail that the authors understand your point. Even if the manuscript in question is not accepted for publication, your comments should help to improve the quality of the authors’ future research and writing. They should focus on how the argument is supported, not on whether you agree with it or not. Remember, the author whose manuscript you are reviewing today, may well be reviewing your manuscript tomorrow. Follow the golden rule: Do unto others as you would be done unto.

Single-line reviews such as “you can publish this” do not contribute to the scientific evaluation of the manuscript. Remember, you are advising the editors. They will decide what to do with the article based on your and other reviewers’ inputs as well as a number of other concerns. Furthermore, you should not be offended if your advice is not followed by the editor.

Maintain the confidentiality of the process. The manuscripts you receive are not for general circulation or gossip in the coffee room. Neither are they to be used for your own research projects. Unfortunately, this has occurred in the past. Once you finish the review, make sure that the manuscript and any accompanying material are deleted from your computer.

If you make inline corrections in the manuscript, especially with the “track changes” feature, the word processor inserts your initials or name into the document, unblinding the anonymous review process. Even a separate word-processor document carries the same risk. You should disable this feature in your word processor. A separate text file prepared in a simple text editor is probably a safer bet.
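One way to see the risk concretely: a .docx file is a zip archive, and in the Office Open XML layout the author metadata lives in docProps/core.xml. The sketch below is a hypothetical helper, not part of any journal's tooling, that flags such identifying fields before you return a review file.

```python
# Minimal sketch: scan a .docx review file (a zip archive) for metadata
# that could unblind a reviewer. Hypothetical helper; assumes the
# Office Open XML layout, where author information is stored in
# docProps/core.xml as dc:creator and cp:lastModifiedBy elements.
import re
import zipfile


def find_identifying_metadata(docx):
    """Return any creator / last-modified-by values found in core.xml."""
    found = {}
    with zipfile.ZipFile(docx) as zf:
        try:
            core = zf.read("docProps/core.xml").decode("utf-8", "replace")
        except KeyError:
            return found  # no core-properties part in this archive
        for tag in ("dc:creator", "cp:lastModifiedBy"):
            match = re.search(rf"<{tag}[^>]*>([^<]+)</{tag}>", core)
            if match:
                found[tag] = match.group(1)
    return found
```

If the returned dictionary is non-empty, the safest course is to re-export the comments into a plain text file rather than trying to scrub the document by hand.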

The main points on which you should comment are well listed by Benos et al .[ 3 ]

  • Importance of the research question.
  • Originality of the work.
  • Delineation of strengths and weaknesses of the methodology/experimental/statistical approach/interpretation of results.
  • Writing style and figure/table presentation.
  • Ethical concerns.

Importance of the Research Question

Is the question addressed by the manuscript important enough? Does it address a question that needs to be answered? Remember that the potential audience for the article is not restricted to anesthesiologists. It may include physicians from other specialties, hospital administrators, policy makers, and patients. The question is important, not the answer. If the question is important, the methodology correct, and the conclusion valid, it does not matter whether the answer is “significant” or “not significant.” By not publishing the so-called “negative” studies, scientific journals and their editors have contributed to publication bias and thus to the unnecessary persistence of research into unprofitable avenues.

Originality of the Work

Is the study original? As someone familiar with the domain, you can probably decide this from your own knowledge. In specific cases, you may need to search the literature. This can be done directly from the JOACP reviewer interface. Unfortunately, many studies are “me too” studies with hardly anything to distinguish them from other studies in the literature. If the only unique feature of the study is that it is the first one in Indians, or in patients undergoing urological procedures, or something similar, the authors should provide reasons for thinking that the result would be different in that subpopulation. If you think the research is unoriginal, please give references to previous work.

This is not to decry replication studies. They are needed to reduce the high false discovery rate in the biomedical literature. However, such studies should be done only where the question is important and the findings unique. There should be a true duplication of methodology, and the manuscript should clearly say that it is a replication.

Delineation of Strengths and Weaknesses

It is important to mention the strengths as well as the weaknesses of the study as part of the review. Not all reviewers are proficient in research methodology or statistics. However, all scientific authors and reviewers need to understand the basics of how to frame a research question. They should know the appropriate study designs and statistical tests for the question studied and the outcomes measured. They should be able to interpret the results correctly. Though old, a series of ten articles published in 1997 by Greenhalgh in the BMJ is essential reading for all peer reviewers. You should at least be familiar with the articles dealing with study designs,[ 4 ] diagnostic tests,[ 5 ] and the two articles that deal with statistical analysis for the nonstatistician.[ 6 , 7 ] If you are good at research methodology and statistical analysis, you may want to duplicate some of the analyses the authors have done. In specific cases, you may even ask for the raw data to be provided.
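One concrete form of such a spot check is to plug a manuscript's reported summary statistics back into the test formula and compare against the reported test statistic. The sketch below uses Welch's (unequal-variance) two-sample t as an example; the numbers are made up for illustration, not drawn from any particular manuscript.

```python
# Re-deriving a two-sample t statistic from reported summary statistics,
# as a quick plausibility check on a manuscript's results table.
# Welch's (unequal-variance) formula; all numbers here are illustrative.
import math


def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic from group means, SDs, and sample sizes."""
    standard_error = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / standard_error


# Suppose a table reports group A: mean 10.0, SD 2.0, n 25 and
# group B: mean 8.0, SD 2.0, n 25, alongside a t statistic.
t = welch_t(10.0, 2.0, 25, 8.0, 2.0, 25)
```

If the recomputed value disagrees substantially with the reported one, that is worth a polite query to the authors via the editor rather than an accusation.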

The abstract should include the substantive portion of the results. Please see whether the data and conclusions given in the abstract and the text of the manuscript match. Read the discussion and conclusions carefully. Appropriate comparisons with the literature are warranted. If the authors cite only articles which favor their conclusions, draw attention to it. If there are other important studies dealing with the subject that the authors don’t reference, please provide references. The limitations of the study should be brought out clearly in the discussion. Many authors tend to conclude far more than their data warrant. The conclusions should be limited to the context of the study. Any generalizations should be done carefully keeping external validity in mind.

Writing Style and Figure/Table Presentation

Please read the manuscript for clarity of thought, organization of the content, and logical structure. Most of our authors are Indian. In addition, JOACP receives many submissions from African and Middle Eastern countries, among others. What is common to these authors is the fact that English is not their primary language of communication. This leads to nonstandard idioms and turns of phrase, in addition to many orthographic and grammatical errors. It is not necessary for you to catalog all the errors. It is enough to mention that they exist. It is the responsibility of the editorial office to deal with these. The manuscripts are usually sent to the reviewers after some corrective measures have been taken. However, in particular cases where these errors prevent you from reading the article for its scientific content, you may return it for language revision.

Look for a balance between the tables and figures. Tables provide the data of the study while the figures illustrate the story. Most studies do not warrant more than a single digit precision in the numbers provided in the tables. Consider whether all the tables are necessary. Some small tables may be incorporated in the text. Others may be combined. Figures should be appropriate to the data being presented. Figures which do not show any major finding such as a trend or difference between two groups should be omitted. Both figures and tables should be appropriately labeled and titled to be understood on their own. In particular, if a reader just reads a table or a figure along with its legend without reading the article, she should be able to understand what data are presented and the conclusions to be drawn from it.

Ethical Concerns

Despite the approval of an Ethics Committee, please consider whether there are any ethical concerns with the way the subjects are treated in the study. The concerns may include inappropriate use of placebos, lack of or inadequate consent process, an inappropriate population not subject to the problem being studied, etc. Please check if the authors have acknowledged all sources of funding and any conflict of interest. If there is a suspicion of research misconduct such as fraudulent data, bring it to the notice of the editor privately.

Recommendations

At the end of the process, you should provide a recommendation for the editors indicating whether the manuscript should be rejected, revised or accepted with specific grounds for each recommendation. In addition, you may suggest an accompanying commentary/editorial be commissioned.

It is not possible to deal comprehensively with the review process for all types of manuscripts in one short article. However, acquiring some skills in critical reading, basic research methodology and statistics, and following a systematic way of reviewing a manuscript while trying to answer the key questions listed here, may help make the reviewing process consistent, helpful to the authors and the editors, and enjoyable to yourself.
