
How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources: it analyzes, synthesizes, and critically evaluates them to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions


What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines.

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search. For example, a search on how social media use affects body image among Generation Z might start from keywords such as:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators (AND, OR, NOT) to help narrow down your search.
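
For example, you might combine the keyword groups above into a single search string. The exact syntax varies from database to database, so treat the line below as an illustrative query to adapt, not one that will work everywhere:

("social media" OR Instagram OR TikTok) AND ("body image" OR "self-esteem") AND ("Generation Z" OR adolescents OR teenagers)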

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge. For example, in reviewing the literature on social media and body image, you might note that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat; this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.

Free lecture slides

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.


Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below, or generate one automatically with our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved March 20, 2024, from https://www.scribbr.com/dissertation/literature-review/


PLoS Computational Biology, 9(7), July 2013

Ten Simple Rules for Writing a Literature Review

Marco Pautasso

1 Centre for Functional and Evolutionary Ecology (CEFE), CNRS, Montpellier, France

2 Centre for Biodiversity Synthesis and Analysis (CESAB), FRB, Aix-en-Provence, France

Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications [1]. For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively [2]. Given such mountains of papers, scientists cannot be expected to examine in detail every single new paper relevant to their interests [3]. Thus, it is both advantageous and necessary to rely on regular summaries of the recent literature. Although recognition for scientists mainly comes from primary research, timely literature reviews can lead to new synthetic insights and are often widely read [4]. For such summaries to be useful, however, they need to be compiled in a professional way [5].

When starting from scratch, reviewing the literature can require a titanic amount of work. That is why researchers who have spent their career working on a certain research issue are in a perfect position to review that literature. Some graduate schools are now offering courses in reviewing the literature, given that most research students start their project by producing an overview of what has already been done on their research issue [6] . However, it is likely that most scientists have not thought in detail about how to approach and carry out a literature review.

Reviewing the literature requires the ability to juggle multiple tasks, from finding and evaluating relevant material to synthesising information from various sources, from critical thinking to paraphrasing, evaluating, and citation skills [7] . In this contribution, I share ten simple rules I learned working on about 25 literature reviews as a PhD and postdoctoral student. Ideas and insights also come from discussions with coauthors and colleagues, as well as feedback from reviewers and editors.

Rule 1: Define a Topic and Audience

How to choose which topic to review? There are so many issues in contemporary science that you could spend a lifetime of attending conferences and reading the literature just pondering what to review. On the one hand, if you take several years to choose, several other people may have had the same idea in the meantime. On the other hand, only a well-considered topic is likely to lead to a brilliant literature review [8] . The topic must at least be:

  • interesting to you (ideally, you should have come across a series of recent papers related to your line of work that call for a critical summary),
  • an important aspect of the field (so that many readers will be interested in the review and there will be enough material to write it), and
  • a well-defined issue (otherwise you could potentially include thousands of publications, which would make the review unhelpful).

Ideas for potential reviews may come from papers providing lists of key research questions to be answered [9] , but also from serendipitous moments during desultory reading and discussions. In addition to choosing your topic, you should also select a target audience. In many cases, the topic (e.g., web services in computational biology) will automatically define an audience (e.g., computational biologists), but that same topic may also be of interest to neighbouring fields (e.g., computer science, biology, etc.).

Rule 2: Search and Re-search the Literature

After having chosen your topic and audience, start by checking the literature and downloading relevant papers. Five pieces of advice here:

  • keep track of the search items you use (so that your search can be replicated [10] ),
  • keep a list of papers whose pdfs you cannot access immediately (so as to retrieve them later with alternative strategies),
  • use a paper management system (e.g., Mendeley, Papers, Qiqqa, Sente),
  • define early in the process some criteria for exclusion of irrelevant papers (these criteria can then be described in the review to help define its scope), and
  • do not just look for research papers in the area you wish to review, but also seek previous reviews.

The chances are high that someone will already have published a literature review (Figure 1), if not exactly on the issue you are planning to tackle, at least on a related topic. (The bottom-right situation in Figure 1, with many literature reviews but few research papers, is not just a theoretical situation; it applies, for example, to the study of the impacts of climate change on plant diseases, where there appear to be more literature reviews than research studies [33].) If there are already a few or several reviews of the literature on your issue, my advice is not to give up, but to carry on with your own literature review,

  • discussing in your review the approaches, limitations, and conclusions of past reviews,
  • trying to find a new angle that has not been covered adequately in the previous reviews, and
  • incorporating new material that has inevitably accumulated since their appearance.

When searching the literature for pertinent papers and reviews, the usual rules apply:

  • be thorough,
  • use different keywords and database sources (e.g., DBLP, Google Scholar, ISI Proceedings, JSTOR Search, Medline, Scopus, Web of Science), and
  • look at who has cited past relevant papers and book chapters.

Rule 3: Take Notes While Reading

If you read the papers first, and only afterwards start writing the review, you will need a very good memory to remember who wrote what, and what your impressions and associations were while reading each single paper. My advice is, while reading, to start writing down interesting pieces of information, insights about how to organize the review, and thoughts on what to write. This way, by the time you have read the literature you selected, you will already have a rough draft of the review.

Of course, this draft will still need much rewriting, restructuring, and rethinking to obtain a text with a coherent argument [11] , but you will have avoided the danger posed by staring at a blank document. Be careful when taking notes to use quotation marks if you are provisionally copying verbatim from the literature. It is advisable then to reformulate such quotes with your own words in the final draft. It is important to be careful in noting the references already at this stage, so as to avoid misattributions. Using referencing software from the very beginning of your endeavour will save you time.

Rule 4: Choose the Type of Review You Wish to Write

After having taken notes while reading the literature, you will have a rough idea of the amount of material available for the review. This is probably a good time to decide whether to go for a mini- or a full review. Some journals are now favouring the publication of rather short reviews focusing on the last few years, with a limit on the number of words and citations. A mini-review is not necessarily a minor review: it may well attract more attention from busy readers, although it will inevitably simplify some issues and leave out some relevant material due to space limitations. A full review will have the advantage of more freedom to cover in detail the complexities of a particular scientific development, but may then be left in the pile of the very important papers “to be read” by readers with little time to spare for major monographs.

There is probably a continuum between mini- and full reviews. The same point applies to the dichotomy of descriptive vs. integrative reviews. While descriptive reviews focus on the methodology, findings, and interpretation of each reviewed study, integrative reviews attempt to find common ideas and concepts from the reviewed material [12] . A similar distinction exists between narrative and systematic reviews: while narrative reviews are qualitative, systematic reviews attempt to test a hypothesis based on the published evidence, which is gathered using a predefined protocol to reduce bias [13] , [14] . When systematic reviews analyse quantitative results in a quantitative way, they become meta-analyses. The choice between different review types will have to be made on a case-by-case basis, depending not just on the nature of the material found and the preferences of the target journal(s), but also on the time available to write the review and the number of coauthors [15] .

Rule 5: Keep the Review Focused, but Make It of Broad Interest

Whether your plan is to write a mini- or a full review, it is good advice to keep it focused [16], [17]. Including material just for the sake of it can easily lead to reviews that are trying to do too many things at once. The need to keep a review focused can be problematic for interdisciplinary reviews, where the aim is to bridge the gap between fields [18]. If you are writing a review on, for example, how epidemiological approaches are used in modelling the spread of ideas, you may be inclined to include material from both parent fields, epidemiology and the study of cultural diffusion. This may be necessary to some extent, but in this case a focused review would only deal in detail with those studies at the interface between epidemiology and the spread of ideas.

While focus is an important feature of a successful review, this requirement has to be balanced with the need to make the review relevant to a broad audience. This square may be circled by discussing the wider implications of the reviewed topic for other disciplines.

Rule 6: Be Critical and Consistent

Reviewing the literature is not stamp collecting. A good review does not just summarize the literature, but discusses it critically, identifies methodological problems, and points out research gaps [19] . After having read a review of the literature, a reader should have a rough idea of:

  • the major achievements in the reviewed field,
  • the main areas of debate, and
  • the outstanding research questions.

It is challenging to achieve a successful review on all these fronts. A solution can be to involve a set of complementary coauthors: some people are excellent at mapping what has been achieved, some others are very good at identifying dark clouds on the horizon, and some have instead a knack at predicting where solutions are going to come from. If your journal club has exactly this sort of team, then you should definitely write a review of the literature! In addition to critical thinking, a literature review needs consistency, for example in the choice of passive vs. active voice and present vs. past tense.

Rule 7: Find a Logical Structure

Like a well-baked cake, a good review has a number of telling features: it is worth the reader's time, timely, systematic, well written, focused, and critical. It also needs a good structure. With reviews, the usual subdivision of research papers into introduction, methods, results, and discussion does not work or is rarely used. However, a general introduction of the context and, toward the end, a recapitulation of the main points covered and take-home messages make sense also in the case of reviews. For systematic reviews, there is a trend towards including information about how the literature was searched (database, keywords, time limits) [20] .

How can you organize the flow of the main body of the review so that the reader will be drawn into and guided through it? It is generally helpful to draw a conceptual scheme of the review, e.g., with mind-mapping techniques. Such diagrams can help recognize a logical way to order and link the various sections of a review [21] . This is the case not just at the writing stage, but also for readers if the diagram is included in the review as a figure. A careful selection of diagrams and figures relevant to the reviewed topic can be very helpful to structure the text too [22] .

Rule 8: Make Use of Feedback

Reviews of the literature are normally peer-reviewed in the same way as research papers, and rightly so [23] . As a rule, incorporating feedback from reviewers greatly helps improve a review draft. Having read the review with a fresh mind, reviewers may spot inaccuracies, inconsistencies, and ambiguities that had not been noticed by the writers due to rereading the typescript too many times. It is however advisable to reread the draft one more time before submission, as a last-minute correction of typos, leaps, and muddled sentences may enable the reviewers to focus on providing advice on the content rather than the form.

Feedback is vital to writing a good review, and should be sought from a variety of colleagues, so as to obtain a diversity of views on the draft. This may lead in some cases to conflicting views on the merits of the paper, and on how to improve it, but such a situation is better than the absence of feedback. A diversity of feedback perspectives on a literature review can help identify where the consensus view stands in the landscape of the current scientific understanding of an issue [24] .

Rule 9: Include Your Own Relevant Research, but Be Objective

In many cases, reviewers of the literature will have published studies relevant to the review they are writing. This could create a conflict of interest: how can reviewers report objectively on their own work [25] ? Some scientists may be overly enthusiastic about what they have published, and thus risk giving too much importance to their own findings in the review. However, bias could also occur in the other direction: some scientists may be unduly dismissive of their own achievements, so that they will tend to downplay their contribution (if any) to a field when reviewing it.

In general, a review of the literature should neither be a public relations brochure nor an exercise in competitive self-denial. If a reviewer is up to the job of producing a well-organized and methodical review, which flows well and provides a service to the readership, then it should be possible to be objective in reviewing one's own relevant findings. In reviews written by multiple authors, this may be achieved by assigning the review of the results of a coauthor to different coauthors.

Rule 10: Be Up-to-Date, but Do Not Forget Older Studies

Given the progressive acceleration in the publication of scientific papers, today's reviews of the literature need awareness not just of the overall direction and achievements of a field of inquiry, but also of the latest studies, so as not to become out-of-date before they have been published. Ideally, a literature review should not identify as a major research gap an issue that has just been addressed in a series of papers in press (the same applies, of course, to older, overlooked studies (“sleeping beauties” [26] )). This implies that literature reviewers would do well to keep an eye on electronic lists of papers in press, given that it can take months before these appear in scientific databases. Some reviews declare that they have scanned the literature up to a certain point in time, but given that peer review can be a rather lengthy process, a full search for newly appeared literature at the revision stage may be worthwhile. Assessing the contribution of papers that have just appeared is particularly challenging, because there is little perspective with which to gauge their significance and impact on further research and society.

Inevitably, new papers on the reviewed topic (including independently written literature reviews) will appear from all quarters after the review has been published, so that there may soon be the need for an updated review. But this is the nature of science [27]–[32]. I wish everybody good luck with writing a review of the literature.

Acknowledgments

Many thanks to M. Barbosa, K. Dehnen-Schmutz, T. Döring, D. Fontaneto, M. Garbelotto, O. Holdenrieder, M. Jeger, D. Lonsdale, A. MacLeod, P. Mills, M. Moslonka-Lefebvre, G. Stancanelli, P. Weisberg, and X. Xu for insights and discussions, and to P. Bourne, T. Matoni, and D. Smith for helpful comments on a previous draft.

Funding Statement

This work was funded by the French Foundation for Research on Biodiversity (FRB) through its Centre for Synthesis and Analysis of Biodiversity data (CESAB), as part of the NETSEED research project. The funders had no role in the preparation of the manuscript.


Organizing Your Social Sciences Research Paper

5. The Literature Review

A literature review surveys prior research published in books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated. Literature reviews are designed to provide an overview of sources you have used in researching a particular topic and to demonstrate to your readers how your research fits within existing scholarship about the topic.

Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . Fourth edition. Thousand Oaks, CA: SAGE, 2014.

Importance of a Good Literature Review

A literature review may consist of simply a summary of key sources, but in the social sciences, a literature review usually has an organizational pattern and combines both summary and synthesis, often within specific conceptual categories . A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information in a way that informs how you are planning to investigate a research problem. The analytical features of a literature review might:

  • Give a new interpretation of old material or combine new with old interpretations,
  • Trace the intellectual progression of the field, including major debates,
  • Depending on the situation, evaluate the sources and advise the reader on the most pertinent or relevant research, or
  • Usually in the conclusion of a literature review, identify where gaps exist in how a problem has been researched to date.

Given this, the purpose of a literature review is to:

  • Place each work in the context of its contribution to understanding the research problem being studied.
  • Describe the relationship of each work to the others under consideration.
  • Identify new ways to interpret prior research.
  • Reveal any gaps that exist in the literature.
  • Resolve conflicts amongst seemingly contradictory previous studies.
  • Identify areas of prior scholarship to prevent duplication of effort.
  • Point the way in fulfilling a need for additional research.
  • Locate your own research within the context of existing literature [very important].

Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . Los Angeles, CA: SAGE, 2011; Knopf, Jeffrey W. "Doing a Literature Review." PS: Political Science and Politics 39 (January 2006): 127-132; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012.

Types of Literature Reviews

It is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the primary studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally among scholars and that become part of the body of epistemological traditions within the field.

In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews. Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are a number of approaches you could adopt depending upon the type of analysis underpinning your study.

Argumentative Review This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews [see below].

Integrative Review Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses or research problems. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication. This is the most common form of review in the social sciences.

Historical Review Few things rest in isolation from historical precedent. Historical literature reviews focus on examining research throughout a period of time, often starting with the first time an issue, concept, theory, phenomena emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review A review does not always focus on what someone said [findings], but on how they came about saying what they say [method of analysis]. Reviewing methods of analysis provides a framework of understanding at different levels [i.e., those of theory, substantive fields, research approaches, and data collection and analysis techniques], and shows how researchers draw upon a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in the areas of ontological and epistemological consideration, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. This approach helps highlight ethical issues which you should be aware of and consider as you go through your own study.

Systematic Review This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyze data from the studies that are included in the review. The goal is to deliberately document, critically evaluate, and summarize scientifically all of the research about a clearly defined research problem . Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?" This type of literature review is primarily applied to examining prior research studies in clinical medicine and allied health fields, but it is increasingly being used in the social sciences.

Theoretical Review The purpose of this form is to examine the corpus of theory that has accumulated in regard to an issue, concept, theory, phenomena. The theoretical literature review helps to establish what theories already exist, the relationships between them, to what degree the existing theories have been investigated, and to develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

NOTE : Most often the literature review will incorporate some combination of types. For example, a review that examines literature supporting or refuting an argument, assumption, or philosophical problem related to the research problem will also need to include writing supported by sources that establish the history of these arguments in the literature.

Baumeister, Roy F. and Mark R. Leary. "Writing Narrative Literature Reviews." Review of General Psychology 1 (September 1997): 311-320; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination. Thousand Oaks, CA: Sage Publications, 1998; Kennedy, Mary M. "Defining a Literature." Educational Researcher 36 (April 2007): 139-147; Petticrew, Mark and Helen Roberts. Systematic Reviews in the Social Sciences: A Practical Guide. Malden, MA: Blackwell Publishers, 2006; Torraco, Richard. "Writing Integrative Literature Reviews: Guidelines and Examples." Human Resource Development Review 4 (September 2005): 356-367; Rocco, Tonette S. and Maria S. Plakhotnik. "Literature Reviews, Conceptual Frameworks, and Theoretical Frameworks: Terms, Functions, and Distinctions." Human Resource Development Review 8 (March 2008): 120-130; Sutton, Anthea. Systematic Approaches to a Successful Literature Review. Los Angeles, CA: Sage Publications, 2016.

Structure and Writing Style

I.  Thinking About Your Literature Review

The structure of a literature review should include the following in support of understanding the research problem :

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
  • Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
  • An explanation of how each work is similar to and how it varies from the others,
  • Conclusions as to which pieces are best considered in their argument, are most convincing of their opinions, and make the greatest contribution to the understanding and development of their area of research.

The critical evaluation of each work should consider :

  • Provenance -- what are the author's credentials? Are the author's arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?
  • Methodology -- were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
  • Objectivity -- is the author's perspective even-handed or prejudicial? Is contrary data considered or is certain pertinent information ignored to prove the author's point?
  • Persuasiveness -- which of the author's theses are most convincing or least convincing?
  • Validity -- are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

II.  Development of the Literature Review

Four Basic Stages of Writing

1. Problem formulation -- which topic or field is being examined and what are its component issues?
2. Literature search -- finding materials relevant to the subject being explored.
3. Data evaluation -- determining which literature makes a significant contribution to the understanding of the topic.
4. Analysis and interpretation -- discussing the findings and conclusions of pertinent literature.

Consider the following issues before writing the literature review:

Clarify
If your assignment is not specific about what form your literature review should take, seek clarification from your professor by asking these questions:
1. Roughly how many sources would be appropriate to include?
2. What types of sources should I review (books, journal articles, websites; scholarly versus popular sources)?
3. Should I summarize, synthesize, or critique sources by discussing a common theme or issue?
4. Should I evaluate the sources in any way beyond evaluating how they relate to understanding the research problem?
5. Should I provide subheadings and other background information, such as definitions and/or a history?

Find Models
Use the exercise of reviewing the literature to examine how authors in your discipline or area of interest have composed their literature review sections. Read them to get a sense of the types of themes you might want to look for in your own research or to identify ways to organize your final review. The bibliography or reference section of sources you've already read, such as required readings in the course syllabus, is also an excellent entry point into your own research.

Narrow the Topic
The narrower your topic, the easier it will be to limit the number of sources you need to read in order to obtain a good survey of relevant resources. Your professor will probably not expect you to read everything that's available about the topic, but you'll make the act of reviewing easier if you first limit the scope of the research problem. A good strategy is to begin by searching the USC Libraries Catalog for recent books about the topic and review the table of contents for chapters that focus on specific issues. You can also review the indexes of books to find references to specific issues that can serve as the focus of your research. For example, a book surveying the history of the Israeli-Palestinian conflict may include a chapter on the role Egypt has played in mediating the conflict, or look in the index for the pages where Egypt is mentioned in the text.

Consider Whether Your Sources are Current
Some disciplines require that you use information that is as current as possible. This is particularly true in disciplines in medicine and the sciences where research conducted becomes obsolete very quickly as new discoveries are made. However, when writing a review in the social sciences, a survey of the history of the literature may be required. In other words, a complete understanding of the research problem requires you to deliberately examine how knowledge and perspectives have changed over time. Sort through other current bibliographies or literature reviews in the field to get a sense of what your discipline expects. You can also use this method to explore what is considered by scholars to be a "hot topic" and what is not.

III.  Ways to Organize Your Literature Review

Chronology of Events
If your review follows the chronological method, you could write about the materials according to when they were published. This approach should only be followed if a clear path of research building on previous research can be identified and these trends follow a clear chronological order of development. For example, a literature review might focus on continuing research about the emergence of German economic power after the fall of the Soviet Union.

By Publication
Order your sources by publication chronology only if the order demonstrates a more important trend. For instance, you could order a review of literature on environmental studies of brownfields if the progression revealed, for example, a change in the soil collection practices of the researchers who wrote and/or conducted the studies.

Thematic [“conceptual categories”]
A thematic literature review is the most common approach to summarizing prior research in the social and behavioral sciences. Thematic reviews are organized around a topic or issue, rather than the progression of time, although the progression of time may still be incorporated into a thematic review. For example, a review of the Internet’s impact on American presidential politics could focus on the development of online political satire. While the study focuses on one topic, the Internet’s impact on American presidential politics, it would still be organized chronologically, reflecting technological developments in media. The difference in this example between a "chronological" and a "thematic" approach is what is emphasized the most: themes related to the role of the Internet in presidential politics. Note that more authentic thematic reviews tend to break away from chronological order. A review organized in this manner would shift between time periods within each section according to the point being made.

Methodological
A methodological approach focuses on the methods utilized by the researcher. For the Internet in American presidential politics project, one methodological approach would be to look at cultural differences between the portrayal of American presidents on American, British, and French websites. Or the review might focus on the fundraising impact of the Internet on a particular political party. A methodological scope will influence either the types of documents in the review or the way in which these documents are discussed.

Other Sections of Your Literature Review Once you've decided on the organizational method for your literature review, the sections you need to include in the paper should be easy to figure out because they arise from your organizational strategy. In other words, a chronological review would have subsections for each vital time period; a thematic review would have subtopics based upon factors that relate to the theme or issue. However, sometimes you may need to add additional sections that are necessary for your study, but do not fit in the organizational strategy of the body. What other sections you include in the body is up to you. However, only include what is necessary for the reader to locate your study within the larger scholarship about the research problem.

Here are examples of other sections, usually in the form of a single paragraph, you may need to include depending on the type of review you write:

  • Current Situation : Information necessary to understand the current topic or focus of the literature review.
  • Sources Used : Describes the methods and resources [e.g., databases] you used to identify the literature you reviewed.
  • History : The chronological progression of the field, the research literature, or an idea that is necessary to understand the literature review, if the body of the literature review is not already a chronology.
  • Selection Methods : Criteria you used to select (and perhaps exclude) sources in your literature review. For instance, you might explain that your review includes only peer-reviewed [i.e., scholarly] sources.
  • Standards : Description of the way in which you present your information.
  • Questions for Further Research : What questions about the field has the review sparked? How will you further your research as a result of the review?

IV.  Writing Your Literature Review

Once you've settled on how to organize your literature review, you're ready to write each section. When writing your review, keep in mind these issues.

Use Evidence
A literature review section is, in this sense, just like any other academic research paper. Your interpretation of the available sources must be backed up with evidence [citations] that demonstrates that what you are saying is valid.

Be Selective
Select only the most important points in each source to highlight in the review. The type of information you choose to mention should relate directly to the research problem, whether it is thematic, methodological, or chronological. Related items that provide additional information, but that are not key to understanding the research problem, can be included in a list of further readings.

Use Quotes Sparingly
Some short quotes are appropriate if you want to emphasize a point, or if what an author stated cannot be easily paraphrased. Sometimes you may need to quote certain terminology that was coined by the author, is not common knowledge, or was taken directly from the study. Do not use extensive quotes as a substitute for using your own words in reviewing the literature.

Summarize and Synthesize
Remember to summarize and synthesize your sources within each thematic paragraph as well as throughout the review. Recapitulate important features of a research study, but then synthesize it by rephrasing the study's significance and relating it to your own work and the work of others.

Keep Your Own Voice
While the literature review presents others' ideas, your voice [the writer's] should remain front and center. For example, weave references to other sources into what you are writing, but maintain your own voice by starting and ending the paragraph with your own ideas and wording.

Use Caution When Paraphrasing
When paraphrasing a source that is not your own, be sure to represent the author's information or opinions accurately and in your own words. Even when paraphrasing an author’s work, you still must provide a citation to that work.

V.  Common Mistakes to Avoid

These are the most common mistakes made in reviewing social science research literature.

  • Sources in your literature review do not clearly relate to the research problem;
  • You do not take sufficient time to define and identify the most relevant sources to use in the literature review related to the research problem;
  • You rely exclusively on secondary analytical sources rather than including relevant primary research studies or data;
  • You uncritically accept another researcher's findings and interpretations as valid, rather than examining critically all aspects of the research design and analysis;
  • You do not describe the search procedures that were used in identifying the literature to review;
  • You report isolated statistical results rather than synthesizing them in chi-squared or meta-analytic methods; and,
  • You only include research that validates assumptions and do not consider contrary findings and alternative interpretations found in the literature.

Cook, Kathleen E. and Elise Murowchick. “Do Literature Review Skills Transfer from One Course to Another?” Psychology Learning and Teaching 13 (March 2014): 3-11; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . London: SAGE, 2011; Literature Review Handout. Online Writing Center. Liberty University; Literature Reviews. The Writing Center. University of North Carolina; Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: SAGE, 2016; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012; Randolph, Justus J. “A Guide to Writing the Dissertation Literature Review." Practical Assessment, Research, and Evaluation. vol. 14, June 2009; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016; Taylor, Dena. The Literature Review: A Few Tips On Conducting It. University College Writing Centre. University of Toronto; Writing a Literature Review. Academic Skills Centre. University of Canberra.

Writing Tip

Break Out of Your Disciplinary Box!

Thinking interdisciplinarily about a research problem can be a rewarding exercise in applying new ideas, theories, or concepts to an old problem. For example, what might cultural anthropologists say about the continuing conflict in the Middle East? In what ways might geographers view the need for better distribution of social service agencies in large cities differently than social workers might? You don’t want to substitute studies conducted in other fields of study for a thorough review of core research literature in your discipline. However, particularly in the social sciences, thinking about research problems from multiple vectors is a key strategy for finding new solutions to a problem or gaining a new perspective. Consult with a librarian about identifying research databases in other disciplines; almost every field of study has at least one comprehensive database devoted to indexing its research literature.

Frodeman, Robert. The Oxford Handbook of Interdisciplinarity . New York: Oxford University Press, 2010.

Another Writing Tip

Don't Just Review for Content!

While conducting a review of the literature, maximize the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what scholars are saying, but how they are saying it. Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?

When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and constructed because it establishes a means for developing more substantial analysis and interpretation of the research problem.

Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998.

Yet Another Writing Tip

When Do I Know I Can Stop Looking and Move On?

Here are several strategies you can utilize to assess whether you've thoroughly reviewed the literature:

  • Look for repeating patterns in the research findings . If the same thing is being said, just by different people, then this likely demonstrates that the research problem has hit a conceptual dead end. At this point consider: Does your study extend current research? Does it forge a new path? Or does it merely add more of the same thing being said?
  • Look at the sources the authors cite in their work . If you begin to see the same researchers cited again and again, then this is often an indication that no new ideas have been generated to address the research problem.
  • Search Google Scholar to identify who has subsequently cited leading scholars already identified in your literature review. This is called citation tracking, and there are a number of sources that can help you identify who has cited whom, particularly scholars from outside of your discipline. Here again, if the same authors are being cited again and again, this may indicate that no new literature has been written on the topic (a small counting sketch of this check follows below).

Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: Sage, 2016; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016.
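To make the "same names keep coming up" check above concrete, here is a minimal, purely illustrative sketch (not drawn from the sources cited above). It assumes you have jotted down, for each article you read, the authors it cites; the names below are hypothetical placeholders.

```python
# Minimal sketch of the "same authors cited again and again" saturation check.
# The author lists are hypothetical; in practice you would pull them from your
# notes or from your citation manager's export.
from collections import Counter

cited_authors_per_article = [
    ["Smith", "Jones", "Lee"],      # authors cited by article 1
    ["Smith", "Garcia", "Jones"],   # authors cited by article 2
    ["Jones", "Smith", "Patel"],    # authors cited by article 3
]

counts = Counter(name for article in cited_authors_per_article for name in article)

# Names cited by nearly every article you have read suggest you have found the
# core of the conversation -- and possibly the point of saturation.
threshold = len(cited_authors_per_article) - 1
recurring = [name for name, n in counts.items() if n >= threshold]
print("Frequently recurring authors:", recurring)
```

If the recurring list stops changing as you add new articles, that is one rough signal that you can stop searching and move on.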


Purdue Online Writing Lab (OWL), College of Liberal Arts

Writing a Literature Review


A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from different fields or use a variety of research methods, you can group and compare them by approach. For example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography (see our page on this type of document ). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often). A rough counting sketch follows this list.
  • Read more about synthesis here.
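As a rough, purely illustrative way to run the "one source per paragraph" check mentioned in the list above, the sketch below counts distinct parenthetical author-year citations in each paragraph of a draft. The file name is hypothetical, and the pattern assumes simplified APA-style citations, so treat it as a heuristic rather than a real citation parser.

```python
# Rough sketch: flag paragraphs of a draft that cite only one source.
# Assumes simplified APA-style in-text citations such as (Smith, 2020),
# (Smith & Jones, 2020) or (Smith et al., 2020); the file name is hypothetical.
import re

with open("lit_review_draft.txt", encoding="utf-8") as f:
    draft = f.read()

citation = re.compile(
    r"\(([A-Z][\w'-]+)(?:\s(?:&|and)\s[A-Z][\w'-]+|\set\sal\.)?,\s\d{4}\)"
)

for i, paragraph in enumerate(draft.split("\n\n"), start=1):
    sources = set(citation.findall(paragraph))
    if len(sources) == 1:
        print(f"Paragraph {i}: only one source cited -- summary rather than synthesis?")
```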

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.

Grad Coach

How To Write An A-Grade Literature Review

3 straightforward steps (with examples) + free template.

By: Derek Jansen (MBA) | Expert Reviewed By: Dr. Eunice Rautenbach | October 2019

Quality research is about building onto the existing work of others , “standing on the shoulders of giants”, as Newton put it. The literature review chapter of your dissertation, thesis or research project is where you synthesise this prior work and lay the theoretical foundation for your own research.

Long story short, this chapter is a pretty big deal, which is why you want to make sure you get it right . In this post, I’ll show you exactly how to write a literature review in three straightforward steps, so you can conquer this vital chapter (the smart way).

Overview: The Literature Review Process

  • Understanding the “ why “
  • Finding the relevant literature
  • Cataloguing and synthesising the information
  • Outlining & writing up your literature review
  • Example of a literature review

But first, the “why”…

Before we unpack how to write the literature review chapter, we’ve got to look at the why . To put it bluntly, if you don’t understand the function and purpose of the literature review process, there’s no way you can pull it off well. So, what exactly is the purpose of the literature review?

Well, there are (at least) four core functions:

  • For you to gain an understanding (and demonstrate this understanding) of where the research is at currently, what the key arguments and disagreements are.
  • For you to identify the gap(s) in the literature and then use this as justification for your own research topic.
  • To help you build a conceptual framework for empirical testing (if applicable to your research topic).
  • To inform your methodological choices and help you source tried and tested questionnaires (for interviews ) and measurement instruments (for surveys ).

Most students understand the first point but don’t give any thought to the rest. To get the most from the literature review process, you must keep all four points front of mind as you review the literature (more on this shortly), or you’ll land up with a wonky foundation.

Okay – with the why out the way, let’s move on to the how . As mentioned above, writing your literature review is a process, which I’ll break down into three steps:

  • Finding the most suitable literature
  • Understanding , distilling and organising the literature
  • Planning and writing up your literature review chapter

Importantly, you must complete steps one and two before you start writing up your chapter. I know it’s very tempting, but don’t try to kill two birds with one stone and write as you read. You’ll invariably end up wasting huge amounts of time re-writing and re-shaping, or you’ll just land up with a disjointed, hard-to-digest mess . Instead, you need to read first and distil the information, then plan and execute the writing.


Step 1: Find the relevant literature

Naturally, the first step in the literature review journey is to hunt down the existing research that’s relevant to your topic. While you probably already have a decent base of this from your research proposal , you need to expand on this substantially in the dissertation or thesis itself.

Essentially, you need to be looking for any existing literature that potentially helps you answer your research question (or develop it, if that’s not yet pinned down). There are numerous ways to find relevant literature, but I’ll cover my top four tactics here. I’d suggest combining all four methods to ensure that nothing slips past you:

Method 1 – Google Scholar Scrubbing

Google’s academic search engine, Google Scholar , is a great starting point as it provides a good high-level view of the relevant journal articles for whatever keyword you throw at it. Most valuably, it tells you how many times each article has been cited, which gives you an idea of how credible (or at least, popular) it is. Some articles will be free to access, while others will require an account, which brings us to the next method.

Method 2 – University Database Scrounging

Generally, universities provide students with access to an online library, which provides access to many (but not all) of the major journals.

So, if you find an article using Google Scholar that requires paid access (which is quite likely), search for that article in your university’s database – if it’s listed there, you’ll have access. Note that, generally, the search engine capabilities of these databases are poor, so make sure you search for the exact article name, or you might not find it.

Method 3 – Journal Article Snowballing

At the end of every academic journal article, you’ll find a list of references. As with any academic writing, these references are the building blocks of the article, so if the article is relevant to your topic, there’s a good chance a portion of the referenced works will be too. Do a quick scan of the titles and see what seems relevant, then search for the relevant ones in your university’s database.

Method 4 – Dissertation Scavenging

Similar to Method 3 above, you can leverage other students’ dissertations. All you have to do is skim through literature review chapters of existing dissertations related to your topic and you’ll find a gold mine of potential literature. Usually, your university will provide you with access to previous students’ dissertations, but you can also find a much larger selection in the following databases:

  • Open Access Theses & Dissertations
  • Stanford SearchWorks

Keep in mind that dissertations and theses are not as academically sound as published, peer-reviewed journal articles (because they’re written by students, not professionals), so be sure to check the credibility of any sources you find using this method. You can do this by assessing the citation count of any given article in Google Scholar. If you need help with assessing the credibility of any article, or with finding relevant research in general, you can chat with one of our Research Specialists .

Alright – with a good base of literature firmly under your belt, it’s time to move onto the next step.


Step 2: Log, catalogue and synthesise

Once you’ve built a little treasure trove of articles, it’s time to get reading and start digesting the information – what does it all mean?

While I present steps one and two (hunting and digesting) as sequential, in reality, it’s more of a back-and-forth tango – you’ll read a little , then have an idea, spot a new citation, or a new potential variable, and then go back to searching for articles. This is perfectly natural – through the reading process, your thoughts will develop , new avenues might crop up, and directional adjustments might arise. This is, after all, one of the main purposes of the literature review process (i.e. to familiarise yourself with the current state of research in your field).

As you’re working through your treasure chest, it’s essential that you simultaneously start organising the information. There are three aspects to this:

  • Logging reference information
  • Building an organised catalogue
  • Distilling and synthesising the information

I’ll discuss each of these below:

2.1 – Log the reference information

As you read each article, you should add it to your reference management software. I usually recommend Mendeley for this purpose (see the Mendeley 101 video below), but you can use whichever software you’re comfortable with. Most importantly, make sure you load EVERY article you read into your reference manager, even if it doesn’t seem very relevant at the time.

2.2 – Build an organised catalogue

In the beginning, you might feel confident that you can remember who said what, where, and what their main arguments were. Trust me, you won’t. If you do a thorough review of the relevant literature (as you must!), you’re going to read many, many articles, and it’s simply impossible to remember who said what, when, and in what context . Also, without the bird’s eye view that a catalogue provides, you’ll miss connections between various articles, and have no view of how the research developed over time. Simply put, it’s essential to build your own catalogue of the literature.

I would suggest using Excel to build your catalogue, as it allows you to run filters, colour code and sort – all very useful when your list grows large (which it will). How you lay your spreadsheet out is up to you, but I’d suggest you have the following columns (at minimum):

  • Author, date, title – Start with three columns containing this core information. This will make it easy for you to search for titles with certain words, order research by date, or group by author.
  • Categories or keywords – You can either create multiple columns, one for each category/theme and then tick the relevant categories, or you can have one column with keywords.
  • Key arguments/points – Use this column to succinctly convey the essence of the article, the key arguments and implications thereof for your research.
  • Context – Note the socioeconomic context in which the research was undertaken. For example, US-based, respondents aged 25-35, lower-income, etc. This will be useful for making an argument about gaps in the research.
  • Methodology – Note which methodology was used and why. Also, note any issues you feel arise due to the methodology. Again, you can use this to make an argument about gaps in the research.
  • Quotations – Note down any quoteworthy lines you feel might be useful later.
  • Notes – Make notes about anything not already covered. For example, linkages to or disagreements with other theories, questions raised but unanswered, shortcomings or limitations, and so forth.

If you’d like, you can try out our free catalog template here (see screenshot below), or start your own version from the minimal sketch that follows.

[Screenshot: Excel literature review catalogue template]
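If you would rather build the catalogue yourself than use a template, here is a minimal sketch of the same idea using pandas. It is an illustration, not the Grad Coach template: the sample row, file name, and keywords are hypothetical, and writing .xlsx files requires the openpyxl package.

```python
# Minimal sketch of a literature catalogue with the columns suggested above.
# The sample row is hypothetical; install pandas and openpyxl before running.
import pandas as pd

columns = [
    "Author", "Date", "Title", "Categories/Keywords",
    "Key arguments/points", "Context", "Methodology", "Quotations", "Notes",
]

catalogue = pd.DataFrame(
    [{
        "Author": "Smith, J.",
        "Date": "2021",
        "Title": "A hypothetical study of X",
        "Categories/Keywords": "theme A; theme B",
        "Key arguments/points": "Argues that ...",
        "Context": "US-based, respondents aged 25-35, lower-income",
        "Methodology": "Survey (n = 200)",
        "Quotations": "\"...\"",
        "Notes": "Disagrees with Jones (2019) on ...",
    }],
    columns=columns,
)

# Save as a spreadsheet so you can filter, colour-code and sort as your list grows.
catalogue.to_excel("literature_catalogue.xlsx", index=False)
```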

2.3 – Digest and synthesise

Most importantly, as you work through the literature and build your catalogue, you need to synthesise all the information in your own mind – how does it all fit together? Look for links between the various articles and try to develop a bigger picture view of the state of the research. Some important questions to ask yourself are:

  • What answers does the existing research provide to my own research questions ?
  • Which points do the researchers agree (and disagree) on?
  • How has the research developed over time?
  • Where do the gaps in the current research lie?

To help you develop a big-picture view and synthesise all the information, you might find mind mapping software such as Freemind useful. Alternatively, if you’re a fan of physical note-taking, investing in a large whiteboard might work for you.

Mind mapping is a useful way to plan your literature review.

Step 3: Outline and write it up!

Once you’re satisfied that you have digested and distilled all the relevant literature in your mind, it’s time to put pen to paper (or rather, fingers to keyboard). There are two steps here – outlining and writing:

3.1 – Draw up your outline

Having spent so much time reading, it might be tempting to just start writing up without a clear structure in mind. However, it’s critically important to decide on your structure and develop a detailed outline before you write anything. Your literature review chapter needs to present a clear, logical and easy-to-follow narrative – and that requires some planning. Don’t try to wing it!

Naturally, you won’t always follow the plan to the letter, but without a detailed outline, you’re more than likely going to end up with a disjointed pile of waffle , and then you’re going to spend a far greater amount of time re-writing, hacking and patching. The adage, “measure twice, cut once” is very suitable here.

In terms of structure, the first decision you’ll have to make is whether you’ll lay out your review thematically (into themes) or chronologically (by date/period). The right choice depends on your topic, research objectives and research questions, which we discuss in this article .

Once that’s decided, you need to draw up an outline of your entire chapter in bullet point format. Try to get as detailed as possible, so that you know exactly what you’ll cover where, how each section will connect to the next, and how your entire argument will develop throughout the chapter. Also, at this stage, it’s a good idea to allocate rough word count limits for each section, so that you can identify word count problems before you’ve spent weeks or months writing!

PS – check out our free literature review chapter template…

3.2 – Get writing

With a detailed outline at your side, it’s time to start writing up (finally!). At this stage, it’s common to feel a bit of writer’s block and find yourself procrastinating under the pressure of finally having to put something on paper. To help with this, remember that the objective of the first draft is not perfection – it’s simply to get your thoughts out of your head and onto paper, after which you can refine them. The structure might change a little, the word count allocations might shift and shuffle, and you might add or remove a section – that’s all okay. Don’t worry about all this on your first draft – just get your thoughts down on paper.


Once you’ve got a full first draft (however rough it may be), step away from it for a day or two (longer if you can) and then come back to it with fresh eyes. Pay particular attention to the flow and narrative – does it all fit together and flow from one section to the next smoothly? Now’s the time to improve the linkage from each section to the next, tighten up the writing to be more concise, trim down the word count and sand it down into a more digestible read.

Once you’ve done that, give your writing to a friend or colleague who is not a subject matter expert and ask them if they understand the overall discussion. The best way to assess this is to ask them to explain the chapter back to you. This technique will give you a strong indication of which points were clearly communicated and which weren’t. If you’re working with Grad Coach, this is a good time to have your Research Specialist review your chapter.

Finally, tighten it up and send it off to your supervisor for comment. Some might argue that you should be sending your work to your supervisor sooner than this (indeed your university might formally require this), but in my experience, supervisors are extremely short on time (and often patience), so, the more refined your chapter is, the less time they’ll waste on addressing basic issues (which you know about already) and the more time they’ll spend on valuable feedback that will increase your mark-earning potential.

Literature Review Example

In the video below, we unpack an actual literature review so that you can see how all the core components come together in reality.

Let’s Recap

In this post, we’ve covered how to research and write up a high-quality literature review chapter. Let’s do a quick recap of the key takeaways:

  • It is essential to understand the WHY of the literature review before you read or write anything. Make sure you understand the 4 core functions of the process.
  • The first step is to hunt down the relevant literature . You can do this using Google Scholar, your university database, the snowballing technique and by reviewing other dissertations and theses.
  • Next, you need to log all the articles in your reference manager , build your own catalogue of literature and synthesise all the research.
  • Following that, you need to develop a detailed outline of your entire chapter – the more detail the better. Don’t start writing without a clear outline (on paper, not in your head!)
  • Write up your first draft in rough form – don’t aim for perfection. Remember, done beats perfect.
  • Refine your second draft and get a layman’s perspective on it . Then tighten it up and submit it to your supervisor.



Nature Career Feature, 04 December 2020 (Correction 09 December 2020)

How to write a superb literature review

Andy Tay is a freelance writer based in Singapore.

Literature reviews are important resources for scientists. They provide historical context for a field while offering opinions on its future trajectory. Creating them can provide inspiration for one’s own research, as well as some practice in writing. But few scientists are trained in how to write a review — or in what constitutes an excellent one. Even picking the appropriate software to use can be an involved decision (see ‘Tools and techniques’). So Nature asked editors and working scientists with well-cited reviews for their tips.

doi: https://doi.org/10.1038/d41586-020-03422-x

Interviews have been edited for length and clarity.


UConn Library

Literature Review: The What, Why and How-to Guide — Introduction


What are Literature Reviews?

So, what is a literature review? "A literature review is an account of what has been published on a topic by accredited scholars and researchers. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries." Taylor, D.  The literature review: A few tips on conducting it . University of Toronto Health Sciences Writing Centre.

Goals of Literature Reviews

What are the goals of creating a Literature Review? A literature review could be written to accomplish different aims:

  • To develop a theory or evaluate an existing theory
  • To summarize the historical or existing state of a research topic
  • To identify a problem in a field of research

Baumeister, R. F., & Leary, M. R. (1997). Writing narrative literature reviews .  Review of General Psychology , 1 (3), 311-320.

What kinds of sources require a Literature Review?

  • A research paper assigned in a course
  • A thesis or dissertation
  • A grant proposal
  • An article intended for publication in a journal

All these instances require you to collect what has been written about your research topic so that you can demonstrate how your own research sheds new light on the topic.

Types of Literature Reviews

What kinds of literature reviews are written?

Narrative review: The purpose of this type of review is to describe the current state of the research on a specific topic/research area and to offer a critical analysis of the literature reviewed. Studies are grouped by research/theoretical categories, and themes and trends, strengths and weaknesses, and gaps are identified. The review ends with a conclusion section that summarizes the findings regarding the state of the research on the specific topic, the gaps identified, and, if applicable, explains how the author's research will address the gaps identified in the review and expand knowledge on the topic reviewed.

  • Example : Predictors and Outcomes of U.S. Quality Maternity Leave: A Review and Conceptual Framework:  10.1177/08948453211037398  

Systematic review : "The authors of a systematic review use a specific procedure to search the research literature, select the studies to include in their review, and critically evaluate the studies they find." (p. 139). Nelson, L. K. (2013). Research in Communication Sciences and Disorders . Plural Publishing.

  • Example : The effect of leave policies on increasing fertility: a systematic review:  10.1057/s41599-022-01270-w

Meta-analysis : "Meta-analysis is a method of reviewing research findings in a quantitative fashion by transforming the data from individual studies into what is called an effect size and then pooling and analyzing this information. The basic goal in meta-analysis is to explain why different outcomes have occurred in different studies." (p. 197). Roberts, M. C., & Ilardi, S. S. (2003). Handbook of Research Methods in Clinical Psychology . Blackwell Publishing.

  • Example : Employment Instability and Fertility in Europe: A Meta-Analysis:  10.1215/00703370-9164737
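To make the idea of transforming study results into effect sizes and pooling them a little more concrete, here is a minimal, purely illustrative sketch of fixed-effect inverse-variance pooling. The effect sizes and standard errors are hypothetical, and real meta-analyses involve many further steps (study selection, heterogeneity tests, random-effects models, and so on).

```python
# Minimal sketch of fixed-effect, inverse-variance pooling of effect sizes.
# The (effect_size, standard_error) pairs are hypothetical values.
studies = [
    (0.30, 0.10),
    (0.45, 0.15),
    (0.20, 0.08),
]

# More precise studies (smaller standard errors) receive larger weights.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled:.3f} (SE = {pooled_se:.3f})")
```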

Meta-synthesis : "Qualitative meta-synthesis is a type of qualitative study that uses as data the findings from other qualitative studies linked by the same or related topic." (p.312). Zimmer, L. (2006). Qualitative meta-synthesis: A question of dialoguing with texts .  Journal of Advanced Nursing , 53 (3), 311-318.

  • Example : Women’s perspectives on career successes and barriers: A qualitative meta-synthesis:  10.1177/05390184221113735

Literature Reviews in the Health Sciences

  • UConn Health subject guide on systematic reviews Explanation of the different review types used in health sciences literature as well as tools to help you find the right review type

University of Texas Libraries

Literature Reviews

Steps in the Literature Review Process

  • What is a literature review?
  • Define your research question
  • Determine inclusion and exclusion criteria
  • Choose databases and search
  • Review Results
  • Synthesize Results
  • Analyze Results
  • Librarian Support
  • You may need to do some exploratory searching of the literature to get a sense of scope, to determine whether you need to narrow or broaden your focus
  • Identify databases that provide the most relevant sources, and identify relevant terms (controlled vocabularies) to add to your search strategy
  • Finalize your research question
  • Think about relevant dates, geographies (and languages), methods, and conflicting points of view
  • Conduct searches in the published literature via the identified databases
  • Check to see if this topic has been covered in other disciplines' databases
  • Examine the citations of on-point articles for keywords, authors, and previous research (via references) and cited reference searching.
  • Save your search results in a citation management tool (such as Zotero, Mendeley or EndNote)
  • De-duplicate your search results (a small de-duplication sketch appears below)
  • Make sure that you've found the seminal pieces -- they have been cited many times, and their work is considered foundational 
  • Check with your professor or a librarian to make sure your search has been comprehensive
  • Evaluate the strengths and weaknesses of individual sources and evaluate for bias, methodologies, and thoroughness
  • Group your results into an organizational structure that will support why your research needs to be done, or that provides the answer to your research question
  • Develop your conclusions
  • Are there gaps in the literature?
  • Where has significant research taken place, and who has done it?
  • Is there consensus or debate on this topic?
  • Which methodological approaches work best?
  • For example: Background, Current Practices, Critics and Proponents, Where/How this study will fit in 
  • Organize your citations and focus on your research question and pertinent studies
  • Compile your bibliography

Note: The first four steps are the best points at which to contact a librarian. Your librarian can help you determine the best databases to use for your topic, assess scope, and formulate a search strategy.
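As promised in the list above, here is a minimal, purely illustrative sketch of de-duplicating exported search results. The records and field names are hypothetical; in practice your citation manager already offers de-duplication, so this just shows the underlying idea of keying on DOI and falling back to a normalised title.

```python
# Minimal sketch of de-duplicating search results by DOI, falling back to title.
# The records and field names are hypothetical placeholders.
records = [
    {"doi": "10.1000/xyz123", "title": "Employment Instability and Fertility"},
    {"doi": "10.1000/xyz123", "title": "Employment instability and fertility"},
    {"doi": "", "title": "Predictors of Quality Maternity Leave"},
]

seen = set()
unique = []
for rec in records:
    # Prefer the DOI as the key; use a normalised title when no DOI is present.
    key = rec["doi"].lower() or " ".join(rec["title"].lower().split())
    if key not in seen:
        seen.add(key)
        unique.append(rec)

print(f"{len(records)} records -> {len(unique)} after de-duplication")
```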

Video Tutorials about Literature Reviews

This 4.5 minute video from Academic Education Materials has a Creative Commons License and a British narrator.



Brown University Library

Organizing and Creating Information

What Is a Literature Review?

This guide is designed to:

  • Identify the sections and purpose of a literature review in academic writing
  • Review practical strategies and organizational methods for preparing a literature review

A literature review is a summary and synthesis of scholarly research on a specific topic. It should answer questions such as:

  • What research has been done on the topic?
  • Who are the key researchers and experts in the field?
  • What are the common theories and methodologies?
  • Are there challenges, controversies, and contradictions?
  • Are there gaps in the research that your approach addresses?

The process of reviewing existing research allows you to fine-tune your research question and contextualize your own work. Preparing a literature review is a cyclical process. You may find that the research question you begin with evolves as you learn more about the topic.

Once you have defined your research question , focus on learning what other scholars have written on the topic.

In order to  do a thorough search of the literature  on the topic, define the basic criteria:

  • Databases and journals: Look at the  subject guide  related to your topic for recommended databases. Review the  tutorial on finding articles  for tips. 
  • Books: Search BruKnow, the Library's catalog. Steps to searching ebooks are covered in the  Finding Ebooks tutorial .
  • What time period should it cover? Is currency important?
  • Do I know of primary and secondary sources that I can use as a way to find other information?
  • What should I be aware of when looking at popular, trade, and scholarly resources ? 

One strategy is to review bibliographies for sources that relate to your interest. For more on this technique, look at the tutorial on finding articles when you have a citation .

Tip: Use a Synthesis Matrix

As you read sources, themes will emerge that will help you to organize the review. You can use a simple Synthesis Matrix to track your notes as you read. From this work, a concept map emerges that provides an overview of the literature and ways in which it connects. Working with Zotero to capture the citations, you build the structure for writing your literature review.
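As a concrete illustration of the synthesis matrix described above, the sketch below writes a small sources-by-themes grid to a CSV file that you can open in any spreadsheet. The sources, themes, and notes are hypothetical placeholders; the point is simply one row per source and one column per theme.

```python
# Minimal sketch of a synthesis matrix: one row per source, one column per theme.
# All names and notes below are hypothetical placeholders.
import csv

themes = ["Theme A: definitions", "Theme B: methods", "Theme C: gaps"]
matrix = {
    "Author (2019)": ["defines the key term", "survey, n = 200", "no longitudinal data"],
    "Author (2021)": ["contests the definition", "interviews", "small sample"],
}

with open("synthesis_matrix.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Source"] + themes)
    for source, notes in matrix.items():
        writer.writerow([source] + notes)
```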

How do I know when I am done?

A key indicator for knowing when you are done is running into the same articles and materials. With no new information being uncovered, you are likely exhausting your current search and should modify search terms or search different catalogs or databases. It is also possible that you have reached a point when you can start writing the literature review.

Tip: Manage Your Citations

These citation management tools also create citations, footnotes, and bibliographies with just a few clicks:

Zotero Tutorial

Endnote Tutorial

Your literature review should be focused on the topic defined in your research question. It should be written in a logical, structured way, maintain an objective perspective, and use a formal voice.

Review the Summary Table you created for themes and connecting ideas. Use the following guidelines to prepare an outline of the main points you want to make. 

  • Synthesize previous research on the topic.
  • Aim to include both summary and synthesis.
  • Include literature that supports your research question as well as that which offers a different perspective.
  • Avoid relying on one author or publication too heavily.
  • Select an organizational structure, such as chronological, methodological, and thematic.

The three elements of a literature review are introduction, body, and conclusion.

Introduction

  • Define the topic of the literature review, including any terminology.
  • Introduce the central theme and organization of the literature review.
  • Summarize the state of research on the topic.
  • Frame the literature review with your research question.
Body

  • Focus on ways to have the body of literature tell its own story. Do not add your own interpretations at this point.
  • Look for patterns and find ways to tie the pieces together.
  • Summarize instead of quote.
  • Weave the points together rather than list summaries of each source.
  • Include the most important sources, not everything you have read.

Conclusion

  • Summarize the review of the literature.
  • Identify areas of further research on the topic.
  • Connect the review with your research.
  • DeCarlo, M. (2018). 4.1 What is a literature review? In Scientific Inquiry in Social Work. Open Social Work Education. https://scientificinquiryinsocialwork.pressbooks.com/chapter/4-1-what-is-a-literature-review/
  • Literature Reviews (n.d.) https://writingcenter.unc.edu/tips-and-tools/literature-reviews/ Accessed Nov. 10, 2021


Content on this page adapted from: 

Frederiksen, L. and Phelps, S. (2017).   Literature Reviews for Education and Nursing Graduate Students.  Licensed CC BY 4.0


Harvey Cushing/John Hay Whitney Medical Library


YSN Doctoral Programs: Steps in Conducting a Literature Review


What is a literature review?

A literature review is an integrated analysis -- not just a summary -- of scholarly writings and other relevant evidence related directly to your research question. That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.

A literature review may be a stand alone work or the introduction to a larger research paper, depending on the assignment.  Rely heavily on the guidelines your instructor has given you.

Why is it important?

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Discovers relationships between research studies/ideas.
  • Identifies major themes, concepts, and researchers on a topic.
  • Identifies critical gaps and points of disagreement.
  • Discusses further research questions that logically come out of the previous studies.

APA7 Style resources


APA Style Blog - for those harder to find answers

1. Choose a topic. Define your research question.

Your literature review should be guided by your central research question.  The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

  • Make sure your research question is not too broad or too narrow.  Is it manageable?
  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor and your classmates.

2. Decide on the scope of your review

How many studies do you need to look at? How comprehensive should it be? How many years should it cover? 

  • This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. 

Where to find databases:

  • use the tabs on this guide
  • Find other databases in the Nursing Information Resources web page
  • More on the Medical Library web page
  • ... and more on the Yale University Library web page

4. Conduct your searches to find the evidence. Keep track of your searches.

  • Use the key words in your question, as well as synonyms for those words, as terms in your search. Use the database tutorials for help.
  • Save the searches in the databases. This saves time when you want to redo, or modify, the searches. Saved searches are also helpful to use as a guide if your searches are not finding any useful results.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
  • Ask your librarian for help at any time.
  • Use a citation manager, such as EndNote as the repository for your citations. See the EndNote tutorials for help.

Review the literature

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze its literature review, the samples and variables used, the results, and the conclusions.
  • Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips: 

  • Review the abstracts carefully.  
  • Keep careful notes so that you may track your thought processes during the research process.
  • Create a matrix of the studies for easy analysis and synthesis across all of them.

Northwestern University Libraries | Research Guides

Literature Reviews

What is a literature review?


A literature review is a review and synthesis of existing research on a topic or research question. A literature review is meant to analyze the scholarly literature, make connections across writings and identify strengths, weaknesses, trends, and missing conversations. A literature review should address different aspects of a topic as it relates to your research question. A literature review goes beyond a description or summary of the literature you have read. 

Learning more about how to do a literature review

  • Sage Research Methods Core Collection : SAGE Research Methods supports research at all levels by providing material to guide users through every step of the research process. SAGE Research Methods is the ultimate methods library with more than 1000 books, reference works, journal articles, and instructional videos by world-leading academics from across the social sciences, including the largest collection of qualitative methods books available online from any scholarly publisher. – Publisher



Auraria Library

Research Methods: Literature Reviews


A literature review involves researching, reading, analyzing, evaluating, and summarizing scholarly literature (typically journals and articles) about a specific topic. The results of a literature review may be an entire report or article, or may form part of an article, thesis, dissertation, or grant proposal. A literature review helps the author learn about the history and nature of their topic, and identify research gaps and problems.

Steps & Elements

Problem formulation

  • Determine your topic and its components by asking a question
  • Research: locate literature related to your topic to identify the gap(s) that can be addressed
  • Read: read the articles or other sources of information
  • Analyze: assess the findings for relevancy
  • Evaluate: determine how each article is relevant to your research and what its key findings are
  • Synthesize: write about the key findings and how they are relevant to your research

Elements of a Literature Review

  • Summarize the subject, issue, or theory under consideration, along with the objectives of the review
  • Divide works under review into categories (e.g. those in support of a particular position, those against, those offering alternative theories entirely)
  • Explain how each work is similar to and how it varies from the others
  • Conclude which pieces present the strongest arguments, are most convincing, and make the greatest contribution to the understanding and development of the area of research

Writing a Literature Review Resources

  • How to Write a Literature Review From the Wesleyan University Library
  • Write a Literature Review From the University of California Santa Cruz Library. A brief overview of the literature review, including a list of stages for writing one.
  • Literature Reviews From the University of North Carolina Writing Center. Detailed information about writing a literature review.
  • Undertaking a literature review: a step-by-step approach. Cronin, P., Ryan, F., & Coughlan, M. (2008). British Journal of Nursing, 17(1), 38-43.



Research Methods

  • Getting Started
  • Literature Review Research
  • Research Design
  • Research Design By Discipline
  • SAGE Research Methods
  • Teaching with SAGE Research Methods

Literature Review

  • What is a Literature Review?
  • What is NOT a Literature Review?
  • Purposes of a Literature Review
  • Types of Literature Reviews
  • Literature Reviews vs. Systematic Reviews
  • Systematic vs. Meta-Analysis

Literature Review  is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.

Also, we can define a literature review as the collected body of scholarly works related to a topic:

  • Summarizes and analyzes previous research relevant to a topic
  • Includes scholarly books and articles published in academic journals
  • Can be a separate scholarly paper or a section in a research paper

The objective of a literature review is to find previously published scholarly works relevant to a specific topic in order to:

  • Help gather ideas or information
  • Keep up to date on current trends and findings
  • Help develop new questions

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Helps focus your own research questions or problems.
  • Discovers relationships between research studies/ideas.
  • Suggests unexplored ideas or populations.
  • Identifies major themes, concepts, and researchers on a topic.
  • Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
  • Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
  • Indicates potential directions for future research.

All content in this section is from Literature Review Research from Old Dominion University 

Keep in mind that a literature review is NOT:

Not an essay 

Not an annotated bibliography  in which you summarize each article that you have reviewed.  A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.

Not a research paper, where you select resources to support one side of an issue versus another. A literature review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.

A literature review serves several purposes. For example, it

  • provides thorough knowledge of previous studies; introduces seminal works.
  • helps focus one’s own research topic.
  • identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
  • suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
  • identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
  • helps the researcher avoid repetition of earlier research.
  • suggests unexplored populations.
  • determines whether past studies agree or disagree; identifies controversy in the literature.
  • tests assumptions; may help counter preconceived ideas and remove unconscious bias.

As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide range of knowledge, from the conceptual level to practical documents for use in fieldwork, covering ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues that we should be aware of and consider as we go through our study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and it helps develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature."  Educational Researcher  36 (April 2007): 139-147.

All content in this section is from The Literature Review created by Dr. Robert Larabee USC

Robinson, P. and Lowe, J. (2015),  Literature reviews vs systematic reviews.  Australian and New Zealand Journal of Public Health, 39: 103-103. doi: 10.1111/1753-6405.12393


What's in the name? The difference between a Systematic Review and a Literature Review, and why it matters . By Lynn Kysh from University of Southern California


Systematic review or meta-analysis?

A  systematic review  answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.

A  meta-analysis  is the use of statistical methods to summarize the results of these studies.

Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:

  • clearly stated objectives with pre-defined eligibility criteria for studies
  • explicit, reproducible methodology
  • a systematic search that attempts to identify all studies
  • assessment of the validity of the findings of the included studies (e.g. risk of bias)
  • systematic presentation, and synthesis, of the characteristics and findings of the included studies

Not all systematic reviews contain meta-analysis. 

Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.  More information on meta-analyses can be found in  Cochrane Handbook, Chapter 9 .

A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies.  It is a systematic review that uses quantitative methods to synthesize and summarize the results.

An advantage of a meta-analysis is that it evaluates research findings in a more objective, quantitative way. Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted. In that case, an integrative review is an appropriate strategy. 
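To make the statistical summarizing concrete, a common fixed-effect (inverse-variance) approach weights each study's effect size by the inverse of its variance and pools them. The sketch below is a minimal illustration with made-up effect sizes and standard errors; it is not drawn from any real studies and omits the heterogeneity checks a real meta-analysis would require.

```python
# A minimal sketch of a fixed-effect (inverse-variance) meta-analysis.
# The effect sizes and standard errors are made-up illustrative values.
import math

effects = [0.42, 0.31, 0.55]       # hypothetical study effect sizes
std_errors = [0.10, 0.15, 0.12]    # hypothetical standard errors

weights = [1 / se**2 for se in std_errors]                     # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled estimate
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```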

Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.



A systematic review of educational online peer-review and assessment systems: charting the landscape

  • Development Article
  • Open access
  • Published: 19 March 2024


  • Dmytro Babik   ORCID: orcid.org/0000-0001-6464-6028 1 ,
  • Edward Gehringer 2 ,
  • Jennifer Kidd 3 ,
  • Kristine Sunday 3 ,
  • David Tinapple 4 &
  • Steven Gilbert 5  

Over the past two decades, there has been an explosion of innovation in software tools that encapsulate and expand the capabilities of the widely used student peer assessment. While the affordances and pedagogical impacts of traditional in-person, "paper-and-pencil" peer assessment have been studied extensively and are relatively well understood, computerized (online) peer assessment introduced not only shifts in scalability and efficiency, but also entirely new capabilities and forms of social learning interactions, instructor leverage, and distributed cognition that still need to be researched and systematized. Despite the ample research on traditional peer assessment and evidence of its efficacy, a common vocabulary and shared understanding of online peer-assessment system design, including the variety of methods, techniques, and implementations, are still missing. We present key findings of a comprehensive survey based on a systematic research framework for examining and generalizing affordances and constraints of online peer-assessment systems. This framework (a) provides a foundation of a design-science metatheory of online peer assessment, (b) helps structure the discussion of user needs and design options, and (c) informs educators and system design practitioners. We identified two major themes in existing and potential research—orientation towards scaffolded learning vs. exploratory learning and system maturity. We also outlined an agenda for future studies.


Introduction

Peer assessment has been widely used in pedagogical practice and intensively studied by education researchers since the 1970s. The seminal work by Topping ( 1998 ) that provided a comprehensive review of 31 studies and offered a typology of peer assessment has been cited by nearly every paper on the topic published since then (over 3270 citations shown by Google Scholar as of March 2024). Educational peer assessment (also called student peer review) was defined as “an arrangement in which individuals consider the amount, level, value, worth, quality, or success of the products or outcomes of learning of peers of similar status” (Topping, 1998 , p. 250) and typically entails “the quantitative evaluation of and qualitative feedback to a learners’ performance by another learner” (Patchan et al., 2017 , p. 2263). Also widely studied and closely associated with peer assessment practices is self-assessment, or self-evaluation, i.e., the evaluation of an artifact or a contribution by its own creator (Sargeant et al., 2008 ; Topping, 2003 ). The tandem of peer and self-assessment, used in conjunction with conventional instructor assessment, promises the benefits of high-level learning (Bostock, 2000 ; Boud & Falchikov, 1989 ; Falchikov & Boud, 1989 ; Sadler & Good, 2006 ; Sargeant et al., 2008 ).

Multiple studies and meta-reviews present extensive evidence of positive pedagogical outcomes of peer and self-assessment in various contexts (Chang et al., 2021 ; Double et al., 2020 ; Li et al., 2020 ; Misiejuk & Wasson, 2021 ). By introducing self- and peer assessment in face-to-face or online classrooms, instructors and course designers attempt to promote constructivist learning by (a) engaging learners in high-level cognitive activities of solving complex, open-ended, ill-structured problems and reviewing solutions (Wooley et al., 2008 ); (b) quantifying and understanding interactions among learners (Berg et al., 2006 ; Rotsaert et al., 2018 ; Willey & Gardner, 2009 ); (c) providing scalable, timely, and targeted feedback (Taraborelli, 2008 ); (d) creating sustainable, self-regulating, self-curating learning environments in which students are motivated to produce high-quality solutions and provide extensive, professional, and developmental feedback to each other (Baikadi et al., 2016 ; Rotsaert et al., 2018 ; Steffens, 2006 ).

With increasing use of computer information technologies (CITs) in education, computer-aided peer assessment has been praised as an enabler of pedagogies for developing both higher-level competencies (Bull & McCalla, 2002 ; Topping, 2005 ; Verma, 2015 ), and scalable methods for reliable assessment in large and online classes (Kulkarni et al., 2013 ; Raman & Joachims, 2014 ; Shah et al., 2013 ). CITs not only make peer assessment more efficient (i.e., simpler, faster, and cheaper), but also more effective and versatile (i.e., enabling many different types of interactions among peer learners as well as between learners and instructors that were not possible with face-to-face, paper-and-pencil peer- and self-assessment techniques) (Søndergaard & Mulder, 2012 ). In over 25 years that web-based CITs have been used in education, instructors, course designers, and software developers alike have made many attempts to computerize peer assessment by employing existing generic tools (e.g., Google Forms and Spreadsheets) or building specialized educational peer-review and assessment applications. These systems began emerging in the late 1990s and seem to have peaked between 2009 and 2018; by now some of the systems have reached maturity and are widely used, while others became defunct. The better known systems, built professionally and used in at least several universities, include Aropä (Hamer, 2006 ; Hamer et al., 2007 ), Calibrated Peer Review (Russell, 2001 ), CritViz (Tinapple et al., 2013 ), CrowdGrader (de Alfaro & Shavlovsky, 2014 ), Expertiza (Gehringer et al., 2007 ), Mobius SLIP (Babik et al., 2012 , 2017a , 2017b ), Peerceptiv/SWoRD (Cho & Schunn, 2007 ), peerScholar (Joordens et al., 2009 ), and SPARKPlus (Wu et al., 2010 ). These systems are used by educators in a wide variety of courses in STEM, liberal arts, and business disciplines, such as English and Writing (Cho & Schunn, 2007 ), Sciences (Russell, 2001 ), Computer Science (Søndergaard & Mulder, 2012 ), Business (Babik et al., 2017a , 2017b ), Visual Art and Design (Tinapple et al., 2013 ), among others (Alqassab et al., 2023 ).

Evidently, most of these peer assessment systems have been designed and built without first exploring other systems to see what methods, techniques, and implementations already exist, have been tested, and are appropriate for a particular pedagogical need, and how they could be improved. This likely happened because, despite the abundance of literature on peer assessment, there is little literature specifically focusing on the design of CIT-supported peer-assessment systems. Several attempts have been made to systematize CIT-supported peer assessment by reviewing research literature (Alqassab et al., 2023; Fu et al., 2019; Topping, 2023) or existing applications (Luxton-Reilly, 2009), developing inventories of peer assessment diversity (Gielen et al., 2011), and proposing classifications of peer assessment emphases (Søndergaard & Mulder, 2012) (see the Literature Review section for further details). However, the need for a comprehensive, systematic survey of educational CIT-enabled peer-assessment systems that explores and generalizes affordances and constraints of these systems based on a structured research framework still has not been addressed (Alqassab et al., 2023). Such a framework would inform new system designs, guide the improvement of existing ones, and help various categories of users navigate the much-needed innovations in this domain. The current study aimed to fill this gap by conducting such a survey and developing such a framework.

The purpose of this paper is to present our framework and summarize the key findings of our study. This framework serves as a foundation of a design-science metatheory of CIT-enabled peer assessment. It helps structure the discussion of user needs and design options that address these needs, and informs educators and system design practitioners.

For this study, we adopted the term online peer-review and assessment (OPRA) system and defined it as a web-based software application purposefully designed and developed to facilitate and automate the student self- and peer-review and assessment process. Specifically, OPRA systems support collecting submission artifacts, allocating artifacts to peer reviewers for critiquing and/or evaluating, setting deadlines, guiding and scaffolding reviewers' qualitative and quantitative feedback, aggregating quantitative evaluations, and conducting other components of the peer-review process. This definition covers a broad range of applications for educational peer review and assessment, also referred to in the literature as computer-supported (or digital) peer assessment systems. Many synonymous, albeit competing, terms found in the literature, such as "computer-mediated peer review (CMPR)" (Carlson & Smith, 2017), when applied to the current generation of web 2.0- and cloud-based technologies, appear outdated and limited in spectrum. Therefore, we propose the term "OPRA" as more general and current. Educational OPRA systems are a subset of a broader class of social computing systems that explicitly or implicitly involve peer review (including social networking and social media applications, such as wikis, blogs, and discussion forums) and collaborative editing and annotating (e.g., Google Docs, Hypothesis). OPRA systems, however, are distinct in being designed specifically for educational peer-review practice.

The aim of our systematic review and the proposed framework is to guide the future design of OPRA systems by addressing these important research questions: What is the current landscape of educational online peer-review and assessment technology? What essential common characteristics of the OPRA systems address user needs? How are these characteristics defined by pedagogical objectives? How does this technology support, advance, and transform the pedagogical process? How does the diversity of contexts in which OPRA systems are applied define the diversity of features these systems have?

We constructed this research framework for the systematic exploration of the current state of educational OPRA systems based on a rigorous literature review, examination of individual OPRA systems, and analysis of data collected through focus-group discussions, questionnaires, and interviews. We used this framework to categorize the functionality of OPRA systems developed since 2005, analyze design choices made by their originators, identify affordances and limitations of these choices, inform new development efforts, and suggest a future research agenda. We consciously excluded from our study the systems developed prior to the advent of Web 2.0, since they were limited compared to modern systems and have largely disappeared. In this paper, we did not aim to analyze and report all possible variations of the peer-review process; instead, we focused on a high-level view of the affordances enabled by these variations. We sought to aggregate and systematize knowledge of the core characteristics of many existing OPRA systems and to use them as illustrative examples to help users, designers, and researchers make informed decisions, rather than to present in detail any specific individual OPRA systems.

The authors of this paper are researchers and instructors from several universities, who have created their own peer assessment systems, and subsequently worked together under the umbrella of the NSF-sponsored PeerLogic Project (Gehringer, 2019 ) to pursue several goals: (1) to systematically explore the domain of CIT-enabled peer assessment systems; (2) to develop an arsenal of web services for a wide range of applications in such systems; and (3) to develop a meta-language and a data repository for in-depth research of student peer review. We address this study to researchers and practitioners interested in OPRA and motivated to advance its use. We invite educational assessment and learning analytics researchers, system designers, educational technologists, instructional designers, and instructors, who enter the field of OPRA, to use this review as a guide to various functionalities and design options. Researchers in learning analytics will discover what data can be extracted from OPRA systems and mined to demonstrate learning outcomes. Educational software designers will learn from what has been developed and implemented in the past and incorporate this knowledge in their future projects. Instructors applying peer-review pedagogy in their classes will find what systems and functionalities exist to make informed choices about what approaches would best meet their needs. Oftentimes, instructors turn with these questions to ed-tech specialists and instructional designers; thus, the latter may also find this work useful. Conversely, marketers of OPRA systems may identify the unique and differentiating features of their products and better inform their target users.

This paper is organized as follows. Sect. "Literature review" presents a literature review. In Sect. "Methodology", we define key terms and describe our methodology, the framework, and its application to a systematic survey of multiple existing educational OPRA systems. In Sect. "Results and discussion", we discuss the findings of our analysis, propose a general research agenda for future studies of OPRA, and summarize contributions and limitations of the paper.

Literature review

The focus of this study is information technology that enables the pedagogy of peer assessment. We explored existing literature to conduct a systematic analysis of the multitude, variety, and complexity of such implementations, functionalities, and design choices in OPRA. Several attempts have been made to survey computerized peer-assessment practices (Bouzidi & Jaillet, 2009 ; Chang et al., 2021 ; Davies, 2000 ; Doiron, 2003 ; Gikandi et al., 2011 ; Luxton-Reilly, 2009 ; Tenório et al., 2016 ; Topping, 2005 ), or some specific aspects of peer assessment, such as approaches to reliability and validity of peer evaluations (Gehringer, 2014 ; Misiejuk & Wasson, 2021 ; Patchan et al., 2017 ). However, meta-analysis of OPRA systems is complicated because their design space has high dimensionality; OPRA practices and designs vary across many disciplines in many different ways (Søndergaard & Mulder, 2012 ).

With the aim of describing the common, as well as unique, features of the OPRA systems, Luxton-Reilly ( 2009 ) conducted a systematic review of literature and identified 18 OPRA systems:

Six generic systems: Peer Grader, Web-SPA, OPAS, CeLS, PRAISE, and Aropä (Luxton-Reilly, 2009 , Table  1 , p. 213),

Seven domain-specific systems: Calibrated Peer Review (CPR), CAP, Praktomat, SWoRD, PeerWise, peerScholar, and an unnamed system by Sitthiworachart and Joy ( 2004 ) (Luxton-Reilly, 2009 , Table  2 , p. 217), and

Five context-specific systems: Peers, NetPeas, OASYS, PEARS, and an unnamed system by Wolfe ( 2004 ) (Luxton-Reilly, 2009 , Table  1 , p. 213).

[At the time of this writing, five systems from the Luxton-Reilly ( 2009 ) review, namely, PeerGrader, Web-SPA, OPAS, CeLS, and PRAISE, appeared to be defunct, not maintained or not extensively used (Purchase & Hamer, 2017 )]. Luxton-Reilly’s study identified the following common elements of the OPRA systems: anonymity , allocation and distribution , grading/marking criteria (rubrics), calculating peer grade/mark (aggregation), controls for quality of reviews , and workflow . The author noted a significant tradeoff between flexibility of an OPRA system (its ability to accommodate a variety of workflows and use cases) and its ease of use ; that is, a more flexible and effective system may be too complex to use for a user with weaker computer skills or a lack of understanding of the processes. This tradeoff highlights the need for a comprehensive analysis of various affordances of the OPRA systems. Luxton-Reilly ( 2009 ) also called for more usability studies and further evaluation studies of differences among the OPRA systems.

Gielen et al. ( 2011 ) updated Topping’s ( 1998 ) typology of peer assessment by reviewing studies on educational peer assessment published between 1997 and 2006. Specifically, they refined Topping’s variables, identified new variables, dimensions and values, and extended variable clustering proposed by Berg et al. ( 2006 ). They developed a classification framework called an inventory of peer assessment diversity that focused on the organizational aspects of the peer-assessment processes rather than use-case implementations in OPRA applications. While discussing the contact , time , and place characteristics of peer assessment, they concluded that the “internet-based learning environments are now often the preferred location for peer assessment” (Gielen et al., 2011 , p. 146).

Goldin et al. ( 2012 ) highlighted advantages of computer tools specially designed for peer review and assessment (which we define as OPRA systems) over general-purpose applications (including file sharing systems, online discussion boards, etc.), such as the ability to track the interactions of peers in greater depth and to manipulate specific components of peer interactions. The authors also emphasized many variations in the OPRA process (even within a narrow context, such as academic writing) and distinctions between peer review and similarly sounding activities, such as peer editing and peer evaluation.

Søndergaard and Mulder ( 2012 ) explored peer reviewing as a source of formative feedback in the more general context of collaborative learning, specifically in the context of collaborative learning in STEM disciplines. They identified the essential attributes for OPRA systems: automation (including anonymization and distribution of artifacts); simplicity (including easy-to-use, intuitive, and attractive user interface; integration with LMS; technical support); customizability (including handling any file format and creating individualized review rubrics); and accessibility (free, web-based, globally available, mobile). In addition, they discussed other interesting desirable attributes, such as rule-based review allocation (distribution); reviewer training/calibration ; similarity checking ; reporting tools (for review comparisons and instructor monitoring). Based on these attributes, Søndergaard and Mulder ( 2012 ) identified four approaches to formative peer review , namely training-oriented , similarity-checking-oriented , customization-oriented , and writing-skills-oriented , and illustrated implementations of these attributes using four OPRA systems, respectively Calibrated Peer Review, PeerMark, PRAZE, and Peerceptiv/SWoRD. To the best of our knowledge, to date, this is the only attempt to offer a taxonomy of OPRA systems based on a systematic analysis. Its limitation, however, is that the taxonomy framework was developed from only four systems, two of which are currently defunct.

Based on the analysis of five OPRA systems (Peerceptiv/SWoRD, peerScholar, PRAZE, OASIS, and Aropä), Purchase and Hamer ( 2017 ) identified the following important features of an effective OPRA system:

Anonymity ,

Peer allocation method ,

Submission method ,

Grading/marking criteria (specifying criteria or rubric),

Grade/mark calculation (aggregating peer evaluations into an attainment measure, i.e., a "grade", and a metric for evaluation discrepancies, inconsistencies, or the lack of reliability; a minimal aggregation sketch follows this list),

Backward feedback (author’s responses to peer reviews).
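As a minimal sketch of the grade/mark calculation feature listed above, one simple design option is to average the peer scores and report their spread as a rough discrepancy signal. The scores, scale, and choice of statistics are assumptions for illustration, not the method used by any particular system.

```python
# A minimal sketch of aggregating peer evaluations into a grade plus a
# simple discrepancy metric. Scores and the 0-100 scale are illustrative.
from statistics import mean, stdev

peer_scores = [82, 88, 79, 91]      # hypothetical scores from four reviewers

grade = mean(peer_scores)           # aggregated attainment measure ("grade")
discrepancy = stdev(peer_scores)    # spread among reviewers, a rough reliability signal

print(f"Peer grade: {grade:.1f}, reviewer disagreement (SD): {discrepancy:.1f}")
```

Real systems may instead use weighted means, medians, or reviewer-reputation adjustments; the point here is only that the feature pairs an attainment measure with a discrepancy metric.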

Purchase and Hamer ( 2017 ) concluded that an increasing number of instructors are willing to try peer assessment of complex, open-ended assignments in order to quickly provide more feedback to students and help them develop higher-level transferable skills, such as critical thinking, creativity, communication, and collaboration. They noted that, despite the common basic peer-review process, specifics of peer-review and assessment activities vary across different instructors; therefore, often instructors ask for specific unique features and design choices.

Wahid et al. ( 2016 ) attempted to provide a systematic analysis of the domain and form a general understanding by applying a cognitive mapping approach to find criteria for categorizing OPRA systems. They analyzed a sample of 17 systems, of which only 13 match our definition of OPRA; about half of the sample was represented by various OPRA research projects in Europe. The authors identified three dimensions for categorizing OPRA systems, namely system design, efficiency, and effectiveness. Within the system design dimension, they identified six features (anonymity, delivery, grading weightage, channel, review loop, collaboration). The dimensions of efficiency and effectiveness were not well defined, but efficiency included the sole feature of feedback timing, and effectiveness included rubrics, validation, reviewer calibration, and reverse reviews. Despite analyzing a fairly large sample of systems, Wahid et al. ( 2016 ) concluded that the majority of systems were designed similarly, differing only in a small number of features or the ways the features were implemented.

Carlson and Smith ( 2017 ) conducted in-depth comparison of two OPRA systems—Calibrated Peer Review (CPR) and Moodle’s Workshop—based on their set of four criteria for an effective OPRA system:

Does the system include a cohesive mental model that deconstructs the process and demonstrates staged problem solving?

Does the system include scaffolding to move students forward, both in task accomplishment and in enhanced independent learning?

Does the system encourage students to learn from peer feedback?

Does the system provide data/outcomes for instructors to assess both assignment-specific and programmatic gains for individuals and for larger aggregates?

Albeit comparing only two OPRA systems, this study was markedly different from other studies (typically focused either on pedagogical or technological aspects of peer review) in that it integrated these two views into a coherent analysis of how technological affordances and constraints translate into pedagogical effects. Carlson and Smith ( 2017 ) pointed out that, although the OPRA systems provide advantages over the “old-school, paper-and-pencil” process, using them is still “labor-intensive” and involves a steep learning curve for instructors because of the variety of available options. In addition, they suggested several ways to help students see the value of peer assessment. They cautioned against inflated expectations for digital applications and suggested that the true value of OPRA, as both a learning and an assessment tool, is in “ informating ” the pedagogies dealing with complex problem-solving competencies, rather than simply automating them.

Gehringer ( 2014 ) examined six OPRA systems (Calibrated Peer Review, CrowdGrader, Expertiza, Mobius SLIP, Peerceptiv/SWoRD, PeerWise) in order to catalog methods for improving peer-review quality (both qualitative critique content and quantitative evaluation accuracy). The identified methods are calibration , reputation , human (“manual”) and machine (“automatic”) meta-reviewing , rejoinders (feedback from the author to the reviewer), and different scales for evaluating critiques (cardinal/rating-based and ordinal/ranking-based). While this study contrasted quality-control strategies for reviewing, it did not attempt to differentiate methods for improving qualitative critique content and quantitative evaluation accuracy.

Patchan et al. ( 2017 ) also studied quality-control mechanisms for assessment accuracy and critique quality. They examined literature on about 13 OPRA systems and identified the following approaches to encourage participants to provide more valuable reviews:

For controlling accuracy of evaluations :

Reviewer weight/reputation systems/accuracy grades;

Calibration/training within the application;

For controlling quality of critiques (reviewer comments):

Minimum word count;

Non-anonymous reviewing;

Instructor oversight;

Rejoinders (aka back-review, reverse review, double-loop feedback, meta-reviewing);

Automated meta-review/feedback;

Training outside the application.

In addition, Patchan et al. ( 2017 ) conducted an experiment in Peerceptiv/SWoRD to test two hypotheses:

Direct accountability hypothesis : positive effects of holding participants accountable for the accuracy of evaluations;

Depth-of-processing hypothesis : positive effects of holding participants accountable for the quality of critiques.

In this experiment, they conceptualized holding participants accountable in three ways: (i) being “graded” only on the quality of critiques they give, (ii) being “graded” only on the accuracy of evaluations they give, and (iii) being “graded” on both the quality of critiques and the accuracy of evaluations. The experiment demonstrated that:

Both types of participants' perceptions about being held accountable (i) and (iii) positively affect evaluation accuracy; at the same time, participants' perceptions of (ii) do not significantly affect evaluation accuracy;

Similarly, both types of participants' perceptions (i) and (iii) positively affect critique quality (measured as approximated critique length and the number of longer comments); at the same time, participants' perceptions of (ii) do not significantly affect critique quality.

Thus, overall, this study did not support the direct accountability hypothesis but did support the depth-of-processing hypothesis . This study examined the quality-control mechanisms and offered a basic classification of this very important aspect of OPRA. In our framework, we extended and refined this classification.

Misiejuk and Wasson ( 2021 ) conducted a scoping review of studies exploring backward evaluation (or ‘the feedback that an author provides to a reviewer about the quality of the review’ per Luxton-Reilly ( 2009 )) published during 2000–2021. In this earliest literature review addressing backward evaluation in peer assessment, the authors focused specifically on the characteristics of the empirical studies rather than on user needs and designs of this feature in various OPRA systems. They also pointed out the variety and diversity of terminology and suggested a need to establish a common vocabulary to describe various aspects of OPRA processes and systems.

Attempts have also been made to create generalized models of peer review and assessment that could guide the design of OPRA systems. For example, Millard et al. ( 2007 ) and Millard et al. ( 2008 ) analyzed various peer-review processes (and in particular, reviewer allocation patterns) and proposed a canonical model integrating a set of peer-review cycles, each of which is defined by a set of peer-review transforms. Based on this model, they created a prototype of a generalist web-based OPRA system called PeerPigeon and a Domain Specific Language (DSL). The project, however, appears to have been discontinued. Pramudianto et al. ( 2016 ), Song et al. ( 2016 ), and Babik et al. ( 2018 ) described a generalized domain model of peer review and assessment intended for integrating data from multiple OPRA systems into large-scale research data sets. At the time of writing, this work was still largely in progress, and we found no other attempts to present generalized OPRA domain models in the literature.

In summary, the large amount of research and development on OPRA systems has created a need for a systematic, comprehensive, framework-based analysis of this domain. Previous studies of the OPRA systems have been limited in scope (the number of considered characteristics, factors, attributes, or variables), scale (the number of examined systems), and depth of analysis (the level of considered detail, structuration, conclusions, and generalizations drawn). This study is an effort to fill the gap.

Methodology

Definitions

The current peer-assessment literature lacks standard terminology. Different authors use diverse terms for the same concepts or the same terms for different concepts. For the purposes of this study, we used the following definitions:

A user is any person who interacts with an OPRA system.

An instructor is a user who sets up a peer-reviewed assignment.

A participant is a user who completes activities in the peer-review assignment. Typically, participants are students in a course.

An artifact is any kind of digital object that represents a solution to a problem or signifies completion of a task; for example, a document posted/submitted by a student to fulfill the requirements of an assignment.

A submission is an artifact or an outcome subjected to peer review.

An author is a participant or a team of participants that creates and posts a submission.

A review is a process and an artifact of completing a peer-review task; it includes a quantitative evaluation (assessment) and/or qualitative feedback (comments and/or critiques).

A reviewer is a participant or a team who reviews and assesses submissions authored by other peers.

An evaluation is a process and a result of a participant’s assigning some quantitative measure of attainment (“score”, “grade”, or “mark”) to an artifact.

A critique is a set of qualitative, textual, or verbal comments on an artifact; comments provided by a given reviewer to a given submission are referred to as a critique artifact .

A rejoinder is an author’s response to a received critique or evaluation; this response may take the form of a critique, evaluation, or both; in the peer assessment literature and in OPRA systems, rejoinder is also referred to as appeal (Wright et al., 2015 ), backward evaluation (Misiejuk & Wasson, 2021 ), backward feedback (Purchase & Hamer, 2017 ), back-review (Goldin, 2011 ), concordance or double-loop feedback (Babik et al., 2017a , 2017b ), or reaction (Babik, 2015 ).

Attainment is the degree to which a participant succeeded in solving a particular problem or in performing a specific complex task; attainment reflects the degree to which an artifact possesses some desired properties or values, such as efficacy, verity, accuracy, utility, or style.

In a typical peer assessment process, participants, as authors, create an artifact and make a submission (individually or as a team); the submission artifact is distributed (based on predetermined settings) to several participants, who now act as peer reviewers. Reviewers complete evaluations of submissions they received to review (typically using a preset scale and/or criteria/rubric), as well as provide critiques of the submissions; this step may also involve some form of self-assessment. The evaluation and critique data are then processed and distributed back to the authors. In many OPRA systems this basic process is followed by additional steps, such as rejoinder, or complemented by various treatments, such as training, calibration, or instructor feedback.

Note that we decompose reviewing into two separate activities: critiquing (i.e., providing qualitative, typically textual, feedback regarding attainment and possible improvements of the artifact), and evaluating (i.e., expressing judgment by assigning a quantitative measure of attainment to an artifact).
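To make this vocabulary concrete, the sketch below models a few of the entities defined above as simple Python data structures. The field names and types are assumptions chosen for illustration, not a schema used by any particular OPRA system.

```python
# A minimal, illustrative data model for the peer-assessment vocabulary
# defined above; field names and types are assumptions, not a real schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Participant:
    name: str                            # a student completing assignment activities

@dataclass
class Submission:
    authors: List[Participant]           # an author may be an individual or a team
    artifact_uri: str                    # link to the submitted artifact

@dataclass
class Review:
    reviewer: Participant
    submission: Submission
    evaluation: Optional[float] = None   # quantitative measure of attainment
    critique: str = ""                   # qualitative, textual feedback
    rejoinder: str = ""                  # author's response to this review
```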

Data collection and analysis

We applied a grounded theory approach to systematically construct our framework and develop a design-science meta-theory of OPRA systems through the analysis of data about existing systems (Babik et al., 2012 ; Denzin & Lincoln, 2011 ; Hevner et al., 2004 ; Martin & Turner, 1986 ; Strauss & Corbin, 1994 ). Since our intent was not to test a set of specific hypotheses, but rather to build a framework for exploring designs of a class of systems, we operated inductively. In summary, we began with our research questions, collected and examined qualitative data, identified and coded apparent repeated ideas, concepts or elements. As more data were collected and re-examined, codes were grouped into concepts, and then into categories. These categories became the basis for our framework.

Initial formal data collection was conducted through keyword search for journal publications and conference proceedings describing various educational OPRA systems. Keywords such as “peer assessment”, “peer evaluations”, “student peer review”, “student self-assessment” and “computer-based student peer assessment” were used to identify academic publications through Google Scholar. We examined abstracts to identify articles dealing specifically with educational online peer-review and assessment systems, rather than peer review in general. We paid particular attention to articles comparing different OPRA systems. We also examined the reference lists in each of the found articles to identify additional sources. Overall, over 50 publications for the period 2005–2019 were identified and reviewed. We systematically reviewed this literature and discussed it during the online conference calls of the Online Peer Assessment PI Forum. We also reviewed and discussed our experiences of examining and experimenting with multiple available OPRA systems, as well as designing and implementing our own systems. In addition, we networked and collaborated with many originators and users of the OPRA systems; we organized and invited originators of well-known OPRA systems to participate in the PI Forum online meetings to demonstrate their applications to interested academics and practitioners. These demos and discussions were documented (as typed notes and video recordings, including screencasts) and shared online for further review and analysis.

Based on these research activities, we compiled a list of 57 systems developed between 1995 and 2015 that conform to our definition of an educational OPRA system (Appendix A; the complete detailed metadata in a publicly shared Google spreadsheet can be accessed at https://shorturl.at/cjvB4 ). Peer-assessment systems prior to 1995 typically were desktop applications limited to use in local area networks (LANs), which we label "computer-assisted PA systems." Advances in CITs, such as the Web (1995), Web 2.0 (2002), and HTML 5 (2008), permitted the creation of truly online and interactive PA applications, which we defined as OPRA systems.

As we explored the identified OPRA systems, the recurring ideas, common patterns, and themes about user-system interactions, functionality, and design choices emerging through these demos and discussions, were coded as use cases . Use case refers to a given interaction between a user and a system needed to achieve a goal or satisfy a need (Jacobson, 1992 ). For example, the use case “Provide quantitative peer evaluation” requires a participant to enter a quantitative value for a reviewed submission with the goal of assessing it. We noted use cases common across multiple OPRA systems, as well as some unique use cases, pertinent only to some individual systems. To document objects, relationships, and use cases we identified as essential for the OPRA systems, we first created a concept map (see, for example, Fig.  1 ), and then applied systems analysis techniques to create use-case and class diagrams that were iteratively reviewed and refined by co-authors (see, for example, Babik et al., 2018 ).

Fig. 1 Example of a concept map of the online peer review and assessment domain

Next, we refined and validated the preliminary list of use cases through an informal focus-group discussion, in which instructors who practice peer assessment, described various user needs, situations, and scenarios that had occurred in their OPRA practice, such as collecting student work and assigning reviewers. In addition, we revisited academic papers describing various OPRA systems to examine how these user needs have been addressed by their designers. We applied concept mapping to visualize discovered use cases and to further group them into categories of user needs that the OPRA system must accommodate.

Classification framework and systematic survey of OPRA systems

We constructed a preliminary framework for analysis and classification of OPRA systems by organizing identified, formalized, and categorized user needs, use cases, features, and design options in four layers of abstraction from more specific to more general (Table  1 ). Categorized user needs and essential use cases (i.e., the use cases that describe only the minimum essential issues necessary to understand the required functionality) form the two layers of the problem domain (or implementation-independent layers ) of the framework, because they are determined by the needs of the users and are independent of any specific implementation or technology. These user needs and use cases apply to any OPRA system; in other words, a system is not an OPRA system unless it accommodates these needs and use cases. A given use case may be implemented in various systems differently, with varying design options. Therefore, functionality features and design options implemented in specific OPRA systems form the two layers of the solution domain (or implementation-dependent layers ). In organizing our framework as hierarchical layers, we follow generally accepted principles of systems analysis and design (Dennis et al., 2015 ). We focused on features relevant to peer review and assessment and left outside the scope of this study any features pertinent to any learning, content-management, or communication system (e.g., learning-object content management).

Visually, our framework is structured as hierarchically organized layers, where the top, most general, layer defines user needs , the second layer includes use cases , the third layer contains features , and the bottom, the most specific layer consists of specific design options (Table  1 ). (Note that such hierarchically structured frameworks are also used in other domains of information systems; see, for example, the National Institute of Standards and Technology’s Framework for Improving Critical Infrastructure Cybersecurity).

To illustrate the use of this framework, consider the following example (Table  1 ). Every OPRA system must accommodate the user need of “Eliciting evaluations and critiques”, i.e., for reviewers to input assessment data (quantitative or qualitative, structured or semi-structured). This user need creates two essential use cases—“Provide quantitative evaluation” and “Provide qualitative critiques”. To support the former use case, typically two features are required—a rubric (a set of evaluation criteria) and a scale. A rubric may be implemented as two design choices—as either holistic or specific (analytic) rubric. Similarly, the scale feature may be implemented as either cardinal (rating), ordinal (ranking), hybrid (combining rating and ranking), or some other “exotic” scale, such as “dividing a pie.”
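A compact way to see the four layers together is to restate this example as nested data, from user need down to design options. The sketch below simply encodes the example from the text; it is illustrative and not an excerpt from the framework's actual artifacts.

```python
# The "Eliciting evaluations and critiques" example expressed as nested data:
# user need -> use cases -> features -> design options. Purely illustrative.
framework_example = {
    "user_need": "Eliciting evaluations and critiques",
    "use_cases": {
        "Provide quantitative evaluation": {
            "rubric": ["holistic", "specific (analytic)"],
            "scale": ["cardinal (rating)", "ordinal (ranking)", "hybrid", "other"],
        },
        "Provide qualitative critiques": {},  # features not enumerated in the example
    },
}

# Example lookup: which design options exist for the 'scale' feature?
print(framework_example["use_cases"]["Provide quantitative evaluation"]["scale"])
```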

We validated our framework by applying the multi-case method (Stake, 2013 ). We designed a questionnaire combining closed-ended and open-ended items and distributed it via a Google Form to the originators of existing OPRA systems to collect structured and semi-structured data and to verify whether our framework fits their responses. We contacted originators of 23 currently used systems (40% of the total number of identified OPRA systems); 19 responses were received, of which 15 usable responses were selected (a response rate of 65%). The collected data on specific OPRA systems’ functionality and design choices were mapped to our framework and any inconsistencies or divergences in the framework were addressed. In addition, whenever possible, we conducted short teleconference interviews with the originators to obtain additional comments and suggestions. The authors of this paper also contributed data for the systems they originated. The assembled framework was also compared to previously published surveys of peer assessment to ensure commonality of terms and definitions (see the Literature Review section). After detailed analysis of 16 systems, we reached saturation in classification of user needs, use cases, features, and design options.

Our framework allowed us to systematically survey, compare, and analyze the ways in which various user needs are addressed in multiple OPRA systems through implementations of various features and design options. Based on this framework and using data obtained through our multiple case study, we identified six primary categories of user needs accommodated in OPRA systems. The compendium of our classification framework is presented in Table 2. This framework is not only complete in terms of well-defined layers of abstraction, but also extensible in the sense that additional use cases can be added as OPRA systems evolve over time. However, by no means is our list of features and design options exhaustive. OPRA systems (just as any other type of information system) continuously evolve—new technologies emerge, enabling new implementations, while some old technologies and implementations become obsolete. Therefore, we consider our framework to be extensible and invite users and designers of OPRA systems to contribute to its evolution.

Results and discussion

Major contributions and findings

Contributions

The major contribution of this study is the construction of the classification framework and its application to the systematic review and analysis of existing and emerging educational OPRA systems. This helps researchers and practitioners understand the current landscape of technologies supporting and transforming student peer assessment. Importantly, it helps educators, as well as course and system designers, make informed decisions about the available choices of applications to fulfill teaching and learning needs. Analysis based on this framework can help identify the major gaps in existing designs and suggest directions for improving existing and developing new OPRA systems that fit better in the broader and ever-evolving educational technology landscape.

Summary of findings

The diversity of functionalities in existing OPRA systems can satisfy a broad variety of pedagogical and administrative needs. However, this diversity also leads to a multitude of painful tradeoffs between flexibility, comprehensibility, and ease of use. Educational peer-review practices vary greatly across courses, institutions, and countries, so no single OPRA application can comprehensively satisfy all their needs. Thus, every stakeholder involved in design, implementation, and use of such systems should carefully consider the fit between users’ needs and a system’s affordances and constraints.

In summary, the current landscape of educational OPRA systems can be characterized by the following generalizations:

  • User needs are met by a wide variety of solutions;

  • Most solutions dictate a peer-review process that is more amenable to certain disciplines, pedagogies, and types of assignments than others; based on underlying design choices, OPRA systems can be generally described as leaning toward either scaffolded or exploratory peer learning;

  • Some systems offer a single model of the peer-review process, while others enable multiple models and may require instructors to assert their own pedagogy in determining the design of the peer-review process. Instructors new to peer assessment may appreciate the former, whereas instructors with greater expertise or established pedagogical practices may prefer the latter;

  • There is a broad spectrum of system maturity (i.e., the degree to which an OPRA system can satisfy diverse and conflicting user needs); as technology advances, new design ideas and new systems constantly emerge;

  • Although empirical studies provide some insight into the merits of various solutions, there are still many research opportunities to discover more effective models; in addition, there may be no single most effective model for all user needs;

  • No existing solution addresses all user needs.

Computerized systems are good at automating very standardized and uniform processes and making them more efficient. The need for variety and flexibility is an enemy of unification, automation, and, consequently, efficiency, but it creates richness, adaptability, and effectiveness. This is the dilemma OPRA system designers have yet to resolve, and our framework and review are meant to aid in this effort. Beyond the benefits of efficiency and scalability, OPRA systems give educators a new kind of leverage, enabling them to develop in their students a new kind of cognition and mentality: away from "studying for the test" and toward "building competency for life". OPRA benefits arise not only from being reviewed and receiving prompt and rich feedback but also from being constantly immersed in the practice of peer-reviewing and assessing at all education levels. The practice of peer-reviewing is not a means to the end of producing more feedback or arriving at a more accurate final score, but rather a learning outcome in itself. When a classroom activity is designed around peer review, students begin approaching their own work differently: less as a series of obstacles and more as an exploration of possibility. In this way, peer review becomes less of a tool and more of an environment that supports and stimulates exploratory learning.

Our analysis revealed two fundamental factors that determine the diversity of OPRA systems' functionalities and how they may fit in a particular educational context. The first factor can be broadly characterized as orientation towards scaffolded learning versus exploratory learning. The second factor can be described as the degree of system maturity.

Scaffolded learning versus exploratory learning

With regard to the first factor, generally speaking, all OPRA systems are used to encourage learning and develop higher-level competencies for dealing with complex, open-ended problems. However, the systems oriented toward scaffolded learning tend to favor better-defined, more-structured assignments aimed at assessing proficiency in a specific skill set, such as critical writing and programming, or solving a particular type of problem, usually with a preconceived proficient solution. This orientation leans towards using peer assessment to aid summative assessment and seeks to elicit primarily quantitative evaluations that are deemed to “accurately” assess student performance rather than to generate a volume of qualitative critiques. Therefore, systems with this orientation (and empirical research based on them) are concerned with issues of reliability and validity of peer evaluations. To improve reliability and validity, they tend to rely on purposeful allocation, analytic rubrics, rating scales, calibration (based on preconceived sample “correct” answers), as well as instructor grading, feedback, intervention, and censoring of “inaccurate” peer feedback.
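To make the calibration mechanism mentioned above concrete, the following is a minimal, hypothetical Python sketch of how a scaffolded-learning-oriented system might weight a reviewer's influence by comparing that reviewer's ratings of instructor-scored calibration submissions against the instructor's reference ratings. The function name, the tolerance parameter, and the linear decay are illustrative assumptions on our part, not the algorithm of CPR or any other specific system.

```python
# Hypothetical calibration weighting: all names and thresholds are illustrative
# assumptions, not taken from any specific OPRA system.

def calibration_weight(student_scores, reference_scores, tolerance=1.0):
    """Return a weight in [0, 1] reflecting how closely a reviewer's ratings of
    calibration submissions match the instructor's reference ratings."""
    if not reference_scores or len(student_scores) != len(reference_scores):
        raise ValueError("score lists must be non-empty and of equal length")
    # Mean absolute deviation between the reviewer's and the instructor's ratings.
    mad = sum(abs(s - r) for s, r in zip(student_scores, reference_scores)) / len(reference_scores)
    # Deviations within the tolerance earn full weight; larger ones decay linearly to zero.
    return max(0.0, 1.0 - max(0.0, mad - tolerance) / tolerance)

# Example: ratings of three calibration essays (4, 3, 5) against references (5, 3, 5)
# deviate by 0.33 on average, which is within tolerance, so the weight is 1.0.
print(calibration_weight([4, 3, 5], [5, 3, 5]))  # 1.0
```

A system could then multiply each peer rating by the reviewer's calibration weight when aggregating scores, which is one way (among many) that calibration can privilege evaluations closer to the instructor's reference.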

In contrast, OPRA systems with an orientation towards exploratory learning tend to promote less-defined and less-structured assignments and projects with no expected "ideal proficient" solution. Such systems aim at exposing learners to the ambiguity and uncertainty of the problem, stimulating their exploration and holistic understanding of the problem domain, and encouraging social-learning interactions and discussions among peers. Therefore, OPRA systems based on this orientation typically use random allocation and focus on qualitative critiques, holistic rubrics, and ranking scales. While providing capabilities for quantitative evaluations that may enable "peer grading" if desired, these systems focus on learning analytics that highlight weaknesses and gaps in students' shared understanding of a problem and surface emerging differences in opinions, perceptions, and approaches. These systems tend to lack features for "improving the accuracy" of quantitative evaluations and curation of peer critiques, but instead emphasize features that encourage self-curated and self-regulated social learning through imitation of successful examples (Bandura, 1986; Coleman et al., 1957; Rogers, 2005), proximal development (Vygotsky, 1980), critical dialogue, cultural consensus, and intersubjectivity (Matusov, 1996; Matusov & Marjanovic-Shane, 2017). Skeptics of this orientation often cite the "blind leading the blind" adage, suggesting that without a certain basic level of disciplinary knowledge and competency, participants' ability to self-regulate and self-curate cannot be trusted. Proponents of this orientation argue that although features such as calibration and analytic rubrics reduce evaluation inconsistency (or "inaccuracy") and foster authors' confidence in the reviewers' competence, when overused, they discourage divergent thinking, intellectual exploration, experimentation with provocative ideas, student agency, and creativity, while encouraging authors' and reviewers' conformity and the quest for the "right answer" and "better grade".

While we do not suggest that any OPRA system strictly adheres to one of these orientations, the combination of features in systems such as CPR, Peerceptiv, Expertiza, and SPARKPlus (Willey & Gardner, 2010) indicates their originators' orientation towards scaffolded learning, while the design of systems such as CritViz and Mobius SLIP appears to be oriented towards exploratory learning.

Our investigation shows that online peer review and assessment, like any other assessment process, is inherently value-laden and intersubjective. The means by which OPRA is conducted, and therefore the design of the system to support it, inevitably reflect the pedagogical values of the system designer. Moreover, the choice of the system should be primarily driven by the values and aims of the instructor. While we maintain that both system orientations are grounded in constructivist learning theories, they are grounded in them differently, in accordance with different epistemological orientations regarding knowledge; this warrants further examination of their ontological significance. A deep explanation of constructivist learning theories in the OPRA context is beyond the scope of this paper; however, we posit that both system orientations lean towards a constructivism holding that "all cognitive activity takes place within the experiential world of a goal-directed consciousness" (Von Glasersfeld, 1984, p. 10). Put another way, constructivism assumes that cognition organizes its experiential world by organizing itself. Consistent with constructivist pedagogy, both OPRA system orientations avail themselves as a more knowledgeable other (MKO), albeit a non-human one, suggesting a pedagogical model that emphasizes the gradual release of responsibility between the MKO and a learner.

In the case of the scaffolded learning orientation, scaffolding means that, while the "true" or "correct" problem solution is socially and culturally constructed and interpreted, it can ultimately be known. An OPRA system then enacts a type of epistemological determinism, in which the submissions and reviews are assessed against the "correct" solution determined by the educator through tools such as calibration, analytic rubrics, and quantitative summative assessment. Subsequently, while students engage in the "peer" component of learning, they do so in accordance with the knowledge claims of that instructor. Thus, what is valued as knowledge becomes strikingly visible within the system itself. By limiting what is possible or acceptable, the scaffolded-learning-oriented OPRA system defines knowledge by what it is not, simply ignoring or discouraging solutions and critiques that do not meet the given criteria. Such systems align with traditional assessment tools, such as tests, in that they determine what knowledge is worth knowing.

The OPRA systems with an exploratory learning orientation, conversely, lean towards an epistemological stance that knowledge is in a state of constant flux, generated and evolving through particular intersubjective interactions among learners and between learners and instructors in the peer-review process. One might align such an orientation with radical constructivism (Von Glasersfeld, 1984, 1995) and, in doing so, consider that "we can check our perceptions only by means of other perceptions" (Von Glasersfeld, 1984, p. 6). While some might be troubled by the potential for a relativist pedagogy to emerge out of such an open and exploratory approach to knowledge, it also suggests an alternative view in which troubling a representational view of reality might effectuate a "search for fitting ways of behaving and thinking" (Von Glasersfeld, 1984, p. 14). This interpretation suggests that instructors skeptical of prescriptive scaffolding practices, like grading rubrics or calibration exercises, may find benefit in OPRA as a process scaffold for socially constructed solutions to ill-structured problems.

System maturity

With regard to the second factor, we found that the degree of system maturity reflects the diversity and flexibility of an OPRA system's functionality (i.e., its ability to satisfy diverse and conflicting user needs) and generally correlates with the age of the system. Older and more mature systems (e.g., Peerceptiv, Expertiza, peerScholar, SPARKPlus) tend to have a greater variety of well-tested features, supported by extensive experimental research, and cater to a broader institutional audience. Less mature systems (e.g., CritViz, Eli Review, eMarking, Mechanical TA, Moodle's Workshop) usually focus on a specific pedagogical approach to peer review and assessment, with somewhat restrictive workflows and fewer options, and tend to serve niche users. Importantly, the relation between system maturity and feature diversity is not perfectly deterministic. Systems such as SPARKPlus, Peerceptiv, and PeerWise are fairly old and have a wide variety of features; in contrast, while CPR could be considered one of the most mature and widely used OPRA systems, it lacks the diversity and flexibility of features found in many younger systems. Moreover, systems now tend to evolve very quickly thanks to agile development practices. Therefore, by the time this paper is published, it is very likely that some of the newer systems listed here will have reached a higher degree of maturity. Equally likely, however, is that development of some of them will have been discontinued.

Future research opportunities and system design recommendations

Our analysis also serves to highlight exciting opportunities for future research and system design. Peer review and assessment have been extensively studied for several decades, but research on how technology-enabled processes affect pedagogical and administrative outcomes remains an emerging stream. One such opportunity is investigating the effects of participant anonymity on the OPRA process and outcomes. Anonymity has generally been considered a good remedy against various social and personal biases in evaluating and critiquing, as well as against adverse reviewer behaviors driven by these biases (e.g., retaliation against negatively toned critiques or favorable evaluation given to friends' submissions). Therefore, double-blind or single-blind review is the prevalent mode in practically all OPRA systems. At the same time, appropriate uses of identity disclosure (i.e., revealing authors' or reviewers' true identities) may serve as motivating factors for both authors and reviewers (Lin, 2018; Lu, 2011; Lu & Bol, 2007; Yu & Sung, 2016; Yu & Wu, 2011). Moreover, anonymity has been interpreted very narrowly, usually as authors' and reviewers' knowledge of each other's identity at the time of review. However, we found it to be a multifaceted aspect of peer review. For example, anonymity design also specifies whether identities remain hidden or are revealed after the peer-review process is complete, and whether identities are revealed only to the author and reviewers of a particular artifact or to the larger pool of other participants. Anonymity could thus be configured in multiple ways and could have a variety of effects. Research questions about the effects of these various aspects of anonymity and privacy in OPRA should be explored in the future.
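As an illustration of this multifaceted view of anonymity, the following hypothetical Python sketch encodes the facets discussed above as an explicit configuration object; the field names and the two preset policies are our own assumptions and do not describe any particular system's settings.

```python
# Hypothetical anonymity policy: field names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class AnonymityPolicy:
    author_visible_to_reviewer: bool = False   # False in author-anonymous (blind) setups
    reviewer_visible_to_author: bool = False   # False in double-blind setups
    reveal_after_completion: bool = False      # disclose identities once the review round closes
    reveal_to_whole_cohort: bool = False       # limit disclosure to the pair, or open it to all participants

DOUBLE_BLIND = AnonymityPolicy()
OPEN_AFTER_REVIEW = AnonymityPolicy(reveal_after_completion=True, reveal_to_whole_cohort=True)
```

Treating anonymity as a policy object rather than a single flag makes it straightforward to study each facet's effects independently.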

Future research may also explore the effects of various aspects of allocation, peer-review workflow, and quantitative evaluations on measures of the peer-review process and outcomes. One particularly important issue is how to motivate reviewers to provide deeper, richer, and more professional feedback. It would also be interesting to review and compare the algorithms for computing attainment, accuracy, reputation, and other metrics used in different OPRA systems.
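As one illustration of the kind of metric comparison we have in mind, here is a hypothetical Python sketch that operationalizes reviewer "accuracy" as the mean absolute deviation of a reviewer's ratings from the per-submission median of all peer ratings; real OPRA systems use different, often more sophisticated, algorithms, so this is only a baseline for comparison.

```python
# Hypothetical reviewer-accuracy metric: deviation from the per-submission median
# of peer ratings. An illustrative baseline, not any system's actual algorithm.
from statistics import median

def reviewer_accuracy(ratings_by_reviewer):
    """ratings_by_reviewer: {reviewer_id: {submission_id: rating}}.
    Returns {reviewer_id: mean absolute deviation from each submission's median rating}."""
    per_submission = {}
    for ratings in ratings_by_reviewer.values():
        for submission, rating in ratings.items():
            per_submission.setdefault(submission, []).append(rating)
    medians = {s: median(rs) for s, rs in per_submission.items()}
    return {
        reviewer: sum(abs(r - medians[s]) for s, r in ratings.items()) / len(ratings)
        for reviewer, ratings in ratings_by_reviewer.items()
        if ratings
    }

# r1 matches the medians exactly; r2 and r3 each deviate on one submission.
print(reviewer_accuracy({"r1": {"a": 4, "b": 5}, "r2": {"a": 2, "b": 5}, "r3": {"a": 4, "b": 3}}))
# {'r1': 0.0, 'r2': 1.0, 'r3': 1.0}
```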

As OPRA systems evolve to include various web-based tools for data visualization, it is worthwhile to investigate the effects of visualizations on participants' behavior. New types of digital visual representations of peer-review processes and outcomes can better depict the cognitive work, relationships, trends, and activities of the learners. Furthermore, these representations can be interactive and updated in real time, thus influencing individual and group learning behavior dynamics (Babik et al., 2017a, 2017b).

Another emerging opportunity is incorporating transferable micro-credentials ("digital badges") into the peer-review process. Micro-credentialing is a new trend in education technology that promises transformative changes in education (Abramovich et al., 2013; Carey, 2012; Casilli & Hickey, 2016). There is a synergy between OPRA and micro-credentialing: OPRA makes micro-credentialing more scalable, trustworthy, and versatile, whereas micro-credentialing allows peer-assessed learning outcomes to be conveyed beyond a single course. Credentials from peer assessment can be shared with anyone, providing credible documentation of a student's skills, learning experiences, and achievements to interested third parties, such as prospective employers. For these reasons, integrating OPRA and micro-credentialing is a logical next step for both technologies. Our study, however, found that only two systems (Mechanical TA and PeerWise) provide digital badges, and these are used only internally and are not transferable across platforms. A couple of other systems claimed to have micro-credentialing features in development (e.g., Expertiza and Mobius SLIP). It would be exciting to explore the effects of integrating micro-credentialing and OPRA on individual and group learning, as well as on institutional competitiveness.
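To illustrate what a transferable, peer-assessment-backed micro-credential record might contain, here is a hypothetical Python sketch; the fields loosely echo open badge metadata, but every name and value below is an assumption for illustration rather than the format used by Mechanical TA, PeerWise, or any other system.

```python
# Hypothetical micro-credential record backed by peer-assessment evidence.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PeerAssessedBadge:
    earner: str                 # student identifier
    issuer: str                 # course or platform granting the badge
    skill: str                  # competency the badge certifies
    criteria: str               # how peer assessment evidenced the skill
    evidence_urls: List[str] = field(default_factory=list)  # reviewed artifacts and critiques
    verification_url: str = ""  # where a third party could verify the claim

badge = PeerAssessedBadge(
    earner="student-123",
    issuer="CS101 via ExampleOPRA",  # hypothetical course and platform
    skill="Constructive code review",
    criteria="Top-quartile rejoinder ratings across five double-loop review cycles",
    evidence_urls=["https://example.edu/reviews/123"],  # placeholder URL
)
```

Making such records exportable in a standard format is what would let peer-assessed achievements travel beyond a single course or platform.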

Our study would be of little value if it did not offer practical recommendations for future OPRA design. While the detailed analyses and justifications had to be left out due to space constraints (and are presented in related papers), we offer a sample of observations and recommendations. Dynamic allocation is more flexible than en-masse allocation and reallocation, but its implications require careful study, as it may lead to undesired side effects, such as procrastination. Dispersed unidirectional allocation allows for a more balanced workload (an equal number of submissions per reviewer) and, therefore, should be preferred as a default setup (unless there are other mechanisms to compensate for extra work, such as extra credit). When the advantages of clustered reciprocal allocation are important (e.g., when giving peer-review-group-specific variations of assignments), this allocation may be created as a special case of dispersed unidirectional allocation. The desirable number of submissions per reviewer is between three and five; the preferred number of reviews per submission is five to six. It is desirable to have an interface for implementing flexible analytic rubrics, with the ability to create a holistic rubric as a special case. Combining ranking and rating evaluation scales in a single activity with a single interface control (as implemented in Mobius SLIP with the SLIP Slider) offers an interesting opportunity for deeper learning analytics than using rating or ranking data alone. Rejoinders (back evaluations) appear to be a popular approach for holding reviewers accountable for review value, and we recommend using them, with the single-loop reduced workflow as a special-case option.
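As a concrete illustration of dispersed unidirectional allocation, the following minimal Python sketch assigns each reviewer exactly k submissions and each submission exactly k reviewers, with no self-review. The circular-offset scheme is one simple way (an assumption on our part) to achieve the balanced workload recommended above; real systems may use more elaborate dynamic strategies.

```python
# Hypothetical dispersed unidirectional allocation via circular offsets.
def dispersed_unidirectional_allocation(submission_authors, k=4):
    """submission_authors: ordered list of author ids, one submission each.
    Returns {reviewer_id: [author ids whose submissions that reviewer evaluates]}."""
    n = len(submission_authors)
    if not 0 < k < n:
        raise ValueError("k must be between 1 and n - 1")
    # Offsetting each reviewer by 1..k positions around a circle yields a balanced,
    # self-review-free assignment: every reviewer gets k submissions and every
    # submission receives k reviews.
    return {
        reviewer: [submission_authors[(i + offset) % n] for offset in range(1, k + 1)]
        for i, reviewer in enumerate(submission_authors)
    }

print(dispersed_unidirectional_allocation(["ann", "bo", "cy", "dee", "eli"], k=3))
# {'ann': ['bo', 'cy', 'dee'], 'bo': ['cy', 'dee', 'eli'], ...}
```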

Limitations

Our analysis has the following limitations. First, our assessment of OPRA systems' capabilities is accurate only as of the time of data collection. These systems are constantly being developed and upgraded with new features based on the originators' vision and users' functional and non-functional requirements. As of the time of publication, some systems may have been decommissioned while new systems may have emerged. The software market is very dynamic, and the fact that most OPRA systems are provided as software-as-a-service (SaaS) over the internet makes the provision of new features even more agile. Thus, another survey, preferably based on the proposed framework, may be due in a few years.

Second, due to resource constraints, the survey that provided data for framework validation and illustrative examples covered only a subpopulation of OPRA systems. Therefore, some innovative use cases and features may have been unintentionally omitted. In addition, OPRA systems designed specifically to assess observed behaviors rather than artifacts (e.g., a contribution to a team project) are underrepresented in our sample. Engaging more closely with the system originators and collecting richer information through system demonstrations may help update this survey in the future.

Perhaps the most significant limitation of the proposed framework is that it imposes a hierarchical one-to-many relationship between use cases and system features. In other words, it treats every use case as being addressed by one or several features, but every feature as addressing one specific use case. We found that, while typically several features are implemented as an ensemble to address a particular user need through a specific use case, oftentimes one feature serves more than one use case. For example, double-loop peer assessment with rejoinders can be treated both as a means to motivate participants to do a better job as reviewers and as a way to assess the attainment of critiques. In addition, some features serving the same use case, which we consider distinct, may be indistinguishable to some users and readers or regarded as a single feature. For example, in our discussions, some instructors treated scales and rubrics as the same feature. Thus, while mapping user needs, use cases, features, and design choices is always based on certain assumptions and simplifications made by the system analyst, this limitation offers an opportunity for further refinement of the framework to incorporate this cross-functional aspect. Importantly, the proposed research framework provides a foundation for exploring a dynamic socio-technological phenomenon. We invite other researchers to apply, update, and augment our framework as technologies and practices change.
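One direction for the refinement suggested above is to model the relationship between use cases and features as many-to-many rather than hierarchical. The hypothetical Python sketch below shows how such a cross-functional mapping could be represented and inverted; the example entries are illustrative paraphrases of use cases discussed in this paper, not an exhaustive or authoritative catalog.

```python
# Hypothetical many-to-many mapping between features and the use cases they serve.
from collections import defaultdict

feature_to_use_cases = {
    "double-loop rejoinders": {"motivate reviewers", "assess attainment of critiques"},
    "analytic rubric": {"structure feedback", "improve rating reliability"},
    "calibration exercise": {"improve rating reliability"},
}

# Invert the mapping to ask which features address a given use case.
use_case_to_features = defaultdict(set)
for feature, use_cases in feature_to_use_cases.items():
    for use_case in use_cases:
        use_case_to_features[use_case].add(feature)

print(sorted(use_case_to_features["improve rating reliability"]))
# ['analytic rubric', 'calibration exercise']
```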

Conclusion

Online peer review and assessment enhances conventional and virtual classrooms by offering scalable and efficient "grading" and large volumes of prompt feedback. However, its greatest advantage is that it transforms a classroom into a self-curating, self-regulating learning environment as education undergoes technology-driven transformation. OPRA systems might be the first step toward new types of "instructorless classrooms", such as the "42" computer-programming training program (42.us.org) (think "driverless cars"), but they require informed and careful development, implementation, and execution to maximize advantages and remedy pitfalls (think "driverless cars" again) (Beach, 1974; Fu et al., 2019; Morrison, 2014).

Charting the abstruse landscape of OPRA technologies to enable this type of pedagogy is the primary objective of this study. We sought not only to provide a structured and comprehensive overview of the current state of the field, but also to offer a framework for ongoing analysis of the ever-changing landscape. Our framework and systematic survey can be expected to inform decision-makers involved in designing, researching, promoting, adopting, and applying online peer review and assessment. This paper contributes to the emerging stream of literature promoting educational design research (Akker et al., 2006, 2012; Søndergaard & Mulder, 2012). This work may also have implications for other domains relying on peer review, such as academic publishing and grant application processing.

Due to space limitations, detailed discussions of specific user needs, designs, and learning implications found in this study will be presented in follow-up publications.

Other known terms: technology-, IT-, CIT-, ICT-, network-, internet-, web-, cloud-.

Other known terms: aided, assisted, automated, based, enabled, mediated.

Other known terms: review, evaluation.

Detailed reviews and analyses of OPRA user needs, use cases, and system functionality are left outside the scope of this paper due to space constraints and are presented in separate publications.

Abramovich, S., Schunn, C., & Higashi, R. M. (2013). Are badges useful in education?: It depends upon the type of badge and expertise of learner. Educational Technology Research and Development, 61 (2), 217–232. https://doi.org/10.1007/s11423-013-9289-2

Alqassab, M., Strijbos, J.-W., Panadero, E., Ruiz, J. F., Warrens, M., & To, J. (2023). A systematic review of peer assessment design elements. Educational Psychology Review, 35 (1), 18. https://doi.org/10.1007/s10648-023-09723-7

Babik, D., Iyer, L., & Ford, E. (2012). Towards a comprehensive online peer assessment system: Design outline. Lecture Notes in Computer Science, 7286, 1–8.

Babik, D. (2015). Investigating intersubjectivity in peer-review-based, technology-enabled knowledge creation and refinement social systems . The University of North Carolina at Greensboro.

Babik, D., Gehringer, E. F., Tinapple, D., Pramudianto, F., & Song, Y. (2018). Domain model and meta-language for peer review and assessment. Proceedings of Western DS, I , 7.

Babik, D., Singh, R., Zhao, X., & Ford, E. (2017a). What you think and what I think: Studying intersubjectivity in knowledge artifacts evaluation. Information Systems Frontiers, 19 (1), 31–56. https://doi.org/10.1007/s10796-015-9586-x

Babik, D., Tinapple, D., Gehringer, E. F., & Pramudianto, F. (2017b). The effect of visualization on students’ miscalibration in the context of online peer assessment. Proceedings of Western DS, I , 7.

Baikadi, A., Schunn, C. D., & Ashley, K. D. (2016). Impact of revision planning on peer-reviewed writing . Educational Data Mining.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory (1st ed.). Prentice Hall.

Beach, L. R. (1974). Self-directed student groups and college learning. Higher Education, 3 (2), 187–200. https://doi.org/10.1007/BF00143791

Bostock, S. (2000). Student peer assessment . Keele University.

Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18 (5), 529–549. https://doi.org/10.1007/BF00138746

Bouzidi, L., & Jaillet, A. (2009). Can online peer assessment be trusted? Educational Technology & Society, 12 (4), 257–268.

Bull, S., & McCalla, G. (2002). Modelling cognitive style in a peer help network. Instructional Science, 30 (6), 497–528. https://doi.org/10.1023/A:1020570928993

Carey, K. (2012). A future full of badges. The Chronicle of Higher Education , A60 . https://www.chronicle.com/article/A-Future-Full-of-Badges/131455

Carlson, P., & Smith, R. (2017). Computer-mediated peer review: A comparison of calibrated peer review and Moodle’s workshop. Faculty Publications—English & Literature . https://peer.asee.org/28064

Casilli, C., & Hickey, D. (2016). Transcending conventional credentialing and assessment paradigms with information-rich digital badges. The Information Society, 32 (2), 117–129. https://doi.org/10.1080/01972243.2016.1130500

Chang, C.-Y., Lee, D.-C., Tang, K.-Y., & Hwang, G.-J. (2021). Effect sizes and research directions of peer assessments: From an integrated perspective of meta-analysis and co-citation network. Computers & Education, 164 , 104123. https://doi.org/10.1016/j.compedu.2020.104123

Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education, 48 (3), 409–426. https://doi.org/10.1016/j.compedu.2005.02.004

Coleman, J., Katz, E., & Menzel, H. (1957). The diffusion of an innovation among physicians. Sociometry, 20 (4), 253–270. https://doi.org/10.2307/2785979

Davies, P. (2000). Computerized peer assessment. Innovations in Education and Teaching International, 37 (4), 346–355.

de Alfaro, L., & Shavlovsky, M. (2014). CrowdGrader: A tool for crowdsourcing the evaluation of homework assignments. Proceedings of the 45th ACM Technical Symposium on Computer Science Education . https://doi.org/10.1145/2538862.2538900

Dennis, A., Wixom, B. H., & Tegarden, D. (2015). Systems analysis and design: An object-oriented approach with UML (5th ed.). John Wiley & Sons.

Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE handbook of qualitative research (4th ed.). SAGE Publications, Inc.

Doiron, G. (2003). The value of online student peer review, evaluation and feedback in higher education. CDTL Brief, 6 , 1–2.

Double, K. S., McGrane, J. A., & Hopfenbeck, T. N. (2020). The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educational Psychology Review, 32 (2), 481–509. https://doi.org/10.1007/s10648-019-09510-3

Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59 (4), 395–430. https://doi.org/10.3102/00346543059004395

Fu, Q.-K., Lin, C.-J., & Hwang, G.-J. (2019). Research trends and applications of technology-supported peer assessment: A review of selected journal publications from 2007 to 2016. Journal of Computers in Education, 6 (2), 191–213. https://doi.org/10.1007/s40692-019-00131-x

Gehringer, E. F. (2014). A survey of methods for improving review quality. New horizons in web based learning (pp. 92–97). Springer.

Gehringer, E. F. (2019). Board 60: PeerLogic: Web services for peer assessment. In 2019 ASEE annual conference & exposition .

Gehringer, E. F., Ehresman, L., Conger, S. G., & Wagle, P. (2007). Reusable learning objects through peer review: The Expertiza approach. Innovate: Journal of Online Education, 3 (5), 4.

Gielen, S., Dochy, F., & Onghena, P. (2011). An inventory of peer assessment diversity. Assessment & Evaluation in Higher Education, 36 (2), 137–155. https://doi.org/10.1080/02602930903221444

Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57 (4), 2333–2351. https://doi.org/10.1016/j.compedu.2011.06.004

Goldin, I. (2011). A focus on content: The use of rubrics in peer review to guide students and instructors . http://d-scholarship.pitt.edu/8375/1/goldin%2Ddissertation%2D20110805.pdf

Goldin, I., Ashley, K. D., & Schunn, C. (2012). Redesigning educational peer review interactions using computer tools. Journal of Writing Research, 4 (2), 111–119.

Hamer, J. (2006). Some experiences with the “contributing student approach.” SIGCSE Bulletin, 38 (3), 68–72. https://doi.org/10.1145/1140123.1140145

Hamer, J., Kell, C., & Spence, F. (2007). Peer assessment using Aropä. Proceedings of the Ninth Australasian Conference on Computing Education, 66 , 43–54.

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28 (1), 75–105.

Jacobson, I. (1992). Object oriented software engineering: A use case driven approach (1st ed.). Addison-Wesley Professional.

Joordens, S., Desa, S., & Paré, D. (2009). The pedagogical anatomy of peer assessment: Dissecting a peerscholar assignment. Journal of Systemics, Cybernetics & Informatics, 7 (5), 1.

Kulkarni, C., Wei, K. P., Le, H., Chia, D., Papadopoulos, K., Cheng, J., Koller, D., & Klemmer, S. R. (2013). Peer and self assessment in massive online classes. ACM Transactions on Computer-Human Interaction, 20 (6), 1–33. https://doi.org/10.1145/2505057

Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45 (2), 193–211. https://doi.org/10.1080/02602938.2019.1620679

Lin, G.-Y. (2018). Anonymous versus identified peer assessment via a Facebook-based learning application: Effects on quality of peer feedback, perceived learning, perceived fairness, and attitude toward the system. Computers & Education, 116 , 81–92. https://doi.org/10.1016/j.compedu.2017.08.010

Lu, R. (2011). Anonymity in collaboration: Anonymous vs. identifiable E-peer review in writing instruction . Trafford Publishing.

Lu, R., & Bol, L. (2007). A comparison of anonymous versus identifiable E-peer review on college student writing performance and the extent of critical feedback. Journal of Interactive Online Learning, 6 (2), 100–115.

Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment. Computer Science Education, 19 (4), 209–232. https://doi.org/10.1080/08993400903384844

Martin, P. Y., & Turner, B. A. (1986). Grounded theory and organizational research. The Journal of Applied Behavioral Science, 22 (2), 141–157. https://doi.org/10.1177/002188638602200207

Matusov, E. (1996). Intersubjectivity without agreement. Mind, Culture, and Activity, 3 (1), 25–45. https://doi.org/10.1207/s15327884mca0301_4

Matusov, E., & Marjanovic-Shane, A. (2017). Many faces of the concept of culture (and education). Culture & Psychology, 23 (3), 309–336. https://doi.org/10.1177/1354067X16655460

Millard, D., Fill, K., Gilbert, L., Howard, Y., Sinclair, P., Senbanjo, D. O., & Wills, G. B. (2007). Towards a canonical view of peer assessment. Seventh IEEE international conference on advanced learning technologies (ICALT 2007) , pp. 793–797. https://doi.org/10.1109/ICALT.2007.260

Millard, D., Newman, D., & Sinclair, P. (2008). PeerPigeon: A web application to support generalised peer review . pp. 3824–3836. https://www.learntechlib.org/primary/p/30219/

Misiejuk, K., & Wasson, B. (2021). Backward evaluation in peer assessment: A scoping review. Computers & Education, 175 , 104319. https://doi.org/10.1016/j.compedu.2021.104319

Morrison, N. (2014). The teacher-less classroom is not as close as you think . Forbes. https://www.forbes.com/sites/nickmorrison/2014/08/21/the-teacher-less-classroom-is-not-as-close-as-you-think/

Patchan, M. M., Schunn, C. D., & Clark, R. J. (2017). Accountability in peer assessment: Examining the effects of reviewing grades on peer ratings and peer feedback. Studies in higher education , pp. 2263–2278. https://doi.org/10.1080/03075079.2017.1320374

Pramudianto, F., Aljeshi, M., Alhussein, H., Song, Y., Gehringer, E. F., Babik, D., & Tinapple, D. (2016). Peer review data warehouse: Insights from different systems. CSPRED 2016: Workshop on computer-supported peer review in education . Educational Data Mining

Purchase, H., & Hamer, J. (2017). Peer review in practice: Eight years of experience with Aropä . University of Glasgow. http://www.dcs.gla.ac.uk/~hcp/aropa/AropaReportJan2017.pdf

Raman, K., & Joachims, T. (2014). Methods for ordinal peer grading . pp. 1037–1046. https://doi.org/10.1145/2623330.2623654

Rogers, E. M. (2005). Complex adaptive systems and the diffusion of innovations. The Innovation Journal: The Public Sector Innovation Journal, 10 , 25.

Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: Its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. European Journal of Psychology of Education, 33 (1), 75–99. https://doi.org/10.1007/s10212-017-0339-8

Russell, A. A. (2001). Calibrated peer review: a writing and critical-thinking instructional tool. UCLA, Chemistry , 2001 . http://www.unc.edu/opt-ed/eval/bp_stem_ed/russell.pdf

Sadler, P. M., & Good, E. (2006). The impact of self-and peer-grading on student learning. Educational Assessment, 11 (1), 1–31. https://doi.org/10.1207/s15326977ea1101_1

Sargeant, J., Mann, K., van der Vleuten, C., & Metsemakers, J. (2008). “Directed” self-assessment: Practice and feedback within a social context. Journal of Continuing Education in the Health Professions, 28 (1), 47–54. https://doi.org/10.1002/chp.155

Shah, N. B., Bradley, J. K., Parekh, A., Wainwright, M., & Ramchandran, K. (2013). A case for ordinal peer evaluation in MOOCs. NIPS workshop on data driven education , pp. 1–8.

Sitthiworachart, J., & Joy, M. (2004). Effective peer assessment for learning computer programming. SIGCSE BULLETIN, 36 , 122–126.

Søndergaard, H., & Mulder, R. A. (2012). Collaborative learning through formative peer review: Pedagogy, programs and potential. Computer Science Education, 22 (4), 343–367. https://doi.org/10.1080/08993408.2012.728041

Song, Y., Pramudianto, F., & Gehringer, E. F. (2016). A markup language for building a data warehouse for educational peer-assessment research. IEEE Frontiers in Education Conference (FIE), 2016 , 1–5. https://doi.org/10.1109/FIE.2016.7757600

Stake, R. E. (2013). Multiple case study analysis . Guilford Press.

Steffens, K. (2006). Self-regulated learning in technology-enhanced learning environments: Lessons of a European peer review. European Journal of Education, 41 (3–4), 353–379.

Strauss, A., & Corbin, J. (1994). Grounded theory methodology: An overview. In Handbook of qualitative research

Taraborelli, D. (2008). Soft peer review. Social software and distributed scientific evaluation. Proceedings of the 8th international conference on the design of cooperative systems, Carry-Le-Rouet, 20–23 May 2008 , pp. 99–110

Tenório, T., Bittencourt, I. I., Isotani, S., & Silva, A. P. (2016). Does peer assessment in on-line learning environments work? A systematic review of the literature. Computers in Human Behavior, 64 , 94–107. https://doi.org/10.1016/j.chb.2016.06.020

Tinapple, D., Olson, L., & Sadauskas, J. (2013). CritViz: Web-based software supporting peer critique in large creative classrooms. Bulletin of the IEEE Technical Committee on Learning Technology, 15 (1), 29.

Topping, K. J. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), 249–276. https://doi.org/10.3102/00346543068003249

Topping, K. J. (2003). Self and peer assessment in school and university: Reliability, validity and utility. In M. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 55–87). Springer.

Topping, K. J. (2005). Trends in peer learning. Educational Psychology, 25 (6), 631–645. https://doi.org/10.1080/01443410500345172

Topping, K. J. (2023). Digital peer assessment in school teacher education and development: A systematic review. Research Papers in Education, 38 (3), 472–498. https://doi.org/10.1080/02671522.2021.1961301

van den Akker, J., Branch, R. M., Gustafson, K., Nieveen, N., & Plomp, T. (2012). Design approaches and tools in education and training. Springer.

van den Akker, J., Gravemeijer, K., McKenney, S., & Nieveen, N. (2006). Educational design research . Routledge.

van den Berg, I., Admiraal, W., & Pilot, A. (2006). Peer assessment in university teaching: Evaluating seven course designs. Assessment & Evaluation in Higher Education, 31 (1), 19–36. https://doi.org/10.1080/02602930500262346

Verma, P. (2015). 5 Tech trends that will transform education by 2025 . Forbes. https://www.forbes.com/sites/centurylink/2015/08/11/5-tech-trends-that-will-transform-education-by-2025/#3e2910b75890

Von Glasersfeld, E. (1984). An introduction to radical constructivism. In The invented reality (pp. 17–40). Norton.

Von Glasersfeld, E. (1995). Radical constructivism . Routledge.

Vygotsky, L. S. (1980). Mind in society: The development of higher psychological processes. Journal of Reading Behavior, 12 , 161–162.

Wahid, U., Chatti, M. A., & Schroeder, U. (2016). A systematic analysis of peer assessment in the MOOC era and future perspectives. eLmL, 75 , 6.

Willey, K., & Gardner, A. (2009). Improving self- and peer assessment processes with technology. Campus-Wide Information Systems, 26 (5), 379–399. https://doi.org/10.1108/10650740911004804

Willey, K., & Gardner, A. (2010). Investigating the capacity of self and peer assessment activities to engage students and promote learning. European Journal of Engineering Education, 35 (4), 429–443. https://doi.org/10.1080/03043797.2010.490577

Wolfe, W. J. (2004). Online student peer reviews. Proceedings of the 5th conference on information technology education , pp. 33–37. https://doi.org/10.1145/1029533.1029543

Wooley, R., Was, C., Schunn, C. D., & Dalton, D. (2008). The effects of feedback elaboration on the giver of feedback. Annual Meeting of the Cognitive Science Society, 5 , 2375–2380.

Wright, J. R., Thornton, C., & Leyton-Brown, K. (2015). Mechanical TA: partially automated high-stakes peer grading. Proceedings of the 46th ACM technical symposium on computer science education , pp. 96–101. https://doi.org/10.1145/2676723.2677278

Wu, C., Chanda, E., & Willison, J. (2010). SPARKPLUS for self- and peer assessment on group-based honours’ research projects . https://digital.library.adelaide.edu.au/dspace/handle/2440/61612

Yu, F.-Y., & Sung, S. (2016). A mixed methods approach to the assessor’s targeting behavior during online peer assessment: Effects of anonymity and underlying reasons. Interactive Learning Environments, 24 (7), 1674–1691. https://doi.org/10.1080/10494820.2015.1041405

Yu, F.-Y., & Wu, C.-P. (2011). Different identity revelation modes in an online peer-assessment learning environment: Effects on perceptions toward assessors, classroom climate and learning activities. Computers & Education, 57 (3), 2167–2177. https://doi.org/10.1016/j.compedu.2011.05.012

The study reported in this manuscript was supported by the National Science Foundation (Directorate for Education and Human Services) under grants DUE-1432347, 1431856, 1432580, 1432690, and 1431975.

Author information

Authors and affiliations.

Department of Computer Information Systems and Business Analytics, James Madison University, 421 Bluestone Dr, Harrisonburg, VA, 22807, USA

Dmytro Babik

Department of Computer Science, North Carolina State University, 890 Oval Drive, Raleigh, NC, 27695, USA

Edward Gehringer

College of Education, Old Dominion University, 1 Old Dominion University, Norfolk, VA, 23529, USA

Jennifer Kidd & Kristine Sunday

Herberger Institute for Design and the Arts School of Art, Media, and Engineering, Arizona State University, 1151 S Forest Ave, Tempe, AZ, 85281, USA

David Tinapple

Late President & Founder, The TLT Group, Takoma Park, MD, USA

Steven Gilbert

Corresponding author

Correspondence to Dmytro Babik.

Ethics declarations

Conflict of interest.

There are no potential conflicts of interest to disclose related to this study.

Research involving human and animal rights

No human participants or animals were involved in this study. The unit of analysis is a technical system. The survey administered to collect data did not include any information on human subjects.

Informed consent

No informed consent was required for this study, as no data about human subjects were collected.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Babik, D., Gehringer, E., Kidd, J. et al. A systematic review of educational online peer-review and assessment systems: charting the landscape. Education Tech Research Dev (2024). https://doi.org/10.1007/s11423-024-10349-x

Accepted: 04 February 2024

Published: 19 March 2024

DOI: https://doi.org/10.1007/s11423-024-10349-x


Keywords

  • Architectures for educational technology system
  • Peer assessment
  • Peer review
  • Online systems
  • Systematic review