A Literature Review: Website Design and User Engagement

Renee Garett

1 ElevateU, Los Angeles, CA, USA

Sean D. Young

2 University of California Institute for Prediction Technology, Department of Family Medicine, University of California, Los Angeles, Los Angeles, CA, USA

3 UCLA Center for Digital Behavior, Department of Family Medicine, University of California, Los Angeles, Los Angeles, CA, USA

Proper design has become a critical element needed to engage website and mobile application users. However, little research has been conducted to define the specific elements used in effective website and mobile application design. We attempt to review and consolidate research on effective design and to define a short list of elements frequently used in research. The design elements mentioned most frequently in the reviewed literature were navigation, graphical representation, organization, content utility, purpose, simplicity, and readability. We discuss how previous studies define and evaluate these seven elements. This review and the resulting short list of design elements may be used to help designers and researchers to operationalize best practices for facilitating and predicting user engagement.

1. INTRODUCTION

Internet usage has increased tremendously and rapidly in the past decade ( “Internet Use Over Time,” 2014 ). Websites have become the most important public communication portal for most, if not all, businesses and organizations. As of 2014, 87% of American adults aged 18 or older are Internet users ( “Internet User Demographics,” 2013 ). Because business-to-consumer interactions mainly occur online, website design is critical in engaging users ( Flavián, Guinalíu, & Gurrea, 2006 ; Lee & Kozar, 2012 ; Petre, Minocha, & Roberts, 2006 ). Poorly designed websites may frustrate users and result in a high “bounce rate”, or people visiting the entrance page without exploring other pages within the site ( Google.com, 2015 ). On the other hand, a well-designed website with high usability has been found to positively influence visitor retention (revisit rates) and purchasing behavior ( Avouris, Tselios, Fidas, & Papachristos, 2003 ; Flavián et al., 2006 ; Lee & Kozar, 2012 ).
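A bounce, as defined above, is a session that views only the entrance page, so the bounce rate is the share of such sessions among all sessions. A minimal sketch in Python (the session log below is invented for illustration):

```python
# Bounce rate: the share of sessions that view only the entrance page.
# The session log below is a made-up example for illustration.
sessions = [
    ["home"],                      # bounced: left after the entrance page
    ["home", "about", "contact"],  # explored the site
    ["home"],                      # bounced
    ["home", "products"],          # explored the site
]

def bounce_rate(sessions):
    """Fraction of sessions consisting of a single pageview."""
    bounces = sum(1 for pages in sessions if len(pages) == 1)
    return bounces / len(sessions)

print(f"Bounce rate: {bounce_rate(sessions):.0%}")  # 2 of 4 sessions bounced
```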

Little research, however, has been conducted to define the specific elements that constitute effective website design. One of the key design measures is usability ( International Standardization Organization, 1998 ). The International Standardization Organization (ISO) defines usability as the extent to which users can achieve desired tasks (e.g., access desired information or place a purchase) with effectiveness (completeness and accuracy of the task), efficiency (time spent on the task), and satisfaction (user experience) within a system. However, there is currently no consensus on how to properly operationalize and assess website usability ( Lee & Kozar, 2012 ). For example, Nielsen associates usability with learnability, efficiency, memorability, errors, and satisfaction ( Nielsen, 2012 ). Yet, Palmer (2002) postulates that usability is determined by download time, navigation, content, interactivity, and responsiveness. Similar to usability, many other key design elements, such as scannability, readability, and visual aesthetics, have not yet been clearly defined ( Bevan, 1997 ; Brady & Phillips, 2003 ; Kim, Lee, Han, & Lee, 2002 ), and there are no clear guidelines that individuals can follow when designing websites to increase engagement.

This review sought to address that gap by identifying and consolidating the key website design elements that influence user engagement according to prior research studies. Specifically, it aimed to determine the website design elements most commonly shown or suggested to increase user engagement. Based on these findings, we compiled and defined a short list of website design elements that best facilitate and predict user engagement. This work is thus exploratory research, providing definitions for these elements of website design and a starting point for future research to reference.

2. MATERIALS AND METHODS

2.1. Selection Criteria and Data Extraction

We searched for articles relating to website design on Google Scholar (scholar.google.com) because Google Scholar consolidates papers across research databases (e.g., PubMed) and research on design is listed in multiple databases. We used the following combination of keywords: design, usability, and websites. Google Scholar yielded 115,000 total hits. Because of the large list of studies generated, we decided to review only the top 100 listed research studies for this exploratory study. Our inclusion criteria were: (1) publication in a peer-reviewed academic journal, (2) publication in English, and (3) publication in or after 2000. Year of publication was chosen as a limiting factor so that we would have enough years of research to identify relevant studies while keeping the results restricted to similar styles of websites built after the year 2000. We included studies that were experimental or theoretical (review papers and commentaries) in nature. The resulting studies represented a diverse range of disciplines, including human-computer interaction, marketing, e-commerce, interface design, cognitive science, and library science. Based on these selection criteria, thirty-five unique studies remained and were included in this review.
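The three inclusion criteria amount to a simple filter over candidate records. A sketch of that screening step, with an invented record structure (the review itself screened studies manually):

```python
# Screen candidate search results against the review's inclusion criteria:
# peer-reviewed, published in English, and published in or after 2000.
# The record fields and sample studies are hypothetical.
candidates = [
    {"title": "Study A", "peer_reviewed": True,  "language": "English", "year": 2005},
    {"title": "Study B", "peer_reviewed": False, "language": "English", "year": 2010},
    {"title": "Study C", "peer_reviewed": True,  "language": "German",  "year": 2003},
    {"title": "Study D", "peer_reviewed": True,  "language": "English", "year": 1997},
]

def meets_criteria(study):
    return (study["peer_reviewed"]
            and study["language"] == "English"
            and study["year"] >= 2000)

included = [s["title"] for s in candidates if meets_criteria(s)]
print(included)  # only Study A satisfies all three criteria
```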

2.2. Final Search Term

(design) and (usability) and (websites).

The search terms were kept simple to capture the higher-level design/usability papers and to let Google Scholar’s ranking method surface the most widely cited studies. This approach also allowed the search to capture studies from a wide range of fields.

2.3. Analysis

The literature review uncovered 20 distinct design elements commonly discussed in research that affect user engagement:

  1. Organization – is the website logically organized?
  2. Content utility – is the information provided useful or interesting?
  3. Navigation – is the website easy to navigate?
  4. Graphical representation – does the website use icons, contrasting colors, and multimedia content?
  5. Purpose – does the website clearly state its purpose (i.e., personal, commercial, or educational)?
  6. Memorable elements – does the website help returning users navigate the site effectively (e.g., through layout or graphics)?
  7. Valid links – does the website provide valid links?
  8. Simplicity – is the design of the website simple?
  9. Impartiality – is the information provided fair and objective?
  10. Credibility – is the information provided credible?
  11. Consistency/reliability – is the website consistently designed (i.e., no changes in page layout throughout the site)?
  12. Accuracy – is the information accurate?
  13. Loading speed – does the website load quickly?
  14. Security/privacy – does the website securely transmit, store, and display personal information/data?
  15. Interactivity – can the user interact with the website (e.g., post comments or receive recommendations for similar purchases)?
  16. Strong user control capabilities – does the website allow individuals to customize their experience (such as the order of information they access and the speed at which they browse)?
  17. Readability – is the website easy to read and understand (e.g., no grammatical/spelling errors)?
  18. Efficiency – is information presented so that users can find what they need quickly?
  19. Scannability – can users pick out relevant information quickly?
  20. Learnability – how steep is the learning curve for using the website?
For each of these elements, we calculated the proportion of studies mentioning it. We set a threshold of 30%: elements mentioned in at least 30% of the studies were placed on a short list of elements used in research on proper website design. The 30% cutoff is arbitrary, chosen to give researchers and designers a manageable guideline list of the elements most often described in research on effective web design. To show how this list can be applied, we present specific details on how each element was discussed in the literature so that it can be defined and operationalized.
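Using mention counts of the kind reported in the results (out of 35 reviewed studies), the threshold rule can be sketched as:

```python
# Number of reviewed studies (of 35 total) mentioning each element;
# counts for a few elements are taken from the results section.
mentions = {
    "navigation": 22,
    "graphical representation": 21,
    "organization": 15,
    "content utility": 13,
    "purpose": 11,
    "loading speed": 10,   # 28.57%: falls just below the cutoff
    "credibility": 7,
}
N_STUDIES = 35
THRESHOLD = 0.30  # arbitrary cutoff chosen for this review

# Keep elements mentioned in at least 30% of the studies.
shortlist = {e: n / N_STUDIES for e, n in mentions.items()
             if n / N_STUDIES >= THRESHOLD}
for element, share in sorted(shortlist.items(), key=lambda kv: -kv[1]):
    print(f"{element}: {share:.2%}")
```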

3. RESULTS

3.1. Popular Website Design Elements (Table 1)

Table 1. Frequency of website design elements used in research (2000–2014)

Seven of the website design elements met our threshold requirement for review. Navigation was the most frequently discussed element, mentioned in 22 articles (62.86%). Twenty-one studies (60%) highlighted the importance of graphics. Fifteen studies (42.86%) emphasized good organization. Four other elements also exceeded the threshold level, and they were content utility (n=13, 37.14%), purpose (n=11, 31.43%), simplicity (n=11, 31.43%), and readability (n=11, 31.43%).

Elements below our minimum requirement for review include memorable features (n=5, 14.29%), links (n=10, 28.57%), impartiality (n=1, 2.86%), credibility (n=7, 20%), consistency/reliability (n=8, 22.86%), accuracy (n=5, 14.29%), loading speed (n=10, 28.57%), security/privacy (n=2, 5.71%), interactive features (n=9, 25.71%), strong user control capabilities (n=8, 22.86%), efficiency (n=6, 17.14%), scannability (n=1, 2.86%), and learnability (n=2, 5.71%).

3.2. Defining Key Design Elements for User Engagement (Table 2)

Table 2. Definitions of Key Design Elements

In defining and operationalizing each of these elements, the reviewed studies suggested the following:

  • Navigation – effective navigation provides salient and consistent menu/navigation bars, navigation aids (e.g., visible links), search features, and easy access to pages (multiple pathways and limited clicks/backtracking).
  • Graphical representation – engaging graphical presentation entails (1) inclusion of images, (2) proper size and resolution of images, (3) multimedia content, (4) proper color, font, and size of text, (5) use of logos and icons, (6) an attractive visual layout, (7) color schemes, and (8) effective use of white space.
  • Organization – optimal organization includes (1) cognitive architecture, (2) a logical, understandable, and hierarchical structure, (3) information arrangement and categorization, (4) meaningful labels/headings/titles, and (5) use of keywords.
  • Content utility – utility is determined by (1) a sufficient amount of information to attract repeat visitors, (2) arousal/motivation (keeping visitors interested and motivating them to continue exploring the site), (3) content quality, (4) information relevant to the purpose of the site, and (5) perceived utility based on user needs/requirements.
  • Purpose – a website’s purpose is clear when it (1) establishes a unique and visible brand/identity, (2) addresses visitors’ intended purpose and expectations for visiting the site, and (3) provides information about the organization and/or its services.
  • Simplicity – simplicity is achieved through (1) simple subject headings, (2) transparency of information (reduced search time), (3) a design optimized for computer screens, (4) an uncluttered layout, (5) consistency in design throughout the site, (6) ease of use (including for first-time users), (7) minimal redundant features, and (8) easily understandable functions.
  • Readability – readability is optimized by content that is (1) easy to read, (2) well written, (3) grammatically correct, (4) understandable, (5) presented in readable blocks, and (6) written at an appropriate reading level.
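The reviewed studies describe readability qualitatively. One common way to quantify it, shown here purely as an illustration and not as a measure used in the reviewed papers, is the Flesch reading-ease score, which penalizes long sentences and long words; the syllable counter below is a naive vowel-group heuristic:

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

simple = "The cat sat. The dog ran."
dense = "Organizational effectiveness necessitates comprehensive methodological sophistication."
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))  # True
```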

4. DISCUSSION

The seven website design elements most often discussed in relation to user engagement in the reviewed studies were navigation (62.86%), graphical representation (60%), organization (42.86%), content utility (37.14%), purpose (31.43%), simplicity (31.43%), and readability (31.43%). These seven elements exceeded our threshold level of 30% representation in the literature and were included in a short list of website design elements to operationalize effective website design. For further analysis, we reviewed how studies defined and evaluated these seven elements. This may allow designers and researchers to determine and follow best practices for facilitating or predicting user engagement.

A remaining challenge is that the definitions of website design elements often overlap. For example, several studies evaluated organization by how well a website incorporates cognitive architecture, logical and hierarchical structure, systematic information arrangement and categorization, meaningful headings and labels, and keywords. However, these features are also crucial in navigation design. Also, the implications of using distinct logos and icons go beyond graphical representation. Logos and icons also establish unique brand/identity for the organization (purpose) and can serve as visual aids for navigation. Future studies are needed to develop distinct and objective measures to assess these elements and how they affect user engagement ( Lee & Kozar, 2012 ).

Given the rapid increase in both mobile technology and social media use, it is surprising that no studies mentioned cross-platform compatibility and social media integration. In 2013, 34% of cellphone owners primarily used their cellphones to access the Internet, and this number continues to grow ( “Mobile Technology Factsheet,” 2013 ). With the rise of different mobile devices, users are also diversifying their web browser use. Internet Explorer (IE) was once the leading web browser, but in recent years Firefox, Safari, and Chrome have gained significant traction ( W3schools.com, 2015 ). Website designers and researchers must be mindful of different platforms and browsers to minimize the risk of losing users to compatibility issues. In addition, roughly 74% of American Internet users use some form of social media ( Duggan, Ellison, Lampe, Lenhart, & Smith, 2015 ), and social media has emerged as an effective platform for organizations to target and interact with users. Integrating social media into website design may increase user engagement by facilitating participation and interactivity.

There are several limitations to the current review. First, because of the large number of studies published in this area and the exploratory nature of this study, we reviewed only the first 100 research publications in the Google Scholar search results. Future studies may benefit from limiting the search to a specific topic, set of years, or other scope to reduce the number of results. Second, we did not quantitatively evaluate the effectiveness of these website design elements. Additional research can help to better quantify these elements.

It should also be noted that different disciplines and industries have different objectives in designing websites and should thus prioritize different website design elements. For example, online businesses and marketers seek to design websites that optimize brand loyalty, purchase, and profit ( Petre et al., 2006 ). Others, such as academic researchers or healthcare providers, are more likely to prioritize privacy/confidentiality and content accuracy when building websites ( Horvath, Ecklund, Hunt, Nelson, & Toomey, 2015 ). Ultimately, we advise website designers and researchers to consider the design elements delineated in this review, along with their unique needs, when developing user engagement strategies.

REFERENCES

  • Arroyo Ernesto, Selker Ted, Wei Willy. Usability tool for analysis of web designs using mouse tracks. Paper presented at CHI’06 Extended Abstracts on Human Factors in Computing Systems; 2006.
  • Atterer Richard, Wnuk Monika, Schmidt Albrecht. Knowing the user’s every move: user activity tracking for website usability evaluation and implicit interaction. Paper presented at the Proceedings of the 15th International Conference on World Wide Web; 2006.
  • Auger Pat. The impact of interactivity and design sophistication on the performance of commercial websites for small businesses. Journal of Small Business Management. 2005;43(2):119–137.
  • Avouris Nikolaos, Tselios Nikolaos, Fidas Christos, Papachristos Eleftherios. Website evaluation: A usability-based perspective. In: Advances in Informatics. Springer; 2003. pp. 217–231.
  • Banati Hema, Bedi Punam, Grover PS. Evaluating web usability from the user’s perspective. Journal of Computer Science. 2006;2(4):314.
  • Belanche Daniel, Casaló Luis V, Guinalíu Miguel. Website usability, consumer satisfaction and the intention to use a website: The moderating effect of perceived risk. Journal of Retailing and Consumer Services. 2012;19(1):124–132.
  • Bevan Nigel. Usability issues in web site design. Paper presented at HCI; 1997.
  • Blackmon Marilyn Hughes, Kitajima Muneo, Polson Peter G. Repairing usability problems identified by the cognitive walkthrough for the web. Paper presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2003.
  • Blackmon Marilyn Hughes, Polson Peter G, Kitajima Muneo, Lewis Clayton. Cognitive walkthrough for the web. Paper presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2002.
  • Braddy Phillip W, Meade Adam W, Kroustalis Christina M. Online recruiting: The effects of organizational familiarity, website usability, and website attractiveness on viewers’ impressions of organizations. Computers in Human Behavior. 2008;24(6):2992–3001.
  • Brady Laurie, Phillips Christine. Aesthetics and usability: A look at color and balance. Usability News. 2003;5(1).
  • Cyr Dianne, Head Milena, Larios Hector. Colour appeal in website design within and across cultures: A multi-method evaluation. International Journal of Human-Computer Studies. 2010;68(1):1–21.
  • Cyr Dianne, Ilsever Joe, Bonanni Carole, Bowes John. Website design and culture: An empirical investigation. Paper presented at IWIPS; 2004.
  • Dastidar Surajit Ghosh. Impact of the factors influencing website usability on user satisfaction. 2009.
  • De Angeli Antonella, Sutcliffe Alistair, Hartmann Jan. Interaction, usability and aesthetics: what influences users’ preferences? Paper presented at the Proceedings of the 6th Conference on Designing Interactive Systems; 2006.
  • Djamasbi Soussan, Siegel Marisa, Tullis Tom. Generation Y, web design, and eye tracking. International Journal of Human-Computer Studies. 2010;68(5):307–323.
  • Djonov Emilia. Website hierarchy and the interaction between content organization, webpage and navigation design: A systemic functional hypermedia discourse analysis perspective. Information Design Journal. 2007;15(2):144–162.
  • Duggan M, Ellison N, Lampe C, Lenhart A, Smith A. Social Media Update 2014. Washington, DC: Pew Research Center; 2015.
  • Flavián Carlos, Guinalíu Miguel, Gurrea Raquel. The role played by perceived usability, satisfaction and consumer trust on website loyalty. Information & Management. 2006;43(1):1–14.
  • George Carole A. Usability testing and design of a library website: an iterative approach. OCLC Systems & Services: International Digital Library Perspectives. 2005;21(3):167–180.
  • Google.com. Bounce rate. Analytics Help. 2015. Retrieved February 11, 2015, from https://support.google.com/analytics/answer/1009409?hl=en
  • Green D, Pearson JM. Development of a web site usability instrument based on ISO 9241-11. Journal of Computer Information Systems. 2006 Fall.
  • Horvath Keith J, Ecklund Alexandra M, Hunt Shanda L, Nelson Toben F, Toomey Traci L. Developing Internet-based health interventions: A guide for public health researchers and practitioners. J Med Internet Res. 2015;17(1):e28. doi: 10.2196/jmir.3770.
  • International Standardization Organization. ISO 9241-11:1998 Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability. International Standardization Organization (ISO); 1998.
  • Internet Use Over Time. 2014 Jan 2. Retrieved February 15, 2015, from http://www.pewinternet.org/data-trend/internet-use/internet-use-over-time/
  • Internet User Demographics. 2013 Nov 14. Retrieved February 11, 2015, from http://www.pewinternet.org/data-trend/internet-use/latest-stats/
  • Kim Jinwoo, Lee Jungwon, Han Kwanghee, Lee Moonkyu. Businesses as buildings: Metrics for the architectural quality of Internet businesses. Information Systems Research. 2002;13(3):239–254. doi: 10.1287/isre.13.3.239.79.
  • Lee Younghwa, Kozar Kenneth A. Understanding of website usability: Specifying and measuring constructs and their relationships. Decision Support Systems. 2012;52(2):450–463.
  • Lim Sun. The self-confrontation interview: Towards an enhanced understanding of human factors in web-based interaction for improved website usability. J Electron Commerce Res. 2002;3(3):162–173.
  • Lowry Paul Benjamin, Spaulding Trent, Wells Taylor, Moody Greg, Moffit Kevin, Madariaga Sebastian. A theoretical model and empirical results linking website interactivity and usability satisfaction. Paper presented at the Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS’06); 2006.
  • Maurer Steven D, Liu Yuping. Developing effective e-recruiting websites: Insights for managers from marketers. Business Horizons. 2007;50(4):305–314.
  • Mobile Technology Fact Sheet. 2013 Dec 27. Retrieved August 5, 2015, from http://www.pewinternet.org/fact-sheets/mobile-technology-fact-sheet/
  • Nielsen Jakob. Usability 101: Introduction to usability. 2012. Retrieved February 11, 2015, from http://www.nngroup.com/articles/usability-101-introduction-to-usability/
  • Palmer Jonathan W. Web site usability, design, and performance metrics. Information Systems Research. 2002;13(2):151–167. doi: 10.1287/isre.13.2.151.88.
  • Petre Marian, Minocha Shailey, Roberts Dave. Usability beyond the website: an empirically-grounded e-commerce evaluation instrument for the total customer experience. Behaviour & Information Technology. 2006;25(2):189–203.
  • Petrie Helen, Hamilton Fraser, King Neil. Tension, what tension? Website accessibility and visual design. Paper presented at the Proceedings of the 2004 International Cross-Disciplinary Workshop on Web Accessibility (W4A); 2004.
  • Raward Roslyn. Academic library website design principles: development of a checklist. Australian Academic & Research Libraries. 2001;32(2):123–136.
  • Rosen Deborah E, Purinton Elizabeth. Website design: Viewing the web as a cognitive landscape. Journal of Business Research. 2004;57(7):787–794.
  • Shneiderman Ben, Hochheiser Harry. Universal usability as a stimulus to advanced interface design. Behaviour & Information Technology. 2001;20(5):367–376.
  • Song Jaeki, Zahedi Fatemeh “Mariam”. A theoretical approach to web design in e-commerce: a belief reinforcement model. Management Science. 2005;51(8):1219–1235.
  • Sutcliffe Alistair. Heuristic evaluation of website attractiveness and usability. In: Interactive Systems: Design, Specification, and Verification. Springer; 2001. pp. 183–198.
  • Tan Gek Woo, Wei Kwok Kee. An empirical study of Web browsing behaviour: Towards an effective Website design. Electronic Commerce Research and Applications. 2007;5(4):261–271.
  • Tarafdar Monideepa, Zhang Jie. Determinants of reach and loyalty: a study of Website performance and implications for Website design. Journal of Computer Information Systems. 2008;48(2):16.
  • Thompson Lori Foster, Braddy Phillip W, Wuensch Karl L. E-recruitment and the benefits of organizational web appeal. Computers in Human Behavior. 2008;24(5):2384–2398.
  • W3schools.com. Browser statistics and trends. Retrieved January 15, 2015, from http://www.w3schools.com/browsers/browsers_stats.asp
  • Williamson Ian O, Lepak David P, King James. The effect of company recruitment web site orientation on individuals’ perceptions of organizational attractiveness. Journal of Vocational Behavior. 2003;63(2):242–263.
  • Zhang Ping, Small Ruth V, Von Dran Gisela M, Barcellos Silvia. A two factor theory for website design. Paper presented at the Proceedings of the 33rd Annual Hawaii International Conference on System Sciences; 2000.
  • Zhang Ping, Von Dran Gisela M. Satisfiers and dissatisfiers: A two-factor model for website design and evaluation. Journal of the American Society for Information Science. 2000;51(14):1253–1268.

How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes . Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources: it analyzes, synthesizes, and critically evaluates them to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions


What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines.


Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, a review of research on social media and body image among young people might use keyword groups like:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators to help narrow down your search.
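Grouping each concept’s synonyms with OR and joining the concept groups with AND restricts results to sources that touch on every concept. A small sketch of building such a query string (the exact operator syntax varies by database):

```python
# Build a Boolean search query: OR within a concept's synonyms,
# AND between concepts. Multi-word terms are quoted as phrases.
concepts = [
    ["social media", "Instagram", "Snapchat"],
    ["body image", "self-perception", "self-esteem"],
    ["adolescents", "teenagers", "youth"],
]

def build_query(concepts):
    groups = []
    for synonyms in concepts:
        terms = " OR ".join(f'"{t}"' if " " in t else t for t in synonyms)
        groups.append(f"({terms})")
    return " AND ".join(groups)

print(build_query(concepts))
# ("social media" OR Instagram OR Snapchat) AND ("body image" OR self-perception OR self-esteem) AND (adolescents OR teenagers OR youth)
```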

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and that you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate the sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in reviewing the literature on social media and body image, you might find that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.



A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.


McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/dissertation/literature-review/


A Literature Review: Website Design and User Engagement.

  • Garett, Renee;
  • Chiu, Jason;
  • Zhang, Ly;
  • Young, Sean D


Published on 24.10.2019 in Vol 3 , No 4 (2019) : Oct-Dec

A Comprehensive Framework to Evaluate Websites: Literature Review and Development of GoodWeb

Authors of this article:

  • Rosalie Allison, BSc, MSc;
  • Catherine Hayes, BSc;
  • Cliodna A M McNulty, MBBS, FRCP;
  • Vicki Young, BSc, PhD

Public Health England, Gloucester, United Kingdom

Corresponding Author:

Rosalie Allison, BSc, MSc

Public Health England

Primary Care and Interventions Unit

Gloucester, GL1 1DQ

United Kingdom

Phone: 44 0208 495 3258

Email: [email protected]

Background: Attention is turning toward increasing the quality of websites and quality evaluation to attract new users and retain existing users.

Objective: This scoping study aimed to review and define existing worldwide methodologies and techniques to evaluate websites and provide a framework of appropriate website attributes that could be applied to any future website evaluations.

Methods: We systematically searched electronic databases and gray literature for studies of website evaluation. The results were exported to EndNote software, duplicates were removed, and eligible studies were identified. The results have been presented in narrative form.

Results: A total of 69 studies met the inclusion criteria. The extracted data included type of website, aim or purpose of the study, study populations (users and experts), sample size, setting (controlled environment and remotely assessed), website attributes evaluated, process of methodology, and process of analysis. Methods of evaluation varied and included questionnaires, observed website browsing, interviews or focus groups, and Web usage analysis. Evaluations involving both users and experts, in both controlled and remote settings, are represented. Website attributes that were examined included usability or ease of use, content, design criteria, functionality, appearance, interactivity, satisfaction, and loyalty. Website evaluation methods should be tailored to the needs of specific websites and individual aims of evaluations. GoodWeb, a website evaluation guide, has been presented with a case scenario.

Conclusions: This scoping study supports the open debate of defining the quality of websites, and there are numerous approaches and models to evaluate it. However, as this study provides a framework of the existing literature of website evaluation, it presents a guide of options for evaluating websites, including which attributes to analyze and options for appropriate methods.

Introduction

Since its conception in the early 1990s, there has been an explosion in the use of the internet, with websites taking a central role in diverse fields such as finance, education, medicine, industry, and business. Organizations are increasingly attempting to exploit the benefits of the World Wide Web and its features as an interface for internet-enabled businesses, information provision, and promotional activities [ 1 , 2 ]. As the environment becomes more competitive and websites become more sophisticated, attention is turning toward increasing the quality of the website itself and quality evaluation to attract new and retain existing users [ 3 , 4 ]. What determines website quality has not been conclusively established, and there are many different definitions and meanings of the term quality, mainly in relation to the website’s purpose [ 5 ]. Traditionally, website evaluations have focused on usability, defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use [ 6 ].” The design of websites and users’ needs go beyond pure usability, as increased engagement and pleasure experienced during interactions with websites can be more important predictors of website preference than usability [ 7 - 10 ]. Therefore, in the last decade, website evaluations have shifted their focus to users’ experience, employing various assessment techniques [ 11 ], with no universally accepted method or procedure for website evaluation.

This scoping study aimed to review and define existing worldwide methodologies and techniques to evaluate websites and provide a simple framework of appropriate website attributes, which could be applied to future website evaluations.

A scoping study is similar to a systematic review as it collects and reviews content in a field of interest. However, scoping studies cover a broader question and do not rigorously evaluate the quality of the studies included [ 12 ]. Scoping studies are commonly used in the fields of public services such as health and education, as they are quicker to perform and less costly in terms of staff time [ 13 ]. Scoping studies can be precursors to a systematic review or stand-alone studies to examine the range of research around a particular topic.

The following research question is based on the need to gain knowledge and insight from worldwide website evaluation to inform the future study design of website evaluations: what website evaluation methodologies can be robustly used to assess users’ experience?

To show how the framework of attributes and methods can be applied to evaluating a website, e-Bug, an international educational health website, will be used as a case scenario [ 14 ].

This scoping study followed a 5-stage framework and methodology, as outlined by Arksey and O’Malley [ 12 ], involving the following: (1) identifying the research question, as above; (2) identifying relevant studies; (3) study selection; (4) charting the data; and (5) collating, summarizing, and reporting the results.

Identifying Relevant Studies

Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [ 15 ], studies for consideration in the review were located by searching the following electronic databases: Excerpta Medica dataBASE, PsycINFO, Cochrane, Cumulative Index to Nursing and Allied Health Literature, Scopus, ACM digital library, IEEE Xplore, and SPORTDiscus. The keywords used referred to the following:

  • Population: websites
  • Intervention: evaluation methodologies
  • Outcome: user’s experience.

Table 1 shows the specific search criteria for each database. These keywords were also used to search gray literature for unpublished or working documents to minimize publication bias.

a EMBASE: Excerpta Medica database.

b CINAHL: Cumulative Index to Nursing and Allied Health Literature.

c ACM: Association for Computing Machinery.

d IEEE: Institute of Electrical and Electronics Engineers.

Study Selection

Once all sources had been systematically searched, the list of citations was exported to EndNote software to identify eligible studies. By scanning the title, and abstract if necessary, studies that did not fit the inclusion criteria were removed by 2 researchers (RA and CH). As abstracts are not always representative of the full study that follows or capture the full scope [ 16 ], if the title and abstract did not provide sufficient information, the full manuscript was examined to ascertain whether they met all the inclusion criteria, which included (1) studies focused on websites, (2) studies of evaluative methods (eg, use of questionnaire and task completion), (3) studies that reported outcomes that affect the user’s experience (eg, quality, satisfaction, efficiency, effectiveness without necessarily focusing on methodology), (4) studies carried out between 2006 and 2016, (5) studies published in English, and (6) type of study (any study design that is appropriate).

Exclusion criteria included (1) studies that focus on evaluations using solely experts and are not transferrable to user evaluations; (2) studies that are in the form of an electronic book or are not freely available on the Web or through OpenAthens, the University of Bath library, or the University of the West of England library; (3) studies that evaluate banking, electronic commerce (e-commerce), or online libraries’ websites and do not have transferrable measures to a range of other websites; (4) studies that report exclusively on minority or special needs groups (eg, blind or deaf users); and (5) studies that do not meet all the inclusion criteria.

Charting the Data

The next stage involved charting key items of information obtained from studies being reviewed. Charting [ 17 ] describes a technique for synthesizing and interpreting qualitative data by sifting, charting, and sorting material according to key issues and themes. This is similar to a systematic review in which the process is called data extraction. The data extracted included general information about the study and specific information relating to, for instance, the study population or target, the type of intervention, outcome measures employed, and the study design.

The information of interest included the following: type of website, aim or purpose of the study, study populations (users and experts), sample size, setting (laboratory, real life, and remotely assessed), website attributes evaluated, process of methodology, and process of analysis.

NVivo version 10.0 software was used for this stage by 2 researchers (RA and CH) to chart the data.

Collating, Summarizing, and Reporting the Results

Although the scoping study does not seek to assess the quality of evidence, it does present an overview of all material reviewed with a narrative account of findings.

Ethics Approval and Consent to Participate

As no primary research was carried out, no ethical approval was required to undertake this scoping study. No specific reference was made to any of the participants in the individual studies, nor does this study infringe on their rights in any way.

The electronic database searches produced 6657 papers; a further 7 papers were identified through other sources. After removing duplicates (n=1058), 5606 publications remained. After titles and abstracts were examined, 784 full-text papers were read and assessed further for eligibility. Of those, 69 articles were identified as suitable by meeting all the inclusion criteria ( Figure 1 ).


Study Characteristics

Studies referred to or used a mixture of users (72%) and experts (39%) to evaluate their websites; 54% used a controlled environment, and 26% evaluated websites remotely ( Multimedia Appendix 1 [ 2 - 4 , 11 , 18 - 85 ]). Remote usability, in its most basic form, involves working with participants who are not in the same physical location as the researcher, employing techniques such as live screen sharing or questionnaires. Advantages to remote website evaluations include the ability to evaluate using a larger number of participants as travel time and costs are not a factor, and participants are able to partake at a time that is appropriate to them, increasing the likelihood of participation and the possibility of a greater diversity of participants [ 18 ]. However, the disadvantages of remote website evaluations, in comparison with a controlled setting, are that system performance, network traffic, and the participant’s computer setup can all affect the results.

A variety of website types were evaluated in the studies included in this review, including government (9%), online news (6%), education (1%), university (12%), and sports organizations (4%). The aspects of quality considered, and their relative importance, varied according to the type of website and the goals to be achieved by the users. For example, criteria such as ease of paying or security are not very important to educational websites, whereas they are especially important for online shopping. In this sense, much attention must be paid when evaluating the quality of a website, establishing a specific context of use and purpose [ 19 ].

The context of the participants was also discussed, in relation to the generalizability of results. For example, when evaluations used potential or current users of their website, it was important that computer literacy was reflective of all users [ 20 ]. This could mean ensuring that participants with a range of computer abilities and experiences were used so that results were not biased to the most or least experienced users.

Intervention

A total of 43 evaluation methodologies were identified in the 69 studies in this review. Most of them were variations of similar methodologies, and a brief description of each is provided in Multimedia Appendix 2 . Multimedia Appendix 3 shows the methods used or described in each study.

Questionnaire

Use of questionnaires was the most common methodology referred to (37/69, 54%), including questions to rank or rate attributes and open questions to allow text feedback and suggested improvements. Questionnaires were used before usability testing, after it, or both, to assess usability and overall user experience.

Observed Browsing the Website

Browsing the website using a form of task completion with the participant, such as cognitive walkthrough, was used in 33/69 studies (48%), whereby an expert evaluator used a detailed procedure to simulate task execution and browse all particular solution paths, examining each action while determining if expected user’s goals and memory content would lead to choosing a correct option [ 30 ]. Screen capture was often used (n=6) to record participants’ navigation through the website, and eye tracking was used (n=7) to assess where the eye focuses on each page or the motion of the eye as an individual views a Web page. The think-aloud protocol was used (n=10) to encourage users to express out loud what they were looking at, thinking, doing, and feeling, as they performed tasks. This allows observers to see and understand the cognitive processes associated with task completion. Recording the time to complete tasks (n=6) and mouse movement or clicks (n=8) were used to assess the efficiency of the websites.

Qualitative Data Collection

Several forms of qualitative data collection were used in 27/69 studies (39%). Observed browsing, interviews, and focus groups were used either before or after the use of the website. Pre-website-use, qualitative research was often used to collect details of which website attributes were important for participants or what weighting participants would give to each attribute. Postevaluation, qualitative techniques were used to collate feedback on the quality of the website and any suggestions for improvements.

Automated Usability Evaluation Software

In 9/69 studies (13%), automated usability evaluation focused on developing software, tools, and techniques to speed evaluation (rapid), tools that reach a wider audience for usability testing (remote), and tools that have built-in analyses features (automated). The latter can involve assessing server logs, website coding, and simulations of user experience to assess usability [ 42 ].

Card Sorting

Card sorting, a technique often linked with assessing the navigability of a website, is useful for discovering the logical structure of an unsorted list of statements or ideas by exploring how people group items and structures that maximize the probability of users finding items (5/69 studies, 7%). This can assist with determining effective website structure.

Web Usage Analysis

Of 69 studies, 3 studies used Web usage analysis or Web analytics to identify browsing patterns by analyzing the participants’ navigational behavior. This could include tracking at the widget level, that is, combining knowledge of the mouse coordinates with elements such as buttons and links, with the layout of the HTML pages, enabling complete tracking of all user activity.
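Widget-level tracking of the kind described above can be illustrated with a minimal sketch. The log format, element ids, and helper functions here are illustrative assumptions rather than code from any reviewed study:

```python
from collections import Counter

# Hypothetical click log: (timestamp in seconds, id of the HTML element clicked).
# A real evaluation would capture this client-side, for example via a JavaScript
# event listener, alongside mouse coordinates and page-layout data.
click_log = [
    (0.0, "nav-home"),
    (2.1, "btn-search"),
    (3.4, "link-results"),
    (9.8, "btn-search"),
    (12.0, "nav-home"),
]

def clicks_per_widget(log):
    """Aggregate user activity at the widget level."""
    return Counter(element for _, element in log)

def session_duration(log):
    """Elapsed time between the first and last recorded interaction."""
    times = [t for t, _ in log]
    return max(times) - min(times)

counts = clicks_per_widget(click_log)
```

Aggregations like these, combined with knowledge of the page layout, are what allow Web analytics to reconstruct a participant's navigational behavior from raw interaction data.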

Outcomes (Attributes Used to Evaluate Websites)

Often, different terminology for website attributes was used to describe the same or similar concepts ( Multimedia Appendix 4 ). The most used website attributes that were assessed can be broken down into 8 broad categories and further subcategories:

  • Usability or ease of use is the degree to which a website can be used to achieve given goals (n=58). It includes navigation such as intuitiveness, learnability, memorability, and information architecture; effectiveness such as errors; and efficiency.
  • Content (n=41) includes completeness, accuracy, relevancy, timeliness, and understandability of the information.
  • Web design criteria (n=29) include use of media, search engines, help resources, originality of the website, site map, user interface, multilanguage, and maintainability.
  • Functionality (n=31) includes links, website speed, security, and compatibility with devices and browsers.
  • Appearance (n=26) includes layout, font, colors, and page length.
  • Interactivity (n=25) includes sense of community, such as ability to leave feedback and comments and email or share with a friend option or forum discussion boards; personalization; help options such as frequently answered questions or customer services; and background music.
  • Satisfaction (n=26) includes usefulness, entertainment, look and feel, and pleasure.
  • Loyalty (n=8) includes first impression of the website.

GoodWeb: Website Evaluation Guide

As there was such a range of methods used, a suggested guide of options for evaluating websites is presented below ( Figure 2 ), coined GoodWeb, and applied to an evaluation of e-Bug, an international educational health website [ 14 ]. Allison et al [ 86 ] give full details of how GoodWeb has been applied and the outcomes of the e-Bug website evaluation.


Step 1. What Are the Important Website Attributes That Affect User's Experience of the Chosen Website?

Usability or ease of use, content, Web design criteria, functionality, appearance, interactivity, satisfaction, and loyalty were the umbrella terms that encompassed the website attributes identified or evaluated in the 69 studies in this scoping study. Multimedia Appendix 4 contains a summary of the most used website attributes that have been assessed. Recent website evaluations have shifted focus from usability of websites to an overall user’s experience of website use. A decision on which website attributes to evaluate for specific websites could come from interviews or focus groups with users or experts or a literature search of attributes used in similar evaluations.

Application

In the scenario of evaluating e-Bug or similar educational health websites, the attributes chosen to assess could be the following:

  • Appearance: colors, fonts, media or graphics, page length, style consistency, and first impression
  • Content: clarity, completeness, current and timely information, relevance, reliability, and uniqueness
  • Interactivity: sense of community and modern features
  • Ease of use: home page indication, navigation, guidance, and multilanguage support
  • Technical adequacy: compatibility with other devices, load time, valid links, and limited use of special plug-ins
  • Satisfaction: loyalty

These cover the main website attributes appropriate for an educational health website. If the website did not currently have features such as a search engine, site map, or background music, it may not be appropriate to evaluate these; it may be better to ask whether they would be suitable additions to the website, or these could be combined under the heading modern features . Furthermore, security may not be a necessary attribute to evaluate if participant identifiable information or bank details are not needed to use the website.

Step 2. What Is the Best Way to Evaluate These Attributes?

Often, a combination of methods is suitable to evaluate a website, as 1 method may not be appropriate to assess all attributes of interest [ 29 ] (see Multimedia Appendix 3 for a summary of the most used methods for evaluating websites). For example, screen capture of task completion may be appropriate to assess the efficiency of a website but would not be the chosen method to assess loyalty. A questionnaire or qualitative interview may be more appropriate for this attribute.

In the scenario of evaluating e-Bug, a questionnaire before browsing the website would be appropriate to rank the importance of the selected website attributes, chosen in step 1. It would then be appropriate to observe browsing of the website, collecting data on completion of typical task scenarios, using the screen capture function for future reference. This method could be used to evaluate the effectiveness (number of tasks successfully completed), efficiency (whether the most direct route through the website was used to complete the task), and learnability (whether task completion is more efficient or effective second time of trying). It may then be suitable to use a follow-up questionnaire to rate e-Bug against the website attributes previously ranked. The attribute ranking and rating could then be combined to indicate where the website performs well and areas for improvement.
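One way to combine the pre-browsing importance ranking with the post-browsing rating, as suggested above, is a simple priority score. The attribute names, scales, and weighting scheme below are assumptions for illustration only, not a method taken from the reviewed studies:

```python
# Mean importance from the pre-browsing questionnaire (1-5 scale, assumed data).
importance = {
    "content": 4.8,
    "ease of use": 4.5,
    "appearance": 3.2,
}
# Mean rating of the website against each attribute (1-5 scale, assumed data).
rating = {
    "content": 4.1,
    "ease of use": 3.0,
    "appearance": 4.4,
}

def priority_scores(importance, rating):
    """Higher score = an important attribute that rated poorly,
    i.e. a stronger candidate for improvement."""
    return {a: importance[a] * (5 - rating[a]) for a in importance}

scores = priority_scores(importance, rating)
top_priority = max(scores, key=scores.get)
```

In this sketch, an attribute that participants rank as important but rate poorly (here, ease of use) surfaces as the top area for improvement, while a well-rated or less important attribute scores low.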

Step 3: Who Should Evaluate the Website?

Both users and experts can be used to evaluate websites. Experts are able to identify areas for improvement in relation to usability, whereas users are able to appraise quality as well as identify areas for improvement. In this respect, users are able to fully evaluate the user's experience, where experts may not be able to.

For this reason, it may be more appropriate to use current or potential users of the website for the scenario of evaluating e-Bug.

Step 4: What Setting Should Be Used?

A combination of controlled and remote settings can be used, depending on the methods chosen. For example, it may be appropriate to collect data via a questionnaire, remotely, to increase sample size and reach a more diverse audience, whereas a controlled setting may be more appropriate for task completion using eye-tracking methods.

Strengths and Limitations

A scoping study differs from a systematic review in that it does not critically appraise the quality of the studies before extracting or charting the data. Therefore, this study cannot compare the effectiveness of the different methods or methodologies in evaluating the website attributes. What it does do, however, is review and summarize a large body of literature, from different sources, in a format that is understandable and informative for future designs of website evaluations.

Furthermore, studies that evaluate banking, e-commerce, or online libraries’ websites and do not have transferrable measures to a range of other websites were excluded from this study. This decision was made to limit the number of studies that met the remaining inclusion criteria, and it was deemed that the website attributes for these websites would be too specialist and not necessarily transferable to a range of websites. Therefore, the findings of this study may not be generalizable to all types of website. However, Multimedia Appendix 1 shows that data were extracted from a very broad range of websites when it was deemed that the information was transferrable to a range of other websites.

A robust website evaluation can identify areas for improvement to both fulfill the goals and desires of its users [ 62 ] and influence their perception of the organization and overall quality of resources [ 48 ]. An improved website could attract and retain more online users; therefore, an evidence-based website evaluation guide is essential.

Conclusions

This scoping study emphasizes that the debate about how to define the quality of websites remains open, and that there are numerous approaches and models for evaluating it. Multimedia Appendix 2 shows existing methodologies and tools that can be used to evaluate websites. Many of these are variations of similar approaches, so it is not strictly necessary to use them at face value; however, some could be used to guide analysis following data collection. By following steps 1 to 4 of GoodWeb, the framework suggested in this study, website evaluations can be tailored to the needs of specific websites and the individual aims of each evaluation, taking into account the desired participants, setting, and evaluation methods.

Acknowledgments

This work was supported by the Primary Care Unit, Public Health England. Ethical approval was not applicable, as this was secondary research.

Authors' Contributions

RA wrote the protocol with input from CH, CM, and VY. RA and CH conducted the scoping review. RA wrote the final manuscript with input from CH, CM, and VY. All authors reviewed and approved the final manuscript.

Conflicts of Interest

None declared.

Summary of included studies, including information on the participants.

Interventions: methodologies and tools to evaluate websites.

Methods used or described in each study.

Summary of the most used website attributes evaluated.

  • Straub DW, Watson RT. Research Commentary: Transformational Issues in Researching IS and Net-Enabled Organizations. Info Syst Res 2001;12(4):337-345. [ CrossRef ]
  • Bairamzadeh S, Bolhari A. Investigating factors affecting students' satisfaction of university websites. In: 2010 3rd International Conference on Computer Science and Information Technology. 2010 Presented at: ICCSIT'10; July 9-11, 2010; Chengdu, China p. 469-473. [ CrossRef ]
  • Fink D, Nyaga C. Evaluating web site quality: the value of a multi paradigm approach. Benchmarking 2009;16(2):259-273. [ CrossRef ]
  • Markaki OI, Charilas DE, Askounis D. Application of Fuzzy Analytic Hierarchy Process to Evaluate the Quality of E-Government Web Sites. In: Proceedings of the 2010 Developments in E-systems Engineering. 2010 Presented at: DeSE'10; September 6-8, 2010; London, UK p. 219-224. [ CrossRef ]
  • Eysenbach G, Powell J, Kuss O, Sa E. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. J Am Med Assoc 2002;287(20):2691-2700. [ CrossRef ] [ Medline ]
  • International Organization for Standardization. ISO 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs): Part 11: Guidance on Usability. Switzerland: International Organization for Standardization; 1998.
  • Hartmann J, Sutcliffe A, Angeli AD. Towards a theory of user judgment of aesthetics and user interface quality. ACM Trans Comput-Hum Interact 2008;15(4):1-30. [ CrossRef ]
  • Bargas-Avila JA, Hornbæk K. Old Wine in New Bottles or Novel Challenges: A Critical Analysis of Empirical Studies of User Experience. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2011 Presented at: CHI'11; May 7-12, 2011; Vancouver, BC, Canada p. 2689-2698. [ CrossRef ]
  • Hassenzahl M, Tractinsky N. User experience - a research agenda. Behav Info Technol 2006;25(2):91-97. [ CrossRef ]
  • Aranyi G, van Schaik P. Testing a model of user-experience with news websites. J Assoc Soc Inf Sci Technol 2016;67(7):1555-1575. [ CrossRef ]
  • Tsai W, Chou W, Lai C. An effective evaluation model and improvement analysis for national park websites: a case study of Taiwan. Tour Manag 2010;31(6):936-952. [ CrossRef ]
  • Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Method 2005;8(1):19-32. [ CrossRef ]
  • Anderson S, Allen P, Peckham S, Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Res Policy Syst 2008;6:7 [ FREE Full text ] [ CrossRef ] [ Medline ]
  • e-Bug. 2018. Welcome to the e-Bug Teachers Area!   URL: https://e-bug.eu/eng_home.aspx?cc=eng&ss=1&t=Welcome%20to%20e-Bug [accessed 2019-08-23]
  • Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009 Jul 21;6(7):e1000100 [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Badger D, Nursten J, Williams P, Woodward M. Should All Literature Reviews be Systematic? Eval Res Educ 2000;14(3-4):220-230. [ CrossRef ]
  • Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Bryman A, Burgess B, editors. Analyzing Qualitative Data. Abingdon-on-Thames: Routledge; 2002:187-208.
  • Thomsett-Scott BC. Web site usability with remote users: Formal usability studies and focus groups. J Libr Adm 2006;45(3-4):517-547. [ CrossRef ]
  • Moreno JM, Morales del Castillo JM, Porcel C, Herrera-Viedma E. A quality evaluation methodology for health-related websites based on a 2-tuple fuzzy linguistic approach. Soft Comput 2010;14(8):887-897. [ CrossRef ]
  • Alva M, Martínez A, Labra Gayo J, Del Carmen Suárez M, Cueva J, Sagástegui H. Proposal of a tool of support to the evaluation of user in educative web sites. 2008 Presented at: 1st World Summit on the Knowledge Society, WSKS 2008; 2008; Athens p. 149-157. [ CrossRef ]
  • Usability: Home. Usability Evaluation Basics   URL: https://www.usability.gov/what-and-why/usability-evaluation.html [accessed 2019-08-24]
  • AddThis: Get more likes, shares and follows with smart. 10 Criteria for Better Website Usability: Heuristics Cheat Sheet   URL: http:/​/www.​addthis.com/​blog/​2015/​02/​17/​10-criteria-for-better-website-usability-heuristics-cheat-sheet/​#.​V712QfkrJD8 [accessed 2019-08-24]
  • Akgül Y. Quality evaluation of E-government websites of Turkey. In: Proceedings of the 2016 11th Iberian Conference on Information Systems and Technologies. 2016 Presented at: CISTI'16; June 15-18 2016; Las Palmas, Spain p. 1-7. [ CrossRef ]
  • Al Zaghoul FA, Al Nsour AJ, Rababah OM. Ranking Quality Factors for Measuring Web Service Quality. In: Proceedings of the 1st International Conference on Intelligent Semantic Web-Services and Applications. 2010 Presented at: 1st ACM Jordan Professional Chapter ISWSA Annual - International Conference on Intelligent Semantic Web-Services and Applications, ISWSA'10; June 14-16, 2010; Amman, Jordan. [ CrossRef ]
  • Alharbi A, Mayhew P. Users' Performance in Lab and Non-lab Environments Through Online Usability Testing: A Case of Evaluating the Usability of Digital Academic Libraries' Websites. In: 2015 Science and Information Conference. 2015 Presented at: SAI'15; July 28-30, 2015; London, UK p. 151-161. [ CrossRef ]
  • Aliyu M, Mahmud M, Md Tap AO. Preliminary Investigation of Islamic Websites Design & Content Feature: A Heuristic Evaluation From User Perspective. In: Proceedings of the 2010 International Conference on User Science and Engineering. 2010 Presented at: iUSEr'10; December 13-15, 2010; Shah Alam, Malaysia p. 262-267. [ CrossRef ]
  • Aliyu M, Mahmud M, Tap AO, Nassr RM. Evaluating Design Features of Islamic Websites: A Muslim User Perception. In: Proceedings of the 2013 5th International Conference on Information and Communication Technology for the Muslim World. 2013 Presented at: ICT4M'13; March 26-27, 2013; Rabat, Morocco. [ CrossRef ]
  • Al-Radaideh QA, Abu-Shanab E, Hamam S, Abu-Salem H. Usability evaluation of online news websites: a user perspective approach. World Acad Sci Eng Technol 2011;74:1058-1066 [ FREE Full text ]
  • Aranyi G, van Schaik P, Barker P. Using think-aloud and psychometrics to explore users’ experience with a news web site. Interact Comput 2012;24(2):69-77. [ CrossRef ]
  • Arrue M, Fajardo I, Lopez JM, Vigo M. Interdependence between technical web accessibility and usability: its influence on web quality models. Int J Web Eng Technol 2007;3(3):307-328. [ CrossRef ]
  • Arrue M, Vigo M, Abascal J. Quantitative metrics for web accessibility evaluation. 2005 Presented at: Proceedings of the ICWE 2005 Workshop on Web Metrics and Measurement; 2005; Sydney.
  • Atterer R, Wnuk M, Schmidt A. Knowing the User's Every Move: User Activity Tracking for Website Usability Evaluation and Implicit Interaction. In: Proceedings of the 15th International Conference on World Wide Web. 2006 Presented at: WWW'06; May 23-26, 2006; Edinburgh, Scotland p. 203-212. [ CrossRef ]
  • Bahry F, Masrom M, Masrek M. Website evaluation measures, website user engagement and website credibility for municipal website. ARPN J Eng Appl Sci 2015;10(23):18228-18238. [ CrossRef ]
  • Bañón-Gomis A, Tomás-Miquel JV, Expósito-Langa M. Improving user experience: a methodology proposal for web usability measurement. In: Strategies in E-Business: Positioning and Social Networking in Online Markets. New York City: Springer US; 2014:123-145.
  • Barnes SJ, Vidgen RT. Data triangulation and web quality metrics: a case study in e-government. Inform Manag 2006;43(6):767-777. [ CrossRef ]
  • Bolchini D, Garzotto F. Quality of Web Usability Evaluation Methods: An Empirical Study on MiLE+. In: Proceedings of the 2007 international conference on Web information systems engineering. 2007 Presented at: WISE'07; December 3-3, 2007; Nancy, France p. 481-492. [ CrossRef ]
  • Chen FH, Tzeng G, Chang CC. Evaluating the enhancement of corporate social responsibility websites quality based on a new hybrid MADM model. Int J Inf Technol Decis Mak 2015;14(03):697-724. [ CrossRef ]
  • Cherfi SS, Tuan AD, Comyn-Wattiau I. An Exploratory Study on Websites Quality Assessment. In: Proceedings of the 32nd International Conference on Conceptual Modeling Workshops. 2013 Presented at: ER'13; November 11-13, 2014; Hong Kong, China p. 170-179. [ CrossRef ]
  • Chou W, Cheng Y. A hybrid fuzzy MCDM approach for evaluating website quality of professional accounting firms. Expert Sys Appl 2012;39(3):2783-2793. [ CrossRef ]
  • Churm T. Usability Geek. 2012 Jul 9. An Introduction To Website Usability Testing   URL: http://usabilitygeek.com/an-introduction-to-website-usability-testing/ [accessed 2019-08-24]
  • Demir Y, Gozum S. Evaluation of quality, content, and use of the web site prepared for family members giving care to stroke patients. Comput Inform Nurs 2015 Sep;33(9):396-403. [ CrossRef ] [ Medline ]
  • Dominic P, Jati H, Hanim S. University website quality comparison by using non-parametric statistical test: a case study from Malaysia. Int J Oper Res 2013;16(3):349-374. [ CrossRef ]
  • Elling S, Lentz L, de Jong M, van den Bergh H. Measuring the quality of governmental websites in a controlled versus an online setting with the ‘Website Evaluation Questionnaire’. Gov Inf Q 2012;29(3):383-393. [ CrossRef ]
  • Fang X, Holsapple CW. Impacts of navigation structure, task complexity, and users’ domain knowledge on web site usability—an empirical study. Inf Syst Front 2011;13(4):453-469. [ CrossRef ]
  • Fernandez A, Abrahão S, Insfran E. A systematic review on the effectiveness of web usability evaluation methods. In: Proceedings of the 16th International Conference on Evaluation & Assessment in Software Engineering. 2012 Presented at: EASE'12; May 14-15, 2012; Ciudad Real, Spain. [ CrossRef ]
  • Flavián C, Guinalíu M, Gurrea R. The role played by perceived usability, satisfaction and consumer trust on website loyalty. Inform Manag 2006;43(1):1-14. [ CrossRef ]
  • Flavián C, Guinalíu M, Gurrea R. The influence of familiarity and usability on loyalty to online journalistic services: the role of user experience. J Retail Consum Serv 2006;13(5):363-375. [ CrossRef ]
  • Gonzalez ME, Quesada G, Davis J, Mora-Monge C. Application of quality management tools in the evaluation of websites: the case of sports organizations. Qual Manage J 2015;22(1):30-46. [ CrossRef ]
  • Harrison C, Petrie H. Deconstructing Web Experience: More Than Just Usability and Good Design. In: Proceedings of the 12th International Conference on Human-Computer Interaction: Applications and Services. 2007 Presented at: HCI'07; July 22-27, 2007; Beijing, China p. 889-898. [ CrossRef ]
  • Hart D, Portwood DM. Usability Testing of Web Sites Designed for Communities of Practice: Tests of the IEEE Professional Communication Society (PCS) Web Site Combining Specialized Heuristic Evaluation and Task-based User Testing. In: Proceedings of the 2009 IEEE International Professional Communication Conference. 2009 Presented at: 2009 IEEE International Professional Communication Conference; July 19-22, 2009; Waikiki, HI, USA. [ CrossRef ]
  • Hedegaard S, Simonsen JG. Extracting Usability and User Experience Information From Online User Reviews. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2013 Presented at: CHI'13; April 27-May 2, 2013; Paris, France p. 2089-2098. [ CrossRef ]
  • Herrera F, Herrera-Viedma E, Martínez L, Ferez LG, López-Herrera AG, Alonso S. A multi-granular linguistic hierarchical model to evaluate the quality of web site services. In: Mathew S, Mordeson JN, Malik DS, editors. Studies in Fuzziness and Soft Computing. New York City: Springer; 2006:247-274.
  • Hinchliffe A, Mummery WK. Applying usability testing techniques to improve a health promotion website. Health Promot J Austr 2008 Apr;19(1):29-35. [ CrossRef ] [ Medline ]
  • Ijaz T, Andlib F. Impact of Usability on Non-technical Users: Usability Testing Through Websites. In: Proceedings of the 2014 National Software Engineering Conference. 2014 Presented at: 2014 National Software Engineering Conference, NSEC 2014; November 11-12, 2014; Rawalpindi, Pakistan. [ CrossRef ]
  • Janiak E, Rhodes E, Foster AM. Translating access into utilization: lessons from the design and evaluation of a health insurance Web site to promote reproductive health care for young women in Massachusetts. Contraception 2013 Dec;88(6):684-690. [ CrossRef ] [ Medline ]
  • Kaya T. Multi-attribute evaluation of website quality in e-business using an integrated fuzzy AHP-TOPSIS methodology. Int J Comput Intell Syst 2010;3(3):301-314. [ CrossRef ]
  • Kincl T, Štrach P. Measuring website quality: asymmetric effect of user satisfaction. Behav Inform Technol 2012;31(7):647-657. [ CrossRef ]
  • Koutsabasis P, Istikopoulou TG. Perceived website aesthetics by users and designers: implications for evaluation practice. Int J Technol Human Interact 2014;10(2):21-34. [ CrossRef ]
  • Leuthold S, Schmutz P, Bargas-Avila JA, Tuch AN, Opwis K. Vertical versus dynamic menus on the world wide web: eye tracking study measuring the influence of menu design and task complexity on user performance and subjective preference. Comput Human Behav 2011;27(1):459-472. [ CrossRef ]
  • Longstreet P. Evaluating Website Quality: Applying Cue Utilization Theory to WebQual. In: Proceedings of the 2010 43rd Hawaii International Conference on System Sciences. 2010 Presented at: HICSS'10; January 5-8, 2010; Honolulu, HI, USA. [ CrossRef ]
  • Manzoor M. Measuring user experience of usability tool, designed for higher educational websites. Middle East J Sci Res 2013;14(3):347-353. [ CrossRef ]
  • Mashable India. 22 Essential Tools for Testing Your Website's Usability   URL: http://mashable.com/2011/09/30/website-usability-tools/#cNv8ckxZsmqw [accessed 2019-08-16]
  • Matera M, Rizzo F, Carughi GT. Web usability: principles and evaluation methods. In: Web Engineering. New York City: Springer; 2006:143-180.
  • McClellan MA, Karumur RP, Vogel RI, Petzel SV, Cragg J, Chan D, et al. Designing an educational website to improve quality of supportive oncology care for women with ovarian cancer: an expert usability review and analysis. Int J Hum Comput Interact 2016;32(4):297-307 [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nakamichi N, Shima K, Sakai M, Matsumoto KI. Detecting Low Usability Web Pages Using Quantitative Data of Users' Behavior. In: Proceedings of the 28th international conference on Software engineering. 2006 Presented at: ICSE'06; May 20-28, 2006; Shanghai, China p. 569-576. [ CrossRef ]
  • Nathan RJ, Yeow PH. An empirical study of factors affecting the perceived usability of websites for student internet users. Univ Access Inf Soc 2009;8(3):165-184. [ CrossRef ]
  • Oliver H, Diallo G, de Quincey E, Alexopoulou D, Habermann B, Kostkova P, et al. A user-centred evaluation framework for the Sealife semantic web browsers. BMC Bioinform 2009;10(S10). [ CrossRef ]
  • Paul A, Yadamsuren B, Erdelez S. An Experience With Measuring Multi-User Online Task Performance. In: Proceedings of the 2012 World Congress on Information and Communication Technologies. 2012 Presented at: WICT'12; October 30-November 2, 2012; Trivandrum, India p. 639-644. [ CrossRef ]
  • Petrie H, Power C. What Do Users Really Care About?: A Comparison of Usability Problems Found by Users and Experts on Highly Interactive Websites. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2012 Presented at: CHI'12; May 5-10, 2012; Austin, Texas, USA p. 2107-2116. [ CrossRef ]
  • Rekik R, Kallel I. Fuzzy Reduced Method for Evaluating the Quality of Institutional Web Sites. In: Proceedings of the 2011 7th International Conference on Next Generation Web Services Practices. 2011 Presented at: NWeSP'11; October 19-21, 2011; Salamanca, Spain p. 296-301. [ CrossRef ]
  • Reynolds E. The secret to patron-centered Web design: cheap, easy, and powerful usability techniques. Comput Librar 2008;28(6):44-47 [ FREE Full text ]
  • Sheng H, Lockwood NS, Dahal S. Eyes Don't Lie: Understanding Users' First Impressions on Websites Using Eye Tracking. In: Proceedings of the 15th International Conference on Human Interface and the Management of Information: Information and Interaction Design. 2013 Presented at: HCI'13; July 21-26, 2013; Las Vegas, NV, USA p. 635-641. [ CrossRef ]
  • Swaid SI, Wigand RT. Measuring Web-based Service Quality: The Online Customer Point of View. In: Proceedings of the 13th Americas Conference on Information Systems. 2007 Presented at: AMCIS'07; August 9-12, 2007; Keystone, Colorado, USA p. 778-790.
  • Tan GW, Wei KK. An empirical study of Web browsing behaviour: towards an effective Website design. Elect Commer Res Appl 2006;5(4):261-271. [ CrossRef ]
  • Tan W, Liu D, Bishu R. Web evaluation: heuristic evaluation vs user testing. Int J Ind Ergonom 2009;39(4):621-627. [ CrossRef ]
  • Tao D, LeRouge CM, Deckard G, de Leo G. Consumer Perspectives on Quality Attributes in Evaluating Health Websites. In: Proceedings of the 2012 45th Hawaii International Conference on System Sciences. 2012 Presented at: HICSS'12; January 4-7, 2012; Maui, HI, USA. [ CrossRef ]
  • The Whole Brain Group. 2011. Conducting a Quick & Dirty Evaluation of Your Website's Usability   URL: http://blog.thewholebraingroup.com/conducting-quick-dirty-evaluation-websites-usability [accessed 2019-08-24]
  • Thielsch MT, Blotenberg I, Jaron R. User evaluation of websites: From first impression to recommendation. Interact Comput 2014;26(1):89-102. [ CrossRef ]
  • Tung LL, Xu Y, Tan FB. Attributes of web site usability: a study of web users with the repertory grid technique. Int J Elect Commer 2009;13(4):97-126. [ CrossRef ]
  • Agarwal R, Venkatesh V. Assessing a firm's Web presence: a heuristic evaluation procedure for the measurement of usability. Inform Syst Res 2002;13(2):168-186. [ CrossRef ]
  • Venkatesh V, Ramesh V. Web and wireless site usability: understanding differences and modeling use. Manag Inf Syst Q 2006;30(1):181-206. [ CrossRef ]
  • Vaananen-Vainio-Mattila K, Wäljas M. Development of Evaluation Heuristics for Web Service User Experience. In: Proceedings of the Extended Abstracts on Human Factors in Computing Systems. 2009 Presented at: CHI'09; April 4-9, 2009; Boston, MA, USA p. 3679-3684. [ CrossRef ]
  • Wang WT, Wang B, Wei YT. Examining the Impacts of Website Complexities on User Satisfaction Based on the Task-technology Fit Model: An Experimental Research Using an Eyetracking Device. In: Proceedings of the 18th Pacific Asia Conference on Information Systems. 2014 Presented at: PACIS'14; June 18-22, 2014; Jeju Island, South Korea.
  • Yen B, Hu PJ, Wang M. Toward an analytical approach for effective web site design: a framework for modeling, evaluation and enhancement. Elect Commer Res Appl 2007;6(2):159-170. [ CrossRef ]
  • Yen PY, Bakken S. A comparison of usability evaluation methods: heuristic evaluation versus end-user think-aloud protocol - an example from a web-based communication tool for nurse scheduling. AMIA Annu Symp Proc 2009 Nov 14;2009:714-718 [ FREE Full text ] [ Medline ]
  • Allison R, Hayes C, Young V, McNulty CAM. Evaluation of an educational health website on infections and antibiotics: a mixed-methods, user-centred approach in England. JMIR Formative Research 2019 (forthcoming).

Edited by G Eysenbach; submitted 26.04.19; peer-reviewed by C Eley, C Brown; comments to author 31.05.19; revised version received 24.06.19; accepted 18.08.19; published 24.10.19

©Rosalie Allison, Catherine Hayes, Cliodna A M McNulty, Vicki Young. Originally published in JMIR Formative Research (http://formative.jmir.org), 24.10.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on http://formative.jmir.org, as well as this copyright and license information must be included.

Literature review: your definitive guide


Joanna Wilkinson

This is our ultimate guide on how to write a narrative literature review. It forms part of our Research Smarter series.

How do you write a narrative literature review?

Researchers worldwide are increasingly reliant on literature reviews. That’s because review articles provide you with a broad picture of the field, and help to synthesize published research that’s expanding at a rapid pace .

In some academic fields, researchers publish more literature reviews than original research papers. The graph below shows the substantial growth of narrative literature reviews in the Web of Science™, alongside the percentage increase of reviews when compared to all document types.


It’s critical that researchers across all career levels understand how to produce an objective, critical summary of published research. This is no easy feat, but a necessary one. Professionally constructed literature reviews – whether written by a student in class or an experienced researcher for publication – should aim to add to the literature rather than detract from it.

To help you write a narrative literature review, we’ve put together some top tips in this blog post.

Best practice tips to write a narrative literature review:

  • Don’t miss a paper: tips for a thorough topic search
  • Identify key papers (and know how to use them)
  • Tips for working with co-authors
  • Find the right journal for your literature review using actual data
  • Discover literature review examples and templates

We’ll also provide an overview of all the products helpful for your next narrative review, including the Web of Science, EndNote™ and Journal Citation Reports™.

1. Don’t miss a paper: tips for a thorough topic search

Once you’ve settled on your research question, coming up with a good set of keywords to find papers on your topic can be daunting. This isn’t surprising. Put simply, if you fail to include a relevant paper when you write a narrative literature review, the omission will probably get picked up by your professor or peer reviewers. The end result will likely be a low mark or an unpublished manuscript, neither of which will do justice to your many months of hard work.

Research databases and search engines are an integral part of any literature search. It’s important to use as many of the options available through your library as possible. This will help you search an entire discipline (as well as across disciplines) for a thorough narrative review.

We provide a short summary of the various databases and search engines in an earlier Research Smarter blog . These include the Web of Science , Science.gov and the Directory of Open Access Journals (DOAJ).
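As a rough illustration of how to assemble keywords for a thorough topic search, synonyms for each concept are typically joined with OR and the concepts themselves with AND. The small helper below is hypothetical (it is not part of any database's API) and emits a Web of Science-style TS= query string as an example of the pattern:

```python
def build_topic_query(concept_groups):
    """Combine synonym groups into a boolean topic-search string.

    Each inner list holds synonyms for one concept (joined with OR);
    the concepts themselves are joined with AND -- the usual pattern
    for a Web of Science-style TS= topic search.
    """
    clauses = []
    for synonyms in concept_groups:
        # Quote each term so multi-word phrases are searched as phrases
        quoted = " OR ".join(f'"{term}"' for term in synonyms)
        clauses.append(f"({quoted})")
    return "TS=(" + " AND ".join(clauses) + ")"

query = build_topic_query([
    ["website design", "web design"],
    ["usability", "user experience"],
])
print(query)
# TS=(("website design" OR "web design") AND ("usability" OR "user experience"))
```

The resulting string can be pasted into an advanced search box; expanding each synonym group is what keeps a relevant paper from slipping through the net.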


Searching the Web of Science

The Web of Science is a multidisciplinary research engine that contains over 170 million papers from more than 250 academic disciplines. All of the papers in the database are interconnected via citations. That means once you get started with your keyword search, you can follow the trail of cited and citing papers to efficiently find all the relevant literature. This is a great way to ensure you’re not missing anything important when you write a narrative literature review.

We recommend starting your search in the Web of Science Core Collection™. This database covers more than 21,000 carefully selected journals. It is a trusted source to find research papers, and discover top authors and journals (read more about its coverage here ).

Learn more about exploring the Core Collection in our blog, How to find research papers: five tips every researcher should know . Our blog covers various tips, including how to:

  • Perform a topic search (and select your keywords)
  • Explore the citation network
  • Refine your results (refining your search results by reviews, for example, will help you avoid duplication of work, as well as identify trends and gaps in the literature)
  • Save your search and set up email alerts

Try our tips on the Web of Science now.

2. Identify key papers (and know how to use them)

As you explore the Web of Science, you may notice that certain papers are marked as “Highly Cited.” These papers can play a significant role when you write a narrative literature review.

Highly Cited papers are recently published papers getting the most attention in your field right now. They form the top 1% of papers based on the number of citations received, compared to other papers published in the same field in the same year.
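The top-1% idea can be sketched in a few lines. The function below is a simplified illustration only (real Highly Cited status is computed per field and per publication year against full citation data, which this sketch assumes has already been done when forming the cohort):

```python
def highly_cited(papers, top_fraction=0.01):
    """Return titles whose citation counts fall in the top `top_fraction`
    of the given cohort -- a simplified sketch of the Highly Cited idea.

    `papers` is a list of (title, citation_count) pairs, assumed to come
    from the same field and the same publication year.
    """
    if not papers:
        return []
    counts = sorted(count for _, count in papers)
    # Citation count at the cutoff rank for the requested top fraction
    cutoff_rank = min(int(len(counts) * (1 - top_fraction)), len(counts) - 1)
    threshold = counts[cutoff_rank]
    return [title for title, count in papers if count >= threshold]
```

For a cohort of 100 papers, the default `top_fraction=0.01` flags only the most-cited paper; widening the fraction to 0.05 would flag the top five.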

You will want to identify Highly Cited research as a group of papers. This group will help guide your analysis of the future of the field and opportunities for future research. This is an important component of your conclusion.

Writing reviews is hard work… [it] not only organizes published papers, but also positions them in the academic process and presents the future direction.   Prof. Susumu Kitagawa, Highly Cited Researcher, Kyoto University

3. Tips for working with co-authors

Writing a narrative review on your own is hard, but it can be even more challenging if you’re collaborating with a team, especially if your coauthors are working across multiple locations. Luckily, reference management software can improve the coordination between you and your co-authors—both around the department and around the world.

We’ve written about how to use EndNote’s Cite While You Write feature, which will help you save hundreds of hours when writing research . Here, we discuss the features that give you greater ease and control when collaborating with your colleagues.

Use EndNote for narrative reviews

Sharing references is essential for successful collaboration. With EndNote, you can store and share as many references, documents and files as you need with up to 100 people using the software.

You can share simultaneous access to one reference library, regardless of your colleague’s location or organization. You can also choose the type of access each user has on an individual basis. For example, Read-Write access means a select colleague can add and delete references, annotate PDF articles and create custom groups. They’ll also be able to see up to 500 of the team’s most recent changes to the reference library. Read-only is also an option for individuals who don’t need that level of access.

EndNote helps you overcome research limitations by synchronizing library changes every 15 minutes. That means your team can stay up-to-date at any time of the day, supporting an easier, more successful collaboration.

Start your free EndNote trial today .

4. Find the right journal for your literature review

Finding the right journal for your literature review can be a particular pain point for those of you who want to publish. The expansion of scholarly journals has made the task extremely difficult, and can potentially delay the publication of your work by many months.

We’ve written a blog about how you can find the right journal for your manuscript using a rich array of data. You can read our blog here , or head straight to EndNote’s Manuscript Matcher or Journal Citation Reports to try out the best tools for the job.

5. Discover literature review examples and templates

There are a few tips we haven’t covered in this blog, including how to decide on an area of research, develop an interesting storyline, and highlight gaps in the literature. We’ve listed a few blogs here that might help you with this, alongside some literature review examples and outlines to get you started.

Literature Review examples:

  • Aggregation-induced emission
  • Development and applications of CRISPR-Cas9 for genome engineering
  • Object based image analysis for remote sensing

(Make sure you download the free EndNote™ Click browser plugin to access the full-text PDFs).

Templates and outlines:

  • Learn how to write a review of literature , Univ. of Wisconsin – Madison
  • Structuring a literature review , Australian National University
  • Matrix Method for Literature Review: The Review Matrix , Duquesne University

Additional resources:

  • Ten simple rules for writing a literature review , Editor, PLoS Computational Biology
  • Video: How to write a literature review , UC San Diego Psychology



A LITERATURE REVIEW ON WEB DESIGN Directed Study

ephrem alambo

Related Papers

Curtis Kelly

In the consumer-ruled environment of the Web, where competition is just a click away, good Web design is essential for keeping users on site. The key factor in good site design is usability, which means the ability of a user to accomplish tasks with ease, efficiency, and accuracy. The basic principles of usability can be organized into three categories: those related to 1) page design, 2) content design, and 3) site design. The two most important factors in usability are speed and information architecture.


proceedings of Advances in Computing …

Andres Baravalle


Cathy Cavanaugh

Web sites are important for schools to support teachers, administrators, counselors, students, parents, and the community. Redesigning a school's web site can become a complex process and requires careful planning. Studies with web users (Yale Style Guide, Nielsen, Siegel) have produced an extensive body of work that informs school web site revisers of ways to make a site more usable and enjoyable. The US government and the W3C consortium have accessibility guidelines for web sites to assist site reviewers in adapting their site so users of all abilities have equitable access. A school must reconcile time and budget limitations with the need to serve a diverse audience. This paper offers guidelines and tools for streamlining the process of web site review and redesign. Recommendations address the web site redesign team and their roles, supporting the process, identifying the site's purposes, user surveys, content audits, task lists, storyboards, color palettes, style guide, usability and accessibility.

According to the Webb66 online school web site directory, over 13,000 of the nation's 108,000 schools have web sites. Jamie Mackenzie (1997) offers four reasons for maintaining effective school web sites: introducing visitors to the school, pointing students to useful web resources, publishing student work, and collecting data on curriculum projects. Judi Harris (1997) suggests exploring who will use the site, what information users will require or appreciate, and maximally useful ways to present the information. School web site review and redesign may be a limited, targeted process for making basic improvements and updates, or it may be an ambitious complete overhaul. Extensive web redesign projects can be facilitated using summer writing teams, grants, and class projects done by high school or college web design students. Schools have unique needs, capabilities, and characters, and each school's uniqueness is reflected in its web site, as it is in the redesign process. Schools also have widely varying resources to commit to web site redesign, so the tools presented in the paper should be considered a menu of options, any of which can be used to streamline the process.

ACM Transactions on the Web

Harald Weinreich

James Cullin

Proceedings of the 15th …

Steward Mwale

Proceedings of the 15th international conference on World Wide Web - WWW '06

Malaysian Journal of Computer Science

Thiam Kian Chiew

Usability is one of the major factors that determine the success of a website. It is therefore important to have measurement methods for assessing the usability of websites; such methods can help website designers make their websites more usable. This research focuses on website usability issues and implements a tool for evaluating the usability of websites, called WEBUSE (WEBsite USability Evaluation Tool). Based on literature research, a 24-question evaluation questionnaire has been ...

ACSIJ Journal , Dr. Sandeep Kumar Panda

We investigate the usability problems of e-commerce online shopping websites from users’ preferences and determine the relative importance of factors such as navigability, content, design, ease of use, and structure through a user survey. The main intent of this ranking of website characteristics is that a designer can give relatively more effort to designing features that may lead to higher merit and better usability. Our research work captures the data through user (usability) testing and open-source automated tools such as Camtasia. The outcomes of this approach show that navigation, content, and design were the first, second, and third priorities for evaluating the usability of e-commerce websites, whereas ease of use and structure were the fourth and fifth features in the overall usability calculation. There is a significant statistical difference between novice and expert users only for the navigation feature. Most users feel satisfied with the navigation, content, and design features, whereas they are dissatisfied with the ease of use and structural features of the websites.

RELATED PAPERS

Ali Al-Badi , Pam Mayhew

Human Factors and Ergonomics Society Annual Meeting Proceedings

Esa Rantanen


Grad Coach

How To Write An A-Grade Literature Review

3 straightforward steps (with examples) + free template.

By: Derek Jansen (MBA) | Expert Reviewed By: Dr. Eunice Rautenbach | October 2019

Quality research is about building onto the existing work of others , “standing on the shoulders of giants”, as Newton put it. The literature review chapter of your dissertation, thesis or research project is where you synthesise this prior work and lay the theoretical foundation for your own research.

Long story short, this chapter is a pretty big deal, which is why you want to make sure you get it right . In this post, I’ll show you exactly how to write a literature review in three straightforward steps, so you can conquer this vital chapter (the smart way).

Overview: The Literature Review Process

  • Understanding the “ why “
  • Finding the relevant literature
  • Cataloguing and synthesising the information
  • Outlining & writing up your literature review
  • Example of a literature review

But first, the “why”…

Before we unpack how to write the literature review chapter, we’ve got to look at the why . To put it bluntly, if you don’t understand the function and purpose of the literature review process, there’s no way you can pull it off well. So, what exactly is the purpose of the literature review?

Well, there are (at least) four core functions:

  • For you to gain an understanding (and demonstrate this understanding) of where the research currently stands and what the key arguments and disagreements are.
  • For you to identify the gap(s) in the literature and then use this as justification for your own research topic.
  • To help you build a conceptual framework for empirical testing (if applicable to your research topic).
  • To inform your methodological choices and help you source tried and tested questionnaires (for interviews) and measurement instruments (for surveys).

Most students understand the first point but don’t give any thought to the rest. To get the most from the literature review process, you must keep all four points front of mind as you review the literature (more on this shortly), or you’ll land up with a wonky foundation.

Okay – with the why out the way, let’s move on to the how . As mentioned above, writing your literature review is a process, which I’ll break down into three steps:

  • Finding the most suitable literature
  • Understanding , distilling and organising the literature
  • Planning and writing up your literature review chapter

Importantly, you must complete steps one and two before you start writing up your chapter. I know it’s very tempting, but don’t try to kill two birds with one stone and write as you read. You’ll invariably end up wasting huge amounts of time re-writing and re-shaping, or you’ll just land up with a disjointed, hard-to-digest mess. Instead, you need to read first and distil the information, then plan and execute the writing.


Step 1: Find the relevant literature

Naturally, the first step in the literature review journey is to hunt down the existing research that’s relevant to your topic. While you probably already have a decent base of this from your research proposal, you need to expand on this substantially in the dissertation or thesis itself.

Essentially, you need to be looking for any existing literature that potentially helps you answer your research question (or develop it, if that’s not yet pinned down). There are numerous ways to find relevant literature, but I’ll cover my top four tactics here. I’d suggest combining all four methods to ensure that nothing slips past you:

Method 1 – Google Scholar Scrubbing

Google’s academic search engine, Google Scholar, is a great starting point as it provides a good high-level view of the relevant journal articles for whatever keyword you throw at it. Most valuably, it tells you how many times each article has been cited, which gives you an idea of how credible (or at least, popular) it is. Some articles will be free to access, while others will require an account, which brings us to the next method.

Method 2 – University Database Scrounging

Generally, universities provide students with access to an online library, which provides access to many (but not all) of the major journals.

So, if you find an article using Google Scholar that requires paid access (which is quite likely), search for that article in your university’s database – if it’s listed there, you’ll have access. Note that, generally, the search engine capabilities of these databases are poor, so make sure you search for the exact article name, or you might not find it.

Method 3 – Journal Article Snowballing

At the end of every academic journal article, you’ll find a list of references. As with any academic writing, these references are the building blocks of the article, so if the article is relevant to your topic, there’s a good chance a portion of the referenced works will be too. Do a quick scan of the titles and see what seems relevant, then search for the relevant ones in your university’s database.

Method 4 – Dissertation Scavenging

Similar to Method 3 above, you can leverage other students’ dissertations. All you have to do is skim through literature review chapters of existing dissertations related to your topic and you’ll find a gold mine of potential literature. Usually, your university will provide you with access to previous students’ dissertations, but you can also find a much larger selection in the following databases:

  • Open Access Theses & Dissertations
  • Stanford SearchWorks

Keep in mind that dissertations and theses are not as academically sound as published, peer-reviewed journal articles (because they’re written by students, not professionals), so be sure to check the credibility of any sources you find using this method. You can do this by assessing the citation count of any given article in Google Scholar. If you need help with assessing the credibility of any article, or with finding relevant research in general, you can chat with one of our Research Specialists .

Alright – with a good base of literature firmly under your belt, it’s time to move onto the next step.


Step 2: Log, catalogue and synthesise

Once you’ve built a little treasure trove of articles, it’s time to get reading and start digesting the information – what does it all mean?

While I present steps one and two (hunting and digesting) as sequential, in reality, it’s more of a back-and-forth tango – you’ll read a little, then have an idea, spot a new citation, or a new potential variable, and then go back to searching for articles. This is perfectly natural – through the reading process, your thoughts will develop, new avenues might crop up, and directional adjustments might arise. This is, after all, one of the main purposes of the literature review process (i.e. to familiarise yourself with the current state of research in your field).

As you’re working through your treasure chest, it’s essential that you simultaneously start organising the information. There are three aspects to this:

  • Logging reference information
  • Building an organised catalogue
  • Distilling and synthesising the information

I’ll discuss each of these below:

2.1 – Log the reference information

As you read each article, you should add it to your reference management software. I usually recommend Mendeley for this purpose (see the Mendeley 101 video below), but you can use whichever software you’re comfortable with. Most importantly, make sure you load EVERY article you read into your reference manager, even if it doesn’t seem very relevant at the time.

2.2 – Build an organised catalogue

In the beginning, you might feel confident that you can remember who said what, where, and what their main arguments were. Trust me, you won’t. If you do a thorough review of the relevant literature (as you must!), you’re going to read many, many articles, and it’s simply impossible to remember who said what, when, and in what context. Also, without the bird’s eye view that a catalogue provides, you’ll miss connections between various articles, and have no view of how the research developed over time. Simply put, it’s essential to build your own catalogue of the literature.

I would suggest using Excel to build your catalogue, as it allows you to run filters, colour code and sort – all very useful when your list grows large (which it will). How you lay your spreadsheet out is up to you, but I’d suggest you have the following columns (at minimum):

  • Author, date, title – Start with three columns containing this core information. This will make it easy for you to search for titles with certain words, order research by date, or group by author.
  • Categories or keywords – You can either create multiple columns, one for each category/theme and then tick the relevant categories, or you can have one column with keywords.
  • Key arguments/points – Use this column to succinctly convey the essence of the article, the key arguments and implications thereof for your research.
  • Context – Note the socioeconomic context in which the research was undertaken. For example, US-based, respondents aged 25-35, lower- income, etc. This will be useful for making an argument about gaps in the research.
  • Methodology – Note which methodology was used and why. Also, note any issues you feel arise due to the methodology. Again, you can use this to make an argument about gaps in the research.
  • Quotations – Note down any quoteworthy lines you feel might be useful later.
  • Notes – Make notes about anything not already covered. For example, linkages to or disagreements with other theories, questions raised but unanswered, shortcomings or limitations, and so forth.
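If you prefer working programmatically, the catalogue layout above can also be sketched as a small script that stores one row per article and exports to CSV for Excel. This is just an illustrative sketch, not part of the original template: the column names, the example entry, and the helper functions are all placeholders invented here.

```python
import csv
import io

# Columns mirroring the suggested catalogue layout (names are illustrative).
COLUMNS = ["author", "date", "title", "keywords", "key_arguments",
           "context", "methodology", "quotations", "notes"]

# A single placeholder entry -- not a real citation.
catalogue = [
    {"author": "Smith, J.", "date": "2019", "title": "An example study",
     "keywords": "usability; e-commerce",
     "key_arguments": "Navigation drives perceived usability.",
     "context": "US-based, online shoppers", "methodology": "Survey (n=200)",
     "quotations": "", "notes": ""},
]

def filter_by_keyword(rows, keyword):
    """Mimic an Excel filter: return rows whose keywords field mentions `keyword`."""
    return [r for r in rows if keyword in r["keywords"]]

def to_csv(rows):
    """Export the catalogue as CSV text, ready to open in Excel."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(len(filter_by_keyword(catalogue, "usability")))  # number of matching rows
```

The point of the sketch is the same as the spreadsheet's: one row per article, one column per attribute, so you can filter, sort and group as the list grows.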

If you’d like, you can try out our free catalog template here.

2.3 – Digest and synthesise

Most importantly, as you work through the literature and build your catalogue, you need to synthesise all the information in your own mind – how does it all fit together? Look for links between the various articles and try to develop a bigger picture view of the state of the research. Some important questions to ask yourself are:

  • What answers does the existing research provide to my own research questions?
  • Which points do the researchers agree (and disagree) on?
  • How has the research developed over time?
  • Where do the gaps in the current research lie?

To help you develop a big-picture view and synthesise all the information, you might find mind mapping software such as Freemind useful. Alternatively, if you’re a fan of physical note-taking, investing in a large whiteboard might work for you.

Mind mapping is a useful way to plan your literature review.

Step 3: Outline and write it up!

Once you’re satisfied that you have digested and distilled all the relevant literature in your mind, it’s time to put pen to paper (or rather, fingers to keyboard). There are two steps here – outlining and writing:

3.1 – Draw up your outline

Having spent so much time reading, it might be tempting to just start writing up without a clear structure in mind. However, it’s critically important to decide on your structure and develop a detailed outline before you write anything. Your literature review chapter needs to present a clear, logical and easy-to-follow narrative – and that requires some planning. Don’t try to wing it!

Naturally, you won’t always follow the plan to the letter, but without a detailed outline, you’re more than likely going to end up with a disjointed pile of waffle, and then you’re going to spend a far greater amount of time re-writing, hacking and patching. The adage, “measure twice, cut once” is very suitable here.

In terms of structure, the first decision you’ll have to make is whether you’ll lay out your review thematically (into themes) or chronologically (by date/period). The right choice depends on your topic, research objectives and research questions, which we discuss in this article.

Once that’s decided, you need to draw up an outline of your entire chapter in bullet point format. Try to get as detailed as possible, so that you know exactly what you’ll cover where, how each section will connect to the next, and how your entire argument will develop throughout the chapter. Also, at this stage, it’s a good idea to allocate rough word count limits for each section, so that you can identify word count problems before you’ve spent weeks or months writing!

PS – check out our free literature review chapter template…

3.2 – Get writing

With a detailed outline at your side, it’s time to start writing up (finally!). At this stage, it’s common to feel a bit of writer’s block and find yourself procrastinating under the pressure of finally having to put something on paper. To help with this, remember that the objective of the first draft is not perfection – it’s simply to get your thoughts out of your head and onto paper, after which you can refine them. The structure might change a little, the word count allocations might shift and shuffle, and you might add or remove a section – that’s all okay. Don’t worry about all this on your first draft – just get your thoughts down on paper.


Once you’ve got a full first draft (however rough it may be), step away from it for a day or two (longer if you can) and then come back at it with fresh eyes. Pay particular attention to the flow and narrative – does it fit together and flow from one section to another smoothly? Now’s the time to try to improve the linkage from each section to the next, tighten up the writing to be more concise, trim down the word count and sand it down into a more digestible read.

Once you’ve done that, give your writing to a friend or colleague who is not a subject matter expert and ask them if they understand the overall discussion. The best way to assess this is to ask them to explain the chapter back to you. This technique will give you a strong indication of which points were clearly communicated and which weren’t. If you’re working with Grad Coach, this is a good time to have your Research Specialist review your chapter.

Finally, tighten it up and send it off to your supervisor for comment. Some might argue that you should be sending your work to your supervisor sooner than this (indeed your university might formally require this), but in my experience, supervisors are extremely short on time (and often patience), so, the more refined your chapter is, the less time they’ll waste on addressing basic issues (which you know about already) and the more time they’ll spend on valuable feedback that will increase your mark-earning potential.

Literature Review Example

In the video below, we unpack an actual literature review so that you can see how all the core components come together in reality.

Let’s Recap

In this post, we’ve covered how to research and write up a high-quality literature review chapter. Let’s do a quick recap of the key takeaways:

  • It is essential to understand the WHY of the literature review before you read or write anything. Make sure you understand the 4 core functions of the process.
  • The first step is to hunt down the relevant literature. You can do this using Google Scholar, your university database, the snowballing technique and by reviewing other dissertations and theses.
  • Next, you need to log all the articles in your reference manager, build your own catalogue of literature and synthesise all the research.
  • Following that, you need to develop a detailed outline of your entire chapter – the more detail the better. Don’t start writing without a clear outline (on paper, not in your head!)
  • Write up your first draft in rough form – don’t aim for perfection. Remember, done beats perfect.
  • Refine your second draft and get a layman’s perspective on it. Then tighten it up and submit it to your supervisor.


Psst… there’s more!

This post is an extract from our bestselling Udemy Course, Literature Review Bootcamp. If you want to work smart, you don’t want to miss this.




What is a Literature Review? How to Write It (with Examples)


A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship, demonstrating your understanding of the topic and showing how your work contributes to the ongoing conversation in the field. Learning how to write a literature review is a critical skill for successful research. Your ability to summarize and synthesize prior research pertaining to a certain topic demonstrates your grasp of the topic of study and assists in the learning process.


What is a literature review?

A well-conducted literature review demonstrates the researcher’s familiarity with the existing literature, establishes the context for their own research, and contributes to scholarly conversations on the topic. One of the purposes of a literature review is also to help researchers avoid duplicating previous work and ensure that their research is informed by and builds upon the existing body of knowledge.


What is the purpose of a literature review?

A literature review serves several important purposes within academic and research contexts. Here are some key objectives and functions of a literature review: 2  

  • Contextualizing the Research Problem: The literature review provides a background and context for the research problem under investigation. It helps to situate the study within the existing body of knowledge. 
  • Identifying Gaps in Knowledge: By identifying gaps, contradictions, or areas requiring further research, the researcher can shape the research question and justify the significance of the study. This is crucial for ensuring that the new research contributes something novel to the field. 
  • Understanding Theoretical and Conceptual Frameworks: Literature reviews help researchers gain an understanding of the theoretical and conceptual frameworks used in previous studies. This aids in the development of a theoretical framework for the current research. 
  • Providing Methodological Insights: Another purpose of literature reviews is that it allows researchers to learn about the methodologies employed in previous studies. This can help in choosing appropriate research methods for the current study and avoiding pitfalls that others may have encountered. 
  • Establishing Credibility: A well-conducted literature review demonstrates the researcher’s familiarity with existing scholarship, establishing their credibility and expertise in the field. It also helps in building a solid foundation for the new research. 
  • Informing Hypotheses or Research Questions: The literature review guides the formulation of hypotheses or research questions by highlighting relevant findings and areas of uncertainty in existing literature. 

Literature review example

Let’s delve deeper with an example: say your literature review is about the impact of climate change on biodiversity. You might format your literature review into sections such as the effects of climate change on habitat loss and species extinction, phenological changes, and marine biodiversity. Each section would then summarize and analyze relevant studies in those areas, highlighting key findings and identifying gaps in the research. The review would conclude by emphasizing the need for further research on specific aspects of the relationship between climate change and biodiversity. The following literature review template provides a glimpse into the recommended literature review structure and content, demonstrating how research findings are organized around specific themes within a broader topic.

Literature Review on Climate Change Impacts on Biodiversity:

Climate change is a global phenomenon with far-reaching consequences, including significant impacts on biodiversity. This literature review synthesizes key findings from various studies: 

a. Habitat Loss and Species Extinction:

Climate change-induced alterations in temperature and precipitation patterns contribute to habitat loss, affecting numerous species (Thomas et al., 2004). The review discusses how these changes increase the risk of extinction, particularly for species with specific habitat requirements. 

b. Range Shifts and Phenological Changes:

Observations of range shifts and changes in the timing of biological events (phenology) are documented in response to changing climatic conditions (Parmesan & Yohe, 2003). These shifts affect ecosystems and may lead to mismatches between species and their resources. 

c. Ocean Acidification and Coral Reefs:

The review explores the impact of climate change on marine biodiversity, emphasizing ocean acidification’s threat to coral reefs (Hoegh-Guldberg et al., 2007). Changes in pH levels negatively affect coral calcification, disrupting the delicate balance of marine ecosystems. 

d. Adaptive Strategies and Conservation Efforts:

Recognizing the urgency of the situation, the literature review discusses various adaptive strategies adopted by species and conservation efforts aimed at mitigating the impacts of climate change on biodiversity (Hannah et al., 2007). It emphasizes the importance of interdisciplinary approaches for effective conservation planning. 


How to write a good literature review

Writing a literature review involves summarizing and synthesizing existing research on a particular topic. A good literature review format should include the following elements. 

Introduction: The introduction sets the stage for your literature review, providing context and introducing the main focus of your review. 

  • Opening Statement: Begin with a general statement about the broader topic and its significance in the field. 
  • Scope and Purpose: Clearly define the scope of your literature review. Explain the specific research question or objective you aim to address. 
  • Organizational Framework: Briefly outline the structure of your literature review, indicating how you will categorize and discuss the existing research. 
  • Significance of the Study: Highlight why your literature review is important and how it contributes to the understanding of the chosen topic. 
  • Thesis Statement: Conclude the introduction with a concise thesis statement that outlines the main argument or perspective you will develop in the body of the literature review. 

Body: The body of the literature review is where you provide a comprehensive analysis of existing literature, grouping studies based on themes, methodologies, or other relevant criteria. 

  • Organize by Theme or Concept: Group studies that share common themes, concepts, or methodologies. Discuss each theme or concept in detail, summarizing key findings and identifying gaps or areas of disagreement. 
  • Critical Analysis: Evaluate the strengths and weaknesses of each study. Discuss the methodologies used, the quality of evidence, and the overall contribution of each work to the understanding of the topic. 
  • Synthesis of Findings: Synthesize the information from different studies to highlight trends, patterns, or areas of consensus in the literature. 
  • Identification of Gaps: Discuss any gaps or limitations in the existing research and explain how your review contributes to filling these gaps. 
  • Transition between Sections: Provide smooth transitions between different themes or concepts to maintain the flow of your literature review. 

Conclusion: The conclusion of your literature review should summarize the main findings, highlight the contributions of the review, and suggest avenues for future research. 

  • Summary of Key Findings: Recap the main findings from the literature and restate how they contribute to your research question or objective. 
  • Contributions to the Field: Discuss the overall contribution of your literature review to the existing knowledge in the field. 
  • Implications and Applications: Explore the practical implications of the findings and suggest how they might impact future research or practice. 
  • Recommendations for Future Research: Identify areas that require further investigation and propose potential directions for future research in the field. 
  • Final Thoughts: Conclude with a final reflection on the importance of your literature review and its relevance to the broader academic community. 


Conducting a literature review

Conducting a literature review is an essential step in research that involves reviewing and analyzing existing literature on a specific topic. It’s important to know how to do a literature review effectively, so here are the steps to follow: 1  

Choose a Topic and Define the Research Question:

  • Select a topic that is relevant to your field of study. 
  • Clearly define your research question or objective. Determine what specific aspect of the topic you want to explore. 

Decide on the Scope of Your Review:

  • Determine the timeframe for your literature review. Are you focusing on recent developments, or do you want a historical overview? 
  • Consider the geographical scope. Is your review global, or are you focusing on a specific region? 
  • Define the inclusion and exclusion criteria. What types of sources will you include? Are there specific types of studies or publications you will exclude? 
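For researchers who track candidate sources programmatically, the inclusion and exclusion criteria above can be expressed as a simple filter. This is only an illustrative sketch: the record fields, cutoff year, and allowed source types are hypothetical examples, not prescribed criteria.

```python
# Sketch of applying inclusion/exclusion criteria to candidate sources.
# The record fields and cutoffs below are hypothetical examples.

def is_included(record, min_year=2010, allowed_types=("journal article", "review")):
    """Return True if a source meets the (example) inclusion criteria."""
    return record["year"] >= min_year and record["type"] in allowed_types

records = [
    {"title": "A", "year": 2015, "type": "journal article"},
    {"title": "B", "year": 2005, "type": "journal article"},  # too old
    {"title": "C", "year": 2020, "type": "blog post"},        # excluded type
]
included = [r["title"] for r in records if is_included(r)]
print(included)  # ['A']
```

Writing the criteria down this explicitly, even without code, makes the screening step transparent and replicable for anyone repeating the search.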

Select Databases for Searches:

  • Identify relevant databases for your field. Examples include PubMed, IEEE Xplore, Scopus, Web of Science, and Google Scholar. 
  • Consider searching in library catalogs, institutional repositories, and specialized databases related to your topic. 

Conduct Searches and Keep Track:

  • Develop a systematic search strategy using keywords, Boolean operators (AND, OR, NOT), and other search techniques. 
  • Record and document your search strategy for transparency and replicability. 
  • Keep track of the articles, including publication details, abstracts, and links. Use citation management tools like EndNote, Zotero, or Mendeley to organize your references. 
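The search strategy above can be sketched in code. In this illustrative example (the `build_query` helper and the search terms are hypothetical), synonyms for each concept are combined with OR and the concept groups are linked with AND, the Boolean syntax accepted by most bibliographic databases:

```python
# Sketch of a systematic Boolean search strategy: synonym groups joined
# with OR, concept groups joined with AND.

def build_query(concept_groups):
    """Build a Boolean query string from lists of synonyms per concept."""
    clauses = []
    for synonyms in concept_groups:
        # Quote multi-word phrases so they are searched as exact phrases.
        terms = [f'"{t}"' if " " in t else t for t in synonyms]
        clauses.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

# Example: a review on climate change impacts on biodiversity.
query = build_query([
    ["climate change", "global warming"],
    ["biodiversity", "species richness"],
])
print(query)
# ("climate change" OR "global warming") AND (biodiversity OR "species richness")
```

Recording the query string itself, rather than just the results, is what makes the search transparent and replicable later.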

Review the Literature:

  • Evaluate the relevance and quality of each source. Consider the methodology, sample size, and results of studies. 
  • Organize the literature by themes or key concepts. Identify patterns, trends, and gaps in the existing research. 
  • Summarize key findings and arguments from each source. Compare and contrast different perspectives. 
  • Identify areas where there is a consensus in the literature and where there are conflicting opinions. 
  • Provide critical analysis and synthesis of the literature. What are the strengths and weaknesses of existing research? 
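Researchers who track their reading in a spreadsheet or script can organize the literature by theme as sketched below. The themes reuse those from the climate-change example earlier; the `group_by_theme` helper is hypothetical.

```python
from collections import defaultdict

# Sketch of organizing reviewed sources by theme.

def group_by_theme(sources):
    """Group (citation, theme) pairs into a theme -> citations mapping."""
    grouped = defaultdict(list)
    for citation, theme in sources:
        grouped[theme].append(citation)
    return dict(grouped)

sources = [
    ("Thomas et al., 2004", "habitat loss"),
    ("Parmesan & Yohe, 2003", "phenological changes"),
    ("Hoegh-Guldberg et al., 2007", "marine biodiversity"),
    ("Hannah et al., 2007", "habitat loss"),
]
print(group_by_theme(sources)["habitat loss"])
# ['Thomas et al., 2004', 'Hannah et al., 2007']
```

The resulting mapping doubles as a first draft of the review's thematic outline: each key becomes a section, each list its sources.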

Organize and Write Your Literature Review:

  • Base your literature review outline on themes, chronological order, or methodological approaches. 
  • Write a clear and coherent narrative that synthesizes the information gathered. 
  • Use proper citations for each source and ensure consistency in your citation style (APA, MLA, Chicago, etc.). 
  • Conclude your literature review by summarizing key findings, identifying gaps, and suggesting areas for future research. 

The literature review sample and detailed advice on writing and conducting a review will help you produce a well-structured report. But remember that a literature review is an ongoing process, and it may be necessary to revisit and update it as your research progresses. 

Frequently asked questions

A literature review is a critical and comprehensive analysis of existing literature (published and unpublished works) on a specific topic or research question and provides a synthesis of the current state of knowledge in a particular field. A well-conducted literature review is crucial for researchers to build upon existing knowledge, avoid duplication of efforts, and contribute to the advancement of their field. It also helps researchers situate their work within a broader context and facilitates the development of a sound theoretical and conceptual framework for their studies.

A literature review is a crucial component of research writing, providing a solid background for a research paper’s investigation. The aim is to keep professionals up to date with ongoing developments within a specific field, including the research methods and experimental techniques used in that field, and to present that knowledge in the form of a written report. The depth and breadth of the literature review also emphasize the credibility of the scholar in his or her field.  

Before writing a literature review, it’s essential to undertake several preparatory steps to ensure that your review is well-researched, organized, and focused. This includes choosing a topic of general interest to you and doing exploratory research on that topic, writing an annotated bibliography, and noting major points, especially those that relate to the position you have taken on the topic. 

Literature reviews and academic research papers are essential components of scholarly work but serve different purposes within the academic realm. 3 A literature review aims to provide a foundation for understanding the current state of research on a particular topic, identify gaps or controversies, and lay the groundwork for future research. Therefore, it draws heavily from existing academic sources, including books, journal articles, and other scholarly publications. In contrast, an academic research paper aims to present new knowledge, contribute to the academic discourse, and advance the understanding of a specific research question. Therefore, it involves a mix of existing literature (in the introduction and literature review sections) and original data or findings obtained through research methods. 

Literature reviews are essential components of academic and research papers, and various strategies can be employed to conduct them effectively. If you want to know how to write a literature review for a research paper, here are four common approaches that are often used by researchers.

  • Chronological Review: This strategy involves organizing the literature based on the chronological order of publication. It helps to trace the development of a topic over time, showing how ideas, theories, and research have evolved. 
  • Thematic Review: Thematic reviews focus on identifying and analyzing themes or topics that cut across different studies. Instead of organizing the literature chronologically, it is grouped by key themes or concepts, allowing for a comprehensive exploration of various aspects of the topic. 
  • Methodological Review: This strategy involves organizing the literature based on the research methods employed in different studies. It helps to highlight the strengths and weaknesses of various methodologies and allows the reader to evaluate the reliability and validity of the research findings. 
  • Theoretical Review: A theoretical review examines the literature based on the theoretical frameworks used in different studies. This approach helps to identify the key theories that have been applied to the topic and assess their contributions to the understanding of the subject. 

It’s important to note that these strategies are not mutually exclusive, and a literature review may combine elements of more than one approach. The choice of strategy depends on the research question, the nature of the literature available, and the goals of the review. Additionally, other strategies, such as integrative reviews or systematic reviews, may be employed depending on the specific requirements of the research.

The literature review format can vary depending on the specific publication guidelines. However, there are some common elements and structures that are often followed. Here is a general guideline for the format of a literature review:

Introduction:
  • Provide an overview of the topic. 
  • Define the scope and purpose of the literature review. 
  • State the research question or objective. 

Body:
  • Organize the literature by themes, concepts, or chronology. 
  • Critically analyze and evaluate each source. 
  • Discuss the strengths and weaknesses of the studies. 
  • Highlight any methodological limitations or biases. 
  • Identify patterns, connections, or contradictions in the existing research. 

Conclusion:
  • Summarize the key points discussed in the literature review. 
  • Highlight the research gap. 
  • Address the research question or objective stated in the introduction. 
  • Highlight the contributions of the review and suggest directions for future research.

Both annotated bibliographies and literature reviews involve the examination of scholarly sources. The key difference is that annotated bibliographies describe individual sources with brief annotations, while literature reviews provide a more in-depth, integrated, and comprehensive analysis of existing literature on a specific topic. 

References 

  • Denney, A. S., & Tewksbury, R. (2013). How to write a literature review. Journal of Criminal Justice Education, 24(2), 218–234. 
  • Pan, M. L. (2016). Preparing literature reviews: Qualitative and quantitative approaches. Taylor & Francis. 
  • Cantero, C. (2019). How to write a literature review. San José State University Writing Center. 


Peabody Library

Literature Reviews

The Literature Review

What Is a Literature Review? According to the seventh edition of the APA Publication Manual, a literature review is "a critical evaluation of material that has already been published."  As one embarks on creating a literature review, it is important to note that the grouping of components within a literature review can be arranged according to the author's discretion.  However, it is important for the author to ensure the review reflects current APA publication standards.


  • Last Updated: Feb 29, 2024 4:20 PM
  • URL: https://researchguides.library.vanderbilt.edu/peabody/litreviews

Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

Writing a Literature Review

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis is
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically Evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches, for example:
      • Qualitative versus quantitative research
      • Empirical versus theoretical scholarship
      • Research divided by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theorical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography ( see our page on the this type of document ). Much of the information used in an annotated bibliography can be used also in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
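The citation-counting tip above can even be approximated mechanically. This rough, hypothetical heuristic counts distinct parenthetical citations per paragraph of a draft; it only recognizes simple "(Author, Year)" citations and is no substitute for actually reading the draft:

```python
import re

# Rough heuristic: count distinct parenthetical citations per paragraph.
# One citation per paragraph suggests summary; several suggest synthesis.
CITATION = re.compile(r"\(([A-Z][A-Za-z]+(?: et al\.)?),? (\d{4})\)")

def citations_per_paragraph(text):
    """Return the number of distinct citations found in each paragraph."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [len(set(CITATION.findall(p))) for p in paragraphs]

draft = (
    "Early work framed the problem (Smith, 2001).\n\n"
    "Later studies disagree (Jones, 2010), though replications "
    "(Lee, 2015) support the original claim (Smith, 2001)."
)
print(citations_per_paragraph(draft))  # [1, 3]
```

A run of paragraphs that each score 1 is a hint, not a verdict, that the draft is summarizing sources one at a time rather than putting them in conversation.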

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.

A Comprehensive Framework to Evaluate Websites: Literature Review and Development of GoodWeb

Affiliation

  • 1 Public Health England, Gloucester, United Kingdom.
  • PMID: 31651406
  • PMCID: PMC6914275
  • DOI: 10.2196/14372

Background: Attention is turning toward increasing the quality of websites and quality evaluation to attract new users and retain existing users.

Objective: This scoping study aimed to review and define existing worldwide methodologies and techniques to evaluate websites and provide a framework of appropriate website attributes that could be applied to any future website evaluations.

Methods: We systematically searched electronic databases and gray literature for studies of website evaluation. The results were exported to EndNote software, duplicates were removed, and eligible studies were identified. The results have been presented in narrative form.

Results: A total of 69 studies met the inclusion criteria. The extracted data included type of website, aim or purpose of the study, study populations (users and experts), sample size, setting (controlled environment and remotely assessed), website attributes evaluated, process of methodology, and process of analysis. Methods of evaluation varied and included questionnaires, observed website browsing, interviews or focus groups, and Web usage analysis. Evaluations using both users and experts and controlled and remote settings are represented. Website attributes that were examined included usability or ease of use, content, design criteria, functionality, appearance, interactivity, satisfaction, and loyalty. Website evaluation methods should be tailored to the needs of specific websites and individual aims of evaluations. GoodWeb, a website evaluation guide, has been presented with a case scenario.

Conclusions: This scoping study supports the open debate of defining the quality of websites, and there are numerous approaches and models to evaluate it. However, as this study provides a framework of the existing literature of website evaluation, it presents a guide of options for evaluating websites, including which attributes to analyze and options for appropriate methods.

Keywords: human-computer interaction; quality testing; scoping study; software testing; usability; user experience.

©Rosalie Allison, Catherine Hayes, Cliodna A M McNulty, Vicki Young. Originally published in JMIR Formative Research (http://formative.jmir.org), 24.10.2019.


Serious games in high-stakes assessment contexts: a systematic literature review into the game design principles for valid game-based performance assessment

  • Research Article
  • Open access
  • Published: 08 April 2024

  • Aranka Bijl   ORCID: orcid.org/0000-0001-5745-1396 1 , 2 , 3 ,
  • Bernard P. Veldkamp 2 ,
  • Saskia Wools 3 &
  • Sebastiaan de Klerk 3  

The systematic literature review (1) investigates whether ‘serious games’ provide a viable solution to the limitations posed by traditional high-stakes performance assessments and (2) aims to synthesize game design principles for the game-based performance assessment of professional competencies. In total, 56 publications were included in the final review, targeting knowledge, motor skills and cognitive skills and further narrowed down to teaching, training or assessing professional competencies. Our review demonstrates that serious games are able to provide an environment and task authentic to the target competency. Collected in-game behaviors indicate that serious games are able to elicit behavior that is related to a candidates’ ability level. Progress feedback and freedom of gameplay in serious games can be implemented to provide an engaging and enjoyable environment for candidates. Few studies examined adaptivity and some examined serious games without an authentic environment or task. Overall, the review gives an overview of game design principles for game-based performance assessment. It highlights two research gaps regarding authenticity and adaptivity and concludes with three implications for practice.

In the years since their first introduction (ca. 1950s), videogames have only increased in popularity. In education, videogames are already widely applied as tools to support students in learning (cf. Boyle et al., 2016 ; Ifenthaler et al., 2012 ; Young et al., 2012 ). In contrast, less research has been done on the use of videogames as summative assessment environments, even though administering (high-stakes) summative assessments through games has several advantages.

First, videogames can be used to administer standardized assessments that provide richer data about candidate ability in comparison to traditional standardized assessments (e.g., multiple-choice tests; Schwartz & Arena, 2013 ; Shaffer & Gee, 2012 ; Shute & Rahimi, 2021 ). Second, assessment through videogames gives considerable freedom in recreating real-life criterion situations, which allows for authentic, situated assessment even when this is not feasible in the real working environment (Bell et al., 2008 ; Dörner et al., 2016 ; Fonteneau et al., 2020 ; Harteveld, 2011 ; Kirriemur & McFarlane, 2004 ; Michael & Chen, 2006 ). Third, videogames can offer candidates a more enjoyable test experience by providing an engaging environment where they are given a high degree of autonomy (Boyle et al., 2012 ; Jones, 1998 ; Mavridis & Tsiatsos, 2017 ). Finally, videogames allow for assessment through in-game behaviors (i.e., stealth assessment), which intends to make assessment less salient for candidates and lets them retain engagement (Shute & Ke, 2012 ; Shute et al., 2009 ).

The benefits above highlight why videogames are viable assessment environments, irrespective of the specific level of cognitive achievement (e.g., those depicted in Bloom’s revised taxonomy; Krathwohl, 2002 ). Moreover, the possibility for immersing candidates in complex, situated contexts make them especially interesting for higher-order learning outcomes such as problem solving and critical thinking (Dede, 2009 ; Shute & Ke, 2012 ). Therefore, videogames may provide a solution to the validity threats associated with traditional high-stakes performance assessments: an assessment type to evaluate competencies through a construct-relevant task in the context for which it is intended (Lane & Stone, 2006 ; Messick, 1994 ; Stecher, 2010 ), often used for the purpose of vocational certification.

The first validity threat associated with high-stakes performance assessments is the prevalence of test anxiety among candidates (Lane & Stone, 2006 ; Messick, 1994 ; Stecher, 2010 ), which is shown to be negatively correlated to test performance (von der Embse et al., 2018 ; von der Embse & Witmer, 2014 ). Although some debate exists about the causal relationship between the two (Jerrim, 2022 ; von der Embse et al., 2018 ), it is apparent that candidates who experience test anxiety are unfairly disadvantaged in high-stakes assessment contexts.

The second threat identified is caused by the need for high-stakes performance assessment to be both standardized, to ensure objectivity and fairness (AERA et al., 2014 ; Kane, 2006 ), and to include a construct-relevant task (e.g., writing an essay, participating in a roleplay; Lane & Stone, 2006 ; Messick, 1994 ). While neither requirement rules out adaptivity (e.g., adaptive testing and open-ended assessments), the combination often restricts us to a linear performance task that is not adaptable to candidate ability level. The potential mismatch between task difficulty and the ability level of candidates poses two disadvantages. First, the mismatch can frustrate candidates, which negatively affects their test performance (Wainer, 2000 ). Second, candidates likely receive fewer tasks that align with their ability level, which negatively affects test reliability and efficiency (Burr et al., 2023 ). High-stakes performance assessments would thus benefit from adaptive testing that is personalized and appropriately difficult, allowing candidates to be challenged enough to retain engagement (Burr et al., 2023 ; Malone & Lepper, 1987 ; Van Eck, 2006 ) while assessors are able to determine efficiently and reliably whether the candidate is at the required level (Burr et al., 2023 ; Davey, 2011 ). Additionally, adaptive testing allows for more personalized (end-of-assessment) feedback that could further boost candidate performance (Burr et al., 2023 ; Martin & Lazendic, 2018 ).

The third threat identified in high-stakes performance assessment is a lack of assessment authenticity. Logically, assessment would best be administered in the authentic context (i.e., the workplace in the case of professional competencies). This leads to a high degree of fidelity: how closely the assessment environment mirrors reality (Alessi, 1988, as cited in Gulikers et al., 2004 ). Unfortunately, this is not attainable for competencies that are dangerous or unethical to carry out (Bell et al., 2008 ; Williams-Bell et al., 2015 ). Another concern is that assessments administered in the workplace are largely dependent on the workplace in which they are carried out. This would lead to considerable variations between candidates in testing conditions, but also in the construct relevance of the tasks they are evaluated on (Baartman & Gulikers, 2017 ). Since authenticity of the physical context and of the task are two dimensions required for mobilizing the competencies of interest (Gulikers et al., 2004 ), there is a need to achieve authenticity in other ways. Authenticity is also related to transfer: applying what is learned to new contexts. The higher the alignment between assessment and reality, the more likely it is that the transfer of competence to professional practice is made.

The fourth threat identified concerns inconsistencies between raters in scoring candidate performance. Traditional high-stakes performance assessments are often accompanied by rubrics to evaluate candidate performance; however, inconsistencies in how rubrics are interpreted and used lead to construct-irrelevant variance (Lane & Stone, 2006 ; Wools et al., 2010 ). In this study, the aim is to investigate whether ‘serious games’ (SGs)—those “used for purposes other than mere entertainment” (Susi et al., 2007 ; p. 1)—provide a viable solution to this and the other limitations posed by traditional high-stakes performance assessments.

The most important characteristic of games is that they are played with a clear goal in mind. Many games have a predetermined goal, but other games allow players to define their own objectives (Charsky, 2010 ; Prensky, 2001 ). Goals are given structure by the provision of rules, choices, and feedback (Lameras et al., 2017 ). First, rules direct players towards the goal by placing restrictions on gameplay (Charsky, 2010 ). Second, choices enable players to make decisions, for example to choose between different strategies to attain the goal (Charsky, 2010 ). The extent to which rules restrict gameplay is also closely related to the choices players have in the game (Charsky, 2010 ). Thus, rules and choices seem to be two ends of a continuum that determines the linearity of a game, where linearity is defined as the extent to which players are given freedom of gameplay (Kim & Shute, 2015 ; Rouse, 2004 ). The third characteristic, feedback, is a well-studied topic in the field of education. There, the main purpose of feedback is to help students gain insight into their learning and bring student understanding to the level of the learning goals (Hattie & Timperley, 2007 ; Shute, 2008 ; van der Kleij et al., 2012 ). In games, feedback is used in a similar way to guide players towards the goal, as well as to facilitate interactivity (Prensky, 2001 ). Feedback in games is provided in many modalities and gives players information about how they are progressing and where they stand with regard to the goal, for instance whether their actions have brought them closer to the goal or further away. Games are made up of a collection of game mechanics that define the game and determine how it is played (Rouse, 2004 ; Schell, 2015 ). In other words, game mechanics are how the defining features of games are translated into gameplay.
To illustrate, game mechanics that provide feedback to players can include hints, gaining or losing lives, progress bars, dashboards, currencies and/or progress trees (Lameras et al., 2017 ).

When designing a game-based performance assessment, determining what information should be collected about candidates to inform competence, and designing the tasks that fulfill this information need, should be considered carefully for each professional competency. One way to do so is through the evidence-centered design (ECD) framework (cf. Mislevy & Riconscente, 2006 ). The ECD framework is a systematic approach to test development that relies on evidentiary arguments to move from a candidate’s behavior on a task to inferences about that candidate’s ability. It is beyond the scope of the current study to examine the design of game content in relation to the target professional competencies. In this systematic literature review, the aim is to determine which game mechanics could help overcome the validity threats associated with high-stakes performance assessments and are suitable for use in such assessments.

Previous research on game design has focused on instructional SGs (e.g., dos Santos & Fraternali, 2016 ; Gunter et al., 2008 ). For SGs used in high-stakes performance assessments, the potential effect of game mechanics on the validity of inferences should additionally be considered. For instance, choices in game design can affect correlations between in-game behavior and player ability (Kim & Shute, 2015 ). Moreover, some game mechanics are likely to introduce construct-irrelevant variance when used in high-stakes performance assessments. To illustrate, when direct feedback about performance (e.g., points, lives, feedback messages) is given to players, at least part of the variance in test scores would be explained by the type and amount of feedback a candidate has received.

Establishing design principles for SGs for high-stakes performance assessment is important for several reasons. First, such an overview allows future developers of such assessments to make more informed choices regarding game design. Second, combining and organizing the insights gained from the available empirical evidence advances the knowledge framework around the implementation of high-stakes performance assessment through games. Reviews on the use of games exist for learning (e.g., Boyle et al., 2016 ; Connolly et al., 2012 ; Young et al., 2012 ) or are targeted at specific professional domains (e.g., Gao et al., 2019 ; Gorbanev et al., 2018 ; Graafland et al., 2012 ; Wang et al., 2016 ). Nevertheless, a research gap remains: to our knowledge, no systematic literature review addresses the high-stakes performance assessment of professional competencies. To this end, this study begins with identifying the available literature on SGs targeted at professional competencies; then extracts the implemented game mechanics that could help to overcome the validity threats associated with high-stakes performance assessment; and finally synthesizes game design principles for game-based performance assessment in high-stakes contexts.

The scope of the current review is limited to professional competencies specifically catered to a vocation (e.g., construction hazard recognition). More generic professional competencies (e.g., programming) are not taken into consideration, as the context in which they are used can also fall outside of secondary vocational and higher education. Additionally, there is a growing body of literature that recognizes the potential of in-game behavior as a source of information about ability level in the context of game-based learning (e.g., Chen et al., 2020 ; Kim & Shute, 2015 ; Shute et al., 2009 ; Wang et al., 2015 ; Westera et al., 2014 ). As the relationship between in-game behavior and candidate ability is of equal importance in assessment, the scope of the current review includes SGs that focus not only on assessment, but also teaching and training of professional competencies.

The following section describes the procedure followed in conducting the current systematic literature review. First, a description of the inclusion criteria and search terms is given. This is followed by a description of the selection process and data extraction, together with an evaluation of the objectivity of the inclusion and quality criteria. Then, the search and selection results are presented, and two further categorizations of the included studies are operationalized: the type of competency and how a successful SG is defined.

Following the guidelines described in Systematic Reviews in the Social Sciences (Petticrew & Roberts, 2005 ), the protocol below gives a description and the rationale behind the review along with a description of how different studies were identified, analyzed, and synthesized.

Databases and search terms

The databases that include most publications from the field of educational measurement ( Education Resources Information Center (ERIC) , PsycInfo , Scopus , and Web of Science) were consulted for the literature search using the following search terms:

Serious game : (serious gam* or game-based assess* or game-based learn* or game-based train*) and

Quality measure : (perform* or valid* or effect* or affect*)

Inclusion criteria and selection process

The initial search results were narrowed down by selecting only publications that were published in English and in a scientific, peer-reviewed journal. To be included, studies were required to report on the empirical research results of a study that (1) focused on a digital SG used for teaching, training, or assessment of one or more professional competencies specific to a work setting, (2) was conducted in secondary vocational education, higher education or vocational settings, and (3) included a measure to assess the dependent variable related to the quality of the SG. Studies were excluded when the focus was on simulations; while they have an overlapping role in the acquisition of professional competencies to SGs, these modalities represent distinct types of digital environments.

All results from the databases were exported to Endnote X9 (The EndNote Team, 2013 ) for screening. The selection process was conducted in three rounds. In the first round, duplicates and alternative document types (e.g., editorials, conference proceedings, letters) were removed, and the publications were screened based on titles and abstracts; publications were removed when the title or abstract mentioned features of the study mutually exclusive with the inclusion criteria (e.g., primary school, rehabilitation, systematic literature review). In the second round, titles and abstracts of the remaining results were screened again; when the title or abstract lacked information, the full article was inspected. To illustrate, some titles and abstracts did not mention the target population, whether the game was digital, or whether the professional competency was specific to a work setting. In the third round, full-text articles were screened for full compliance with the inclusion criteria, and data was extracted from the publications that complied.

The objectivity of the inclusion criteria was determined by blinded double classification on two occasions. On the first occasion, after the removal of duplicates and alternative document types, 30 randomly selected publications were independently double-classified by an expert in the field of educational measurement based on the title and abstract. An agreement rate of 93% with a Cohen’s Kappa coefficient of .81 translated to near-perfect inter-rater reliability (Landis & Koch, 1977 ). On the second occasion, a random selection of 32 publications considered for data extraction was blindly double-classified based on the full text by a master’s student in educational measurement, which resulted in an agreement rate of 97% with a near-perfect Cohen’s Kappa coefficient (.94; Landis & Koch, 1977 ).
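The two reliability statistics used here can be made concrete with a short sketch. The function below (illustrative only, not the authors' analysis code) computes the observed agreement rate and Cohen's Kappa for two raters' label lists; the labels are hypothetical.

```python
# Illustrative sketch: observed agreement and Cohen's Kappa for a blinded
# double classification by two raters. Labels are hypothetical.
from collections import Counter

def agreement_and_kappa(rater_a, rater_b):
    """Return (observed agreement p_o, Cohen's Kappa) for two label lists."""
    n = len(rater_a)
    # Proportion of items on which the raters assign the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement p_e from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    # Kappa corrects the observed agreement for agreement expected by chance.
    return p_o, (p_o - p_e) / (1 - p_e)
```

Kappa is lower than the raw agreement rate whenever some agreement is expected by chance alone, which is why both statistics are reported in the text.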

To assess the comprehensiveness of the systematic review and identify additional relevant studies, snowballing was conducted by backward and forward reference searching in Web of Science . For publications not available on Web of Science , snowballing was done in Scopus .

Data extraction

For the publications included, data was extracted systematically by means of a data extraction form (Supplementary Information SI1). The data extraction form includes: (1) general information, (2) details on the professional competency and research design, (3) serious game (SG) specifics and (4) a quality checklist.

The quality checklist contains 12 closed questions with three response options: the criterion is met (1), the criterion is met partly (.5), and the criterion is not met (0). Studies that scored 7 or below were considered to be of poor quality and were excluded. Studies that scored between 7.5 and 9.5 were considered to be of medium quality, while studies with scores 10 or above were considered to be of good quality (denoted with an asterisk in the data selection table; Supplementary Information SI2). These categories were determined by piloting the study quality checklist on two publications that were included, based on the inclusion criteria: one that was considered to be of a poor quality and one that was considered to be of good quality. The scores obtained by those studies were set as the lower and upper threshold, respectively.

As this systematic literature review is focused on the extraction of game mechanics to inform game design principles, all articles included in the review needed to obtain a score of at least .5 on the criterion that the game is discussed in enough detail. When publications explicitly referred to external sources for additional information, information from those sources was included in the data extraction form as well.

Blinded double coding to determine the reliability of the quality criteria for inclusion was done by the same raters described above. Twenty-four randomly selected publications from the final review were included, with varying overlap between three raters. The assigned scores were translated to the corresponding class (i.e., poor, medium, and good) to calculate the agreement rate. The rates ranged between 82 and 93%, which corresponds to Cohen’s Kappa coefficients between substantial and near perfect (.66–.88; Landis & Koch, 1977 ; Table  1 ).

Search and selection results

In the PRISMA flow diagram of the publication selection process (Fig.  1 ; Moher et al., 2009 ), the two rounds in which titles and abstracts were screened for eligibility are combined. The databases were consulted on the 21st of December 2020 and yielded a total of 6,128 publications. After the removal of duplicates, 3,160 publications were left. On the basis of the inclusion criteria, another 2,981 publications were excluded from the review. In total, data was extracted from 179 publications. During the examination of the full-text articles, 129 studies were excluded due to insufficient quality (n = 42), lack of a detailed game description (n = 6), unavailability of the article (n = 5), not classifying the application as a game (n = 10) and an overall mismatch with the inclusion criteria (n = 66). In total, 50 publications were included. Snowballing was conducted in November of 2021 and resulted in the inclusion of six additional studies. In total, 56 publications were included in the final review.
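The selection counts reported above can be checked with simple bookkeeping. The sketch below reproduces the arithmetic of the flow diagram; all numbers come from the text, and the variable names are illustrative.

```python
# Bookkeeping sketch reproducing the selection counts reported in the text.
identified = 6128                  # database search results
after_dedup = 3160                 # left after removing duplicates
excluded_on_criteria = 2981        # removed during title/abstract screening
extracted = after_dedup - excluded_on_criteria

# Reasons for exclusion found during full-text examination.
full_text_exclusions = {
    "insufficient quality": 42,
    "no detailed game description": 6,
    "article unavailable": 5,
    "not classified as a game": 10,
    "mismatch with inclusion criteria": 66,
}
included = extracted - sum(full_text_exclusions.values())
final_review = included + 6        # six additional studies via snowballing

print(extracted, included, final_review)  # 179 50 56
```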

figure 1

PRISMA flow diagram of inclusion of the systematic literature review. PRISMA  preferred reporting items for systematic reviews and meta-analyses

Categorization of selected studies

Competency types.

Professional competencies are acquired and assessed in different ways. Given the variety of professional competencies, there is no universal game design that is likely to be beneficial across the board (Wouters et al., 2009 ). Other researchers (e.g., Young et al., 2012 ) even suggest that game design principles should not be generalized across games, contexts or competencies. While more content-related game design principles likely need to be defined per context, this review is conducted with the idea that generic game design principles exist that can be successfully used in multiple contexts. In that sense, the aim is to provide a starting point from where more context-specific SGs can be designed, for example through the use of ECD.

The review is organized according to the type of professional competency that is evaluated rather than the content of the SG under investigation, as this provides an idea of what researchers expect to train or assess within the SG. Different distinctions between competencies can be made. For example, Wouters et al. ( 2009 ) distinguish between cognitive, motor, affective, and communicative competencies. Moreover, Harteveld ( 2011 ) distinguishes between knowledge, skills, and attitudes. These taxonomies served as a basis to inductively categorize the targeted professional competencies into knowledge, motor skills, and cognitive skills.

The knowledge category includes studies that focus on, for instance, declarative knowledge (i.e., fact-based) or procedural knowledge (i.e., how to do something), such as the procedural steps involved in cardiopulmonary resuscitation (CPR). The motor skills category refers to motor behaviors (i.e., movements); for CPR, an example would be compression depth. The cognitive skills category encompasses skills such as reasoning, planning, and decision making, for example the recognition of situations that require CPR.

Successful SGs

The scope of this systematic literature review is limited to SGs that are shown to be successful in teaching, training, or the assessment of professional competencies. As research methodologies differ between studies, there is a need to define what characterizes a successful SG. When an SG was used in teaching or training, it was deemed successful when a significant improvement in the targeted professional competency was found (e.g., through an external validated measure of the competency). Some studies compared an active control group and an experimental group that additionally received an SG (e.g., Boada et al., 2015 ; Dankbaar et al., 2016 ; Graafland et al., 2017 ; see Supplementary Information SI2 for a full account): an SG was not deemed successful in the current results when these two groups showed comparable results. When an SG was used for assessment, it was deemed successful when (1) research results showed a significant relationship between the SG and a validated measure of the targeted competency, or (2) the SG was shown to accurately distinguish between different competency levels.

The studies included in the review are discussed in two ways. First, descriptives of the included studies are given in terms of the degree to which games were successful in teaching, training, or assessment of professional competencies, the professional domains, and the competency types. Then, the game mechanics associated with the potential solutions to the validity threats in traditional performance assessment are presented.

Descriptives of the included studies

The final review includes 56 studies, published between 2006 and 2020 (consult Supplementary Information SI2 for a more detailed overview). No noteworthy differences were found between the SGs that aimed to teach, train, and assess professional competencies. Therefore, the results for the SGs included in the review are presented collectively.

Serious games with successful results

Divided by the type of professional competency evaluated, 84%, 83%, and 100% of studies reported research results showing the SG was successful for cognitive skills, knowledge, and motor skills, respectively (Table  2 ). Of the studies included in the systematic review, three found mixed effects of the SG under investigation between competency types (i.e., Luu et al., 2020 ; Phungoen et al., 2020 ; Tan et al., 2017 ).

Professional domains and competency types

The studies included in the review can be divided over seven professional domains (Table  3 ). These are further separated into professional competencies (see Supplementary Information SI2 for a full account). Examples include history taking (Alyami et al., 2019 ), crisis management (Steinrücke et al., 2020 ) and cultural understanding (Brown et al., 2018 ). Furthermore, the studies included in the review can be divided into three competency types: cognitive skills (n = 21), knowledge (n = 31), and motor skills (n = 4). An important note is that some studies evaluate the SG on more than one competency type, thus the sum of these categories is greater than the total number of studies included.

Game mechanics

The following section discusses the inclusion of game mechanics—all design choices within the game—for the SGs discussed in the studies included in the review. Following the aim of the current paper, the game mechanics discussed are selected for having the potential to (1) mediate the validity threats associated with traditional performance assessments, and (2) be appropriate for implementing in a game-based performance assessment.

Authenticity

Authenticity in the SGs is divided into two dimensions: authenticity of the physical context and of the task. First, an example of a physical context that was not representative of the real working environment was found for all three competency types (Table  4 ). Regarding the SGs targeted at cognitive skills, this was the case for Effic’ Asthme (Fonteneau et al., 2020 ). In this SG, the target population—medical students—would normally carry out pediatric asthma exacerbation management in a hospital setting. The game environment used is, however, the virtual bedroom of a child. Regarding the SGs targeted at knowledge, Alyami et al. ( 2019 ) implemented the game Metaphoria to teach history taking content to medical students. Here, the game environment is inside a pyramid within a fantasy world. The final SG using a game environment that does not resemble the real working environment, within the motor skills competency type, was studied by Jalink et al. ( 2014 ). In this SG, laparoscopic skills are trained by having players perform tasks in an underground mining environment.

Second, of the studies for which task authenticity could be determined, all but four included an authentic task for the professional competency targeted (Table  5 ). Examples of a task that was not authentic were found for all three competency types. Two SGs that targeted cognitive skills did not include an authentic task (Brown et al., 2018 ; Chee et al., 2019 ) as a result of implementing role reversals. Within these SGs, the players played in a reversed role, and thus the task did not match the task in the real working environment. One SG targeting knowledge did not include an authentic task (Alyami et al., 2019 ). In Metaphoria , the task for players is to interpret visual metaphors in relation to symptoms, whereas the target professional competency was history taking content. Finally, in the SG studied by Drummond et al. ( 2017 ), targeting motor skills, the professional competency under investigation was not represented authentically within the game, as navigation was through point-and-click.

Unobtrusive data collection

For all three competency types, studies were found that use in-game data to make inferences about player ability (Table  6 ). While other studies did mention the collection of in-game behaviors, the results were limited to those that assessed the appropriateness of using the data in the assessment of competencies.

Different measures of in-game behaviors were found. First, 12 SGs determine competency by comparing player performance to some predetermined target, sometimes also translated to a score. In the game VERITAS (Veracity Education and Reactance Instruction through Technology and Applied Skills; Miller et al., 2019 ), for instance, players are assessed on whether they accurately judge whether the statement given by a character in the game is true or false. Second, seven SGs use time spent (i.e., completion time or playing time) as a measure of performance. For example, in the SG Wii Laparoscopy (Jalink et al., 2014 ), completion time is used to assess performance. This performance metric showed a high correlation with performance on a validated measure for laparoscopic skills, but it should be noted that time penalties were included for mistakes made during the task. Finally, the use of log data was found in one SG targeted at cognitive skills (Steinrücke et al., 2020 ). In the Dilemma Game, in-game measures collected during gameplay were found to have promising relationships with competency levels.

In SGs, the difficulty level can be adapted in two ways: independent of the actions of players or dependent on the actions of players (Table  7 ). Whereas SGs that varied in difficulty level were found for professional competencies related to both knowledge and motor skills, none were found for professional competencies related to cognitive skills. Three SGs were found that adjust the difficulty level based on player actions; however, none of them adjusts the difficulty level down based on player actions. Three studies evaluated SGs where the difficulty level was varied independent of player actions. Regarding the SGs targeted at knowledge, players either received fixed assignments (Boada et al., 2015 ) or were able to set the difficulty level prior to gameplay (Taillandier & Adam, 2018 ). The SG studied by Asadipour et al. ( 2017 ), targeting motor skills, increased challenge by building up the flying speed during the game as well as by randomly generating coins, but this was independent of player ability. Two SGs targeted at knowledge did mention difficulty levels, but not how they were adjusted. The SG Metaphoria (Alyami et al., 2019 ) included three difficulty levels. The SG Sustainability Challenge (Dib & Adamo-Villani, 2014 ) became more challenging as players progressed to higher levels, but it is not clear when or how this was done.

Test anxiety

As described earlier, games are able to provide a more enjoyable testing experience by offering an engaging environment with a high degree of autonomy. Therefore, the way the game characteristics (feedback, rules, and choices) are expressed in the studies included in the review is discussed below. To avoid confusion with the linearity of assessment, the expression 'freedom of gameplay' is used to describe the interaction between rules and choices.

First, seven examples were found where players are given feedback unrelated to performance (Table  8 ). Ways feedback was given include a dashboard (Perini et al., 2018 ), remaining resources (Calderón et al., 2018 ; Taillandier & Adam, 2018 ), remaining time (Calderón et al., 2018 ; Dankbaar et al., 2017a , 2017b ; Mohan et al., 2014 ), or remaining tasks (Jalink et al., 2014 ).

Second, all but two of the studies included in the review incorporate game mechanics that give some freedom of gameplay (Table  9 ). For cognitive skills and knowledge, game mechanics included the choice between multiple options (n = 14 for both), the inclusion of interactive elements (n = 8 for both), and the possibility for free exploration (n = 5 and n = 8, respectively). Two examples of customization were found: Dib and Adamo-Villani ( 2014 ) gave players the choice of avatar, whereas Alyami et al. ( 2019 ) allowed for a custom name. For the SGs that target motor skills, freedom of gameplay was given through control over the movements. For three out of four SGs in this category, special controllers were developed to give players authentic control over the movements in the game. This was not the case for Drummond et al. ( 2017 ), as their game did not explicitly train CPR; however, the researchers did assess its effect on motor skills.

Included studies

The final review included 56 studies. Of these, many reported positive results. This suggests that SGs are often successful in teaching, training, or assessing professional competencies, but it could also point to a publication bias towards positive results. As reviews similar to the current one (e.g., Connolly et al., 2012 ; Randel et al., 1992 ; Vansickle, 1986 ; Wouters et al., 2009 ) draw on similar databases, it is difficult to establish which explanation holds. Some studies found mixed results for different competency types, suggesting that different approaches are warranted. Therefore, game mechanics in SGs for different competency types are discussed separately.

The review included few studies on SGs targeting motor skills compared to those targeting cognitive skills and knowledge. The low number of SGs for motor skills could be due to the need for specialized equipment to create an SG targeting motor skills. For example, Wii Laparoscopy (Jalink et al., 2014 ) is played using controllers that are specifically designed for the game. Not only does this require an extra investment, it also affects the ease of large-scale implementation. There is no indication that motor skills cannot be assessed through SGs: four out of five studies have shown positive effects, both in learning effectiveness and assessment accuracy. Despite this, the benefits may only outweigh the added costs in situations where it is unfeasible to perform the professional competency in the real working environment.

Focusing on game mechanics for the authenticity of the physical context and the task, the results indicate that SGs are able to provide both. It should be noted that, while SGs are able to simulate the physical context and task with high fidelity, authenticity remains a matter of perception (Gulikers et al., 2008 ). The review focused only on those SGs that were successful when compared to validated measures of the targeted professional competency. Since these measures are considered to be accurate proxies for workplace performance, the transfer to the real working environment is likely to have been made. For all three competency types, examples were found for SGs that did not include an authentic physical context or authentic task, while still mobilizing competencies of interest. Even though the number of SGs in these categories is quite small, it does indicate that it is possible to assess professional competencies without an authentic environment or task.

The in-game measures most often used in the included SGs are those that indicate how well a player did in comparison to some standard or target. This suggests that SGs are able to elicit behavior in players that is dependent on their ability level in the target professional competency. Since the accuracy measures varied depending on the professional competency, an investigation is warranted to determine which in-game measures are indicative of ability per situation. Evidentiary frameworks such as the ECD framework can provide guidance in determining which data could be used to make inferences about candidate ability. Despite the promising results, more research should be done on the informational value of log data before claims can be made.

Some studies were found in which the difficulty of the SG was adaptive. In particular, some promising relationships between in-game behaviors and ability level were found. In traditional (high-stakes) testing, adaptivity has already been implemented successfully (Martin & Lazendic, 2018 ; Straetmans & Eggen, 2007 ). There are, however, professional competencies for which ability levels cannot be differentiated: one is either able to perform them or not. For such competencies, adaptivity has no added benefit. In contrast, for professional competencies where it is possible to differentiate ability levels, adaptivity should be considered.
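A minimal sketch of the kind of adaptivity discussed here, adjusting task difficulty to the candidate's demonstrated ability, is the classic up/down staircase rule from adaptive testing. The rule and bounds below are a generic illustration under assumed parameters, not a procedure taken from any reviewed study.

```python
# Illustrative staircase adaptivity: raise difficulty after a correct
# response, lower it after an incorrect one, within fixed bounds.
# The step size and difficulty range are assumptions for this sketch.

def next_difficulty(current, correct, lowest=1, highest=10):
    """Return the difficulty of the next task given the last response."""
    step = 1 if correct else -1
    return min(highest, max(lowest, current + step))

# The trajectory oscillates around the level at which the candidate
# succeeds roughly half of the time, which estimates their ability.
trajectory = [5]
for was_correct in [True, True, False, True, False, False]:
    trajectory.append(next_difficulty(trajectory[-1], was_correct))
print(trajectory)  # [5, 6, 7, 6, 7, 6, 5]
```

Operational adaptive tests typically replace this heuristic with item-response-theory models that update an ability estimate after each response; the staircase merely illustrates the principle of matching difficulty to ability.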

Considering the appropriateness of game mechanics for high-stakes assessment, feedback considered in the current review was limited to progress feedback. This adds a fourth type of feedback to those already recognized for assessment: knowledge of correct response, elaborated feedback, and delayed knowledge of results (van der Kleij et al., 2012 ). Although the small number of SGs that incorporated progress feedback affects the generalizability of this finding, it does indicate that feedback about progress may be the most appropriate solution.

Freedom of gameplay

A variety of game mechanics implemented in the reviewed SGs provide freedom of gameplay. While some studies did not elaborate on the choices given in the game, common ways of giving players freedom are choice options, interactive elements, and freedom to explore. These game mechanics were found across various studies, which raises the possibility that these findings generalize to new SGs targeted at assessing professional competencies. Other game mechanics related to freedom of gameplay were found in smaller numbers; further research should shed light on their generalizability. Moreover, the freedom of gameplay provided to the player plays a substantial role in shaping overall player experience and behavior (Kim & Shute, 2015 ; Kirginas & Gouscos, 2017 ). Therefore, future research should also examine whether different game mechanics influence players in different ways.

Limitations

Although the current systematic literature review provides a useful overview of the game design principles for game-based performance assessment of professional competencies, some limitations are identified.

First, the review covered a substantial number of studies from the healthcare domain. This may be because the medical field consists of many higher-order standardized tasks that are particularly suitable for SGs. The large contribution of studies from the healthcare domain could limit the generalizability to other domains. However, the results of this systematic review were quite uniform; no indication was found that SGs in healthcare employed different game mechanics than those in other domains. Moreover, the growing popularity of SGs in healthcare education (Wang et al., 2016 ) means a higher number of studies was available compared to other professional domains. It is advisable to regard the current results as a starting point for game design principles for game-based performance assessment. Further research into the generalizability of game design principles across professional domains is warranted.

The second limitation is true for all systematic literature reviews: a review is a cross-section of the literature and may not present the full picture. The inclusion of studies is dependent on what is available in the search databases, what is accessible, and what keywords are used in the literature. Likely due to this limitation, only studies published from 2006 onward are included in the review, while the use of SGs dates back much further (Randel et al., 1992 ; Vansickle, 1986 ). To minimize the omission of relevant literature, snowballing was conducted on the final selection of studies. This method allowed for the inclusion of related and potentially relevant studies. In total, six additional publications were included through this method out of the 2,370 considered.

After snowballing, an assessment of why these additionally included studies were not found through the original search yielded several insights. First, three studies used the term (educational) video game in their publication on SGs (Duque et al., 2008 ; Jalink et al., 2014 ; Mohan et al., 2017 ). Including this term in the original search would have resulted in too many hits outside of the scope of the current review. Second, Moreno-Ger et al. ( 2010 ) used the term simulation to describe the application, but refer to the application as game-like. As simulations fall outside of the scope of the current review, the absence of this study in the initial search cannot be attributed to a gap in the search terms. Third, the publication by Blanié et al. ( 2020 ) was probably not found due to a mismatch in search terms related to the quality measure. Additional search terms such as impact or improve could have been included. As only one additional study presented this issue, it is unlikely to have had a great effect on the outcome of the review. Finally, it is unclear why the study by Fonteneau et al. ( 2020 ) was not found through the initial search, as it matched the search terms used in the current review. Perhaps this misclassification can be ascribed to the search databases queried.

Finally, many of the studies included in the review compare SGs to other alternatives, digital or non-digital, in terms of learning. Such comparisons often include many confounding variables (Cook, 2005 ), because the interventions compared differ in more ways than one. These differences can affect the results positively, negatively, or through interactions with other features.

Suggestions for future research

Besides providing interesting insights, the current review also has implications for research. First, the review identified SGs successful in teaching, training, or assessment that did not authentically represent the physical context or task; however, too few examples were found to generalize this finding. Second, while some studies were found in which the SG's difficulty was adaptive, more studies should be conducted on the implementation of adaptivity within SGs, in particular on how in-game behavior can be used to match the difficulty level to the ability level of candidates. Third, fantasy is included in many games (Charsky, 2010 ; Prensky, 2001 ) and is regarded as one of the reasons for playing them (Boyle et al., 2016 ). By including fantasy elements in game-based performance assessments, assessment can become even more engaging and enjoyable, and candidates can become even less aware of being assessed. For learning, it has been suggested that fantasy should be closely connected to the learning content (Gunter et al., 2008 ; Malone, 1981 ), but further research might explore whether this holds for SGs used for the (high-stakes) assessment of professional competencies. Furthermore, while fantasy elements may blur the direct link between the SG and professional practice, in-game behavior may still have a clear relationship with professional competencies (Kim & Shute, 2015 ; Simons et al., 2021 ). More research into the effect of authenticity on the measurement validity of SGs in assessing professional competencies is warranted.

Implications for practice

Based on the results of the review, four recommendations can be made for practice. First, regardless of the competency type, design the SG in such a way that both the task and the context are authentic. The results have shown that SGs are able to provide a representation of the physical context and task that is authentic to the professional competency under investigation. Thus, in situations where the physical context or assessment task is difficult to represent in a traditional performance assessment, SGs can provide a solution. At the same time, non-authentic (fantasy) contexts and tasks should be investigated further before being implemented in high-stakes performance assessment.

Second, ensure that in-game behavior within the SG is collected. This review has synthesized additional evidence for the potential of in-game behavior as a source of information about ability level. That being said, the in-game behavior that can be used to inform ability level depends on both the professional competency of interest and the game design. While no generalized design principles regarding the collection of gameplay data can be given, evidentiary frameworks (e.g., ECD) can be used to determine which in-game behavior can be used to infer ability level. This is ultimately connected to the implementation of adaptivity. While only a limited number of SGs were found that implemented adaptivity, the potential to unobtrusively collect data about ability level underscores a missed opportunity for the wider implementation of adaptivity in SGs. Taken together with the successful implementation of adaptive testing in traditional high-stakes assessments (Martin & Lazendic, 2018 ; Straetmans & Eggen, 2007 ), a third recommendation is to implement adaptivity where appropriate.

Finally, this review gives an overview of game mechanics for high-stakes game-based performance assessment that carry little risk of affecting validity. To provide freedom of gameplay in SGs targeted at cognitive skills and knowledge, include free exploration, interactive elements, and choice options. For motor skills, giving control over movements is a, perhaps straightforward, game design principle. Furthermore, feedback in SGs for high-stakes performance assessment can be provided through progress feedback, which differs from traditional types of feedback in education (van der Kleij et al., 2012 ) but has the potential to satisfy feedback as a game mechanic. These recommendations, intended for game developers, may prove useful in designing future SGs for the (high-stakes) assessment of professional competencies.

In-text citations

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing . American Educational Research Association.

Baartman, L., & Gulikers, J. (2017). Assessment in Dutch vocational education: Overview and tensions of the past 15 years. In E. De Bruijn, S. Billet, & J. Onstenk (Eds.), Enhancing teaching and learning in the Dutch vocational education system: Reforms enacted (pp. 245–266). Springer.

Bell, B. S., Kanar, A. M., & Kozlowski, S. W. J. (2008). Current issues and future directions in simulation-based training in North America. The International Journal of Human Resource Management, 19 (8), 1416–1434. https://doi.org/10.1080/09585190802200173

Boyle, E., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., Lim, T., Ninaus, M., Ribeiro, C., & Pereira, J. (2016). An update to the systematic literature review of empirical evidence on the impacts and outcomes of computer games and serious games. Computers & Education, 94 , 178–192. https://doi.org/10.1016/j.compedu.2015.11.003

Boyle, E. A., Connolly, T. M., Hainey, T., & Boyle, J. M. (2012). Engagement in digital entertainment games: A systematic review. Computers in Human Behavior, 28 (3), 771–780. https://doi.org/10.1016/j.chb.2011.11.020

Burr, S., Gale, T., Kisielewska, J., Millin, P., Pêgo, J., Pinter, G., Robinson, I., & Zahra, D. (2023). A narrative review of adaptive testing and its application to medical education. MedEdPublish . https://doi.org/10.12688/mep.19844.1

Charsky, D. (2010). From edutainment to serious games: A change in the use of game characteristics. Games and Culture, 5 (2), 177–198. https://doi.org/10.1177/1555412009354727

Chen, F., Cui, Y., & Chu, M.-W. (2020). Utilizing game analytics to inform and validate digital game-based assessment with evidence-centered game design: A case study. International Journal of Artificial Intelligence in Education, 30 (3), 481–503. https://doi.org/10.1007/s40593-020-00202-6

Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59 (2), 661–686. https://doi.org/10.1016/j.compedu.2012.03.004

Cook, D. A. (2005). The research we still are not doing: An agenda for the study of computer-based learning. Academic Medicine, 80 (6), 541–548. https://doi.org/10.1097/00001888-200506000-00005

Davey, T. (2011). A guide to computer adaptive testing systems . Council of Chief State School Officers.

Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323 (5910), 66–69. https://doi.org/10.1126/science.1167311

Dörner, R., Göbel, S., Effelsberg, W., & Wiemeyer, J. (2016). Introduction. In R. Dörner, S. Göbel, W. Effelsberg, & J. Wiemeyer (Eds.), Serious games: Foundations, concepts and practice (pp. 1–34). Springer.

dos Santos, A. D., & Fraternali, P. (2016). A Comparison of methodological frameworks for digital learning game design. Lecture notes in computer science games and learning alliance. Springer.

Gao, Y., Gonzalez, V. A., & Yiu, T. W. (2019). The effectiveness of traditional tools and computer-aided technologies for health and safety training in the in the construction sector: A systematic review. Computers & Education, 138 , 101–115. https://doi.org/10.1016/j.compedu.2019.05.003

Gorbanev, I., Agudelo-Londoño, S., González, R. A., Cortes, A., Pomares, A., Delgadillo, V., Yepes, F. J., & Muñoz, Ó. (2018). A systematic review of serious games in medical education: Quality of evidence and pedagogical strategy. Medical Education Online, 23 (1), Article 1438718. https://doi.org/10.1080/10872981.2018.1438718

Graafland, M., Schraagen, J. M., & Schijven, M. P. (2012). Systematic review of serious games for medical education and surgical skills training. British Journal of Surgery, 99 (10), 1322–1330. https://doi.org/10.1002/bjs.8819

Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52 (3), 67. https://doi.org/10.1007/BF02504676

Gulikers, J. T. M., Bastiaens, T. J., Kirschner, P. A., & Kester, L. (2008). Authenticity is in the eye of the beholder: Student and teacher perceptions of assessment authenticity. Journal of Vocational Education and Training, 60 (4), 401–412. https://doi.org/10.1080/13636820802591830

Gunter, G. A., Kenny, R. F., & Vick, E. H. (2008). Taking educational games seriously: using the RETAIN model to design endogenous fantasy into standalone educational games. Educational Technology Research and Development, 56 (5), 511–537. https://doi.org/10.1007/s11423-007-9073-2

Harteveld, C. (2011). Foundations. Triadic Game design: balancing reality, meaning and play (pp. 31–93). Springer.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–112. https://doi.org/10.3102/003465430298487

Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment in game-based learning: Foundations, innovations, and perspectives . Springer.

Jerrim, J. (2022). Test anxiety: Is it associated with performance in high-stakes examinations? Oxford Review of Education . https://doi.org/10.1080/03054985.2022.2079616

Jones, M. G. (1998). Creating engagement in computer-based learning environments . https://www.yumpu.com/en/document/read/18776351/creating-engagement-in-computer-based-learning-environments

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Praeger Publishers.

Kim, Y. J., & Shute, V. J. (2015). The interplay of game elements with psychometric qualities, learning, and enjoyment in game-based assessment. Computers & Education, 87 , 340–356. https://doi.org/10.1016/j.compedu.2015.07.009

Kirginas, S., & Gouscos, D. (2017). Exploring the impact of freeform gameplay on players’ experience: an experiment with maze games at varying levels of freedom of movement. International Journal of Serious Games . https://doi.org/10.17083/ijsg.v4i4.175

Kirriemur, J., & McFarlane, A. (2004). Literature review in games and learning . Sage.

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41 (4), 212–218. https://doi.org/10.1207/s15430421tip4104_2

Lameras, P., Arnab, S., Dunwell, I., Stewart, C., Clarke, S., & Petridis, P. (2017). Essential features of serious games design in higher education: Linking learning attributes to game mechanics. British Journal of Educational Technology, 48 (4), 972–994. https://doi.org/10.1111/bjet.12467

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33 (1), 159–174.

Lane, S., & Stone, C. A. (2006). Performance assessment. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 387–431). Praeger Publishers.

Malone, T. W. (1981). Toward a theory of intrinsically motivating instruction. Cogntive Science, 4 , 333–369. https://doi.org/10.1207/s15516709cog0504_2

Malone, T. W., & Lepper, M. R. (1987). Making learning fun: A taxonomy of intrinsic motivations for learning. In R. E. Snow & M. J. Farr (Eds.), Aptitude, learning, and instruction: Conative and affective process analyses (pp. 223–253). Lawrence Erlbaum Associates, Inc.

Martin, A. J., & Lazendic, G. (2018). Computer-adaptive testing: Implications for students’ achievement, motivation, engagement, and subjective test experience. Journal of Educational Psychology, 110 , 27–45. https://doi.org/10.1037/edu0000205

Mavridis, A., & Tsiatsos, T. (2017). Game-based assessment: Investigating the impact on test anxiety and exam performance. Journal of Computer Assisted Learning, 33 (2), 137–150. https://doi.org/10.1111/jcal.12170

Messick, S. (1994). Alternative modes of assessment, uniform standards of validity. ETS Research Report Series, 1994 (2), i–22. https://doi.org/10.1002/j.2333-8504.1994.tb01634.x

Michael, D., & Chen, S. (2006). Serious games: Games that educate, train, and inform . Muska & Lipman/Premier-Trade.

Mislevy, R. J., & Riconscente, M. M. (2006). Evidence-centered assessment design. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 61–90). Lawrence Erlbaum Associates.

Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & the PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLOS Medicine, 6 (7), e1000097. https://doi.org/10.1371/journal.pmed.1000097

Petticrew, M., & Roberts, H. (2005). Systematic reviews in the social sciences: A practical guide. Blackwell Publishing . https://doi.org/10.1002/9780470754887

Prensky, M. (2001). Fun, play and games: What makes games engaging? In M. Prensky (Ed.), Digital game-based learning (pp. 16–47). McGraw-Hill.

Randel, J. M., Morris, B. A., Wetzel, C. D., & Whitehill, B. V. (1992). The effectiveness of games for educational purposes: A review of recent research. Simulation & Gaming, 23 (3), 261–276. https://doi.org/10.1177/1046878192233001

Rouse, R. (2004). Game design: Theory and practice (2nd ed.). Jones and Bartlett Publishers, Inc.

Schell, J. (2015). The art of game design: A book of lenses (2nd ed.). CRC Press.

Schwartz, D. L., & Arena, D. (2013). Measuring what matters most: Choice-based assessment for the digital age . The MIT Press.

Shaffer, D. W., & Gee, J. P. (2012). The right kind of GATE: Computer games and the future of assessment. In M. C. Mayrath, J. Clarke-Midura, & D. H. Robinson (Eds.), Technology-based assessments for 21st century skills: Theoretical and practical implications from modern research (pp. 211–228). Information Age Publishing.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153–189. https://doi.org/10.3102/0034654307313795

Shute, V. J., & Ke, F. (2012). Games, learning, and assessment. In D. Ifenthaler, D. Eseryel, & X. Ge (Eds.), Assessment in game-based learning: foundations, innovations, and perspectives (pp. 43–58). Springer.

Shute, V. J., & Rahimi, S. (2021). Stealth assessment of creativity in a physics video game. Computers in Human Behavior, 116 , Article 106647. https://doi.org/10.1016/j.chb.2020.106647

Shute, V. J., Ventura, M., Bauer, M., & Zapata-Rivera, D. (2009). Melding the power of serious games and embedded assessment to monitor and foster learning: Flow and grow. In U. Ritterfeld, M. J. Cody, & P. Vorderer (Eds.), Serious games: Mechanisms and effects (pp. 295–321). Routledge.

Simons, A., Wohlgenannt, I., Weinmann, M., & Fleischer, S. (2021). Good gamers, good managers? A proof-of-concept study with Sid Meier’s Civilization. Review of Managerial Science, 15 (4), 957–990. https://doi.org/10.1007/s11846-020-00378-0

Stecher, B. (2010). Performance assessment in an era of standards-based educational accountability . Stanford University, Stanford Center for Opportunity Policy in Education.

Straetmans, G. J. J. M., & Eggen, T. J. H. M. (2007). WISCAT-pabo: computergestuurd adaptief toetspakket rekenen. Onderwijsinnovatie, 2017 (3), 17–27.

Susi, T., Johannesson, J., & Backlund, P. (2007). Serious game—An overview [IKI Technical Reports] . https://www.diva-portal.org/smash/get/diva2:2416/FULLTEXT01.pdf

The EndNote Team. (2013). EndNote (Version X9) Clarivate. https://endnote.com/

van der Kleij, F. M., Eggen, T. J. H. M., Timmers, C. F., & Veldkamp, B. P. (2012). Effects of feedback in a computer-based assessment for learning. Computers & Education, 58 (1), 263–272. https://doi.org/10.1016/j.compedu.2011.07.020

Van Eck, R. (2006). Digital game-based learning: It’s not just the digital natives who are restless. Educause Review, 41 (2), 16–30.

Vansickle, R. L. (1986). A quantitative review of research on instructional simulation gaming: A twenty-year perspective. Theory & Research in Social Education, 14 (3), 245–264. https://doi.org/10.1080/00933104.1986.10505525

von der Embse, N., Jester, D., Roy, D., & Post, J. (2018). Test anxiety effects, predictors, and correlates: A 30-year meta-analytic review. Journal of Affective Disorders, 227 , 483–493. https://doi.org/10.1016/j.jad.2017.11.048

von der Embse, N., & Witmer, S. E. (2014). High-stakes accountability: Student anxiety and large-scale testing. Journal of Applied School Psychology, 30 (2), 132–156. https://doi.org/10.1080/15377903.2014.888529

Wainer, H. (2000). Introduction and history. In H. Wainer, N. J. Dorans, R. Eignor, B. F. Green, R. J. Mislevy, L. Steinberg, & D. Thissen (Eds.), Computerized adaptive testing: A primer (2nd ed., pp. 1–21). Lawrence Erlbaum Associates Inc.

Wang, L., Shute, V., & Moore, G. R. (2015). Lessons learned and best practices of stealth assessment. International Journal of Gaming and Computer-Mediated Simulations, 7 (4), 66–87. https://doi.org/10.4018/ijgcms.2015100104

Wang, R., DeMaria, S., Jr., Goldberg, A., & Katz, D. (2016). A systematic review of serious games in training health care professionals. Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, 11 (1), 41–51. https://doi.org/10.1097/sih.0000000000000118

Westera, W., Nadolski, R., & Hummel, H. (2014). Serious gaming analytics—What students’ log files tell us about gaming and learning. International Journal of Serious Games, 1 (2), 35–50. https://doi.org/10.17083/ijsg.v1i2.9

Williams-Bell, F. M., Kapralos, B., Hogue, A., Murphy, B. M., & Weckman, E. J. (2015). Using serious games and virtual simulation for training in the fire service: A review. Fire Technology, 51 , 553–584. https://doi.org/10.1007/s10694-014-0398-1

Wools, S., Eggen, T., & Sanders, P. (2010). Evaluation of validity and validation by means of the argument-based approach. Cadmo . https://doi.org/10.3280/cad2010-001007

Wouters, P., van der Spek, E. D., & van Oostendorp, H. (2009). Current practices in serious game research: A review from a learning outcomes perspective. In T. Connolly, M. Stansfield, & L. Boyle (Eds.), Games-based learning advancements for multi-sensory human computer interfaces: Techniques and effective practices (pp. 232–250). IGI Global.

Young, M. F., Slota, S., Cutter, A. B., Jalette, G., Mullin, G., Lai, B., Simeoni, Z., Tran, M., & Yukhymenko, M. (2012). Our princess is in another castle: A review of trends in serious gaming for education. Review of Educational Research, 82 (1), 61–89. https://doi.org/10.3102/0034654312436980

Studies included in the systematic review

Adams, A., Hart, J., Iacovides, I., Beavers, S., Oliveira, M., & Magroudi, M. (2019). Co-created evaluation: Identifying how games support police learning. International Journal of Human-Computer Studies, 132 , 34–44. https://doi.org/10.1016/j.ijhcs.2019.03.009

Aksoy, E. (2019). Comparing the effects on learning outcomes of tablet-based and virtual reality–based serious gaming modules for basic life support training: Randomized trial. JMIR Serious Games, 7 (2), Article e13442. https://doi.org/10.2196/13442

Albert, A., Hallowell, M. R., Kleiner, B., Chen, A., & Golparvar-Fard, M. (2014). Enhancing construction hazard recognition with high-fidelity augmented virtuality. Journal of Construction Engineering and Management, 140 (7), Article 04014024. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000860

Alyami, H., Alawami, M., Lyndon, M., Alyami, M., Coomarasamy, C., Henning, M., Hill, A., & Sundram, F. (2019). Impact of using a 3D visual metaphor serious game to teach history-taking content to medical students: Longitudinal mixed methods pilot study. JMIR Serious Games, 7 (3), Article e13748. https://doi.org/10.2196/13748

Ameerbakhsh, O., Maharaj, S., Hussain, A., & McAdam, B. (2019). A comparison of two methods of using a serious game for teaching marine ecology in a university setting. International Journal of Human-Computer Studies, 127 , 181–189. https://doi.org/10.1016/j.ijhcs.2018.07.004

Asadipour, A., Debattista, K., & Chalmers, A. (2017). Visuohaptic augmented feedback for enhancing motor skill acquisition. The Visual Computer, 33 (4), 401–411. https://doi.org/10.1007/s00371-016-1275-3

Barab, S. A., Scott, B., Siyahhan, S., Goldstone, R., Ingram-Goble, A., Zuiker, S. J., & Warren, S. (2009). Transformational play as a curriculur scaffold: Using videogames to support science education. Journal of Science Education and Technology, 18 (4), 305–320. https://doi.org/10.1007/s10956-009-9171-5

Benda, N. C., Kellogg, K. M., Hoffman, D. J., Fairbanks, R. J., & Auguste, T. (2020). Lessons learned from an evaluation of serious gaming as an alternative to mannequin-based simulation technology: Randomized controlled trial. JMIR Serious Games, 8 (3), Article e21123. https://doi.org/10.2196/21123

Bindoff, I., Ling, T., Bereznicki, L., Westbury, J., Chalmers, L., Peterson, G., & Ollington, R. (2014). A computer simulation of community pharmacy practice for educational use. American Journal of Pharmaceutical Education, 78 (9), Article 168. https://doi.org/10.5688/ajpe789168

Binsubaih, A., Maddock, S., & Romano, D. (2006). A serious game for traffic accident investigators. Interactive Technology and Smart Education, 3 (4), 329–346. https://doi.org/10.1108/17415650680000071

Blanié, A., Amorim, M. A., & Benhamou, D. (2020). Comparative value of a simulation by gaming and a traditional teaching method to improve clinical reasoning skills necessary to detect patient deterioration: A randomized study in nursing students. BMC Medical Education, 20 (1), Article 53. https://doi.org/10.1186/s12909-020-1939-6

Boada, I., Rodriguez-Benitez, A., Garcia-Gonzalez, J. M., Olivet, J., Carreras, V., & Sbert, M. (2015). Using a serious game to complement CPR instruction in a nurse faculty. Computer Methods and Programs in Biomedicine, 122 (2), 282–291. https://doi.org/10.1016/j.cmpb.2015.08.006

Brown, D. E., Moenning, A., Guerlain, S., Turnbull, B., Abel, D., & Meyer, C. (2018). Design and evaluation of an avatar-based cultural training system. The Journal of Defense Modeling and Simulation, 16 (2), 159–174. https://doi.org/10.1177/1548512918807593

Buttussi, F., Pellis, T., Cabas Vidani, A., Pausler, D., Carchietti, E., & Chittaro, L. (2013). Evaluation of a 3D serious game for advanced life support retraining. International Journal Medical Informatics, 82 (9), 798–809. https://doi.org/10.1016/j.ijmedinf.2013.05.007

Calderón, A., Ruiz, M., & O’Connor, R. V. (2018). A serious game to support the ISO 21500 standard education in the context of software project management. Computer Standards & Interfaces, 60 , 80–92. https://doi.org/10.1016/j.csi.2018.04.012

Chan, W. Y., Qin, J., Chui, Y. P., & Heng, P. A. (2012). A serious game for learning ultrasound-guided needle placement skills. IEEE Transactions on Information Technology in Biomedicine, 16 (6), 1032–1042. https://doi.org/10.1109/titb.2012.2204406

Chang, C., Kao, C., Hwang, G., & Lin, F. (2020). From experiencing to critical thinking: A contextual game-based learning approach to improving nursing students’ performance in electrocardiogram training. Educational Technology Research and Development, 68 (3), 1225–1245. https://doi.org/10.1007/s11423-019-09723-x

Chee, E. J. M., Prabhakaran, L., Neo, L. P., Carpio, G. A. C., Tan, A. J. Q., Lee, C. C. S., & Liaw, S. Y. (2019). Play and learn with patients—Designing and evaluating a serious game to enhance nurses’ inhaler teaching techniques: A randomized controlled trial. Games for Health Journal, 8 (3), 187–194. https://doi.org/10.1089/g4h.2018.0073

Chon, S., Timmermann, F., Dratsch, T., Schuelper, N., Plum, P., Berlth, F., Datta, R. R., Schramm, C., Haneder, S., Späth, M. R., Dübbers, M., Kleinert, J., Raupach, T., Bruns, C., & Kleinert, R. (2019). Serious games in surgical medical education: A virtual emergency department as a tool for teaching clinical reasoning to medical students. JMIR Serious Games, 7 (1), Article e13028. https://doi.org/10.2196/13028
