Research Methods

  • Getting Started
  • Literature Review Research
  • Research Design
  • Research Design By Discipline
  • SAGE Research Methods
  • Teaching with SAGE Research Methods

Literature Review

  • What is a Literature Review?
  • What is NOT a Literature Review?
  • Purposes of a Literature Review
  • Types of Literature Reviews
  • Literature Reviews vs. Systematic Reviews
  • Systematic vs. Meta-Analysis

A literature review is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.

A literature review can also be defined as the collected body of scholarly works related to a topic:

  • Summarizes and analyzes previous research relevant to a topic
  • Includes scholarly books and articles published in academic journals
  • Can be a stand-alone scholarly paper or a section in a research paper

The objective of a literature review is to find previously published scholarly works relevant to a specific topic. A review can:

  • Help gather ideas or information
  • Keep up to date with current trends and findings
  • Help develop new questions

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Helps focus your own research questions or problems.
  • Discovers relationships between research studies/ideas.
  • Suggests unexplored ideas or populations.
  • Identifies major themes, concepts, and researchers on a topic.
  • Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
  • Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
  • Indicates potential directions for future research.

All content in this section is from the Literature Review Research guide created by Old Dominion University.

Keep in mind that a literature review is NOT:

Not an essay 

Not an annotated bibliography  in which you summarize each article that you have reviewed.  A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.

Not a research paper   where you select resources to support one side of an issue versus another. A literature review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.

A literature review serves several purposes. For example, it

  • provides thorough knowledge of previous studies; introduces seminal works.
  • helps focus one’s own research topic.
  • identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
  • suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
  • identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
  • helps the researcher avoid repetition of earlier research.
  • suggests unexplored populations.
  • determines whether past studies agree or disagree; identifies controversy in the literature.
  • tests assumptions; may help counter preconceived ideas and remove unconscious bias.

As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true," even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., those of theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in the areas of ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues that we should be aware of and consider as we go through our study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and it helps develop new hypotheses to be tested. Often this form is used to establish a lack of appropriate theories or to reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature."  Educational Researcher  36 (April 2007): 139-147.

All content in this section is from The Literature Review, created by Dr. Robert Labaree, University of Southern California.

Robinson, P. and Lowe, J. (2015). Literature reviews vs systematic reviews. Australian and New Zealand Journal of Public Health, 39: 103. doi: 10.1111/1753-6405.12393


What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters. By Lynn Kysh, University of Southern California.


Systematic review or meta-analysis?

A  systematic review  answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.

A  meta-analysis  is the use of statistical methods to summarize the results of these studies.

Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:

  • clearly stated objectives with pre-defined eligibility criteria for studies
  • explicit, reproducible methodology
  • a systematic search that attempts to identify all studies
  • assessment of the validity of the findings of the included studies (e.g. risk of bias)
  • systematic presentation, and synthesis, of the characteristics and findings of the included studies

Not all systematic reviews contain meta-analysis. 

Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.  More information on meta-analyses can be found in  Cochrane Handbook, Chapter 9 .

A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies.  It is a systematic review that uses quantitative methods to synthesize and summarize the results.
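As a minimal illustration of the statistical idea described above (not taken from any of the cited guides), a fixed-effect meta-analysis pools study effect sizes by inverse-variance weighting; the pooled estimate ends up with a smaller standard error than any individual study, which is the "more precise estimates" the text refers to. The numbers below are hypothetical.

```python
# Hedged sketch of inverse-variance (fixed-effect) pooling.
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted mean of study effect sizes.

    Each study is weighted by 1/variance, so precise studies count more.
    Returns the pooled effect and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three hypothetical studies reporting the same outcome:
effects = [0.30, 0.10, 0.25]      # e.g., standardized mean differences
variances = [0.04, 0.01, 0.02]    # squared standard errors

pooled, se = fixed_effect_pool(effects, variances)
# The pooled standard error is smaller than any single study's standard
# error, illustrating the precision gain described in the text.
```

Real meta-analyses also assess heterogeneity and often use random-effects models; this sketch shows only the core weighting step.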

An advantage of a meta-analysis is that it supports a more objective evaluation of research findings. Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted. In that case, an integrative review is an appropriate strategy.

Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.

  • Last Updated: Aug 21, 2023 4:07 PM
  • URL: https://guides.lib.udel.edu/researchmethods


Frontiers in Psychology

Digital literacy in the university setting: A literature review of empirical studies between 2010 and 2021

Nieves Gutiérrez-Ángel

1 Departamento de Psicología, Área de Psicología Evolutiva y de la Educación, Universidad de Almería, Almeria, Spain

Jesús-Nicasio Sánchez-García

2 Departamento de Psicología, Sociología y Filosofía, Universidad de León, Leon, Spain

Isabel Mercader-Rubio

Judit García-Martín

3 Departamento de Psicología Evolutiva y de la Educación, Universidad de Salamanca, Salamanca, Spain

Sonia Brito-Costa

4 Instituto Politécnico de Coímbra, Coimbra, Portugal

5 Coimbra Education School, Research Group in Social and Human Sciences Núcleo de Investigação em Ciências Sociais e Humanas da ESEC (NICSH), Coimbra, Portugal

Associated Data

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

The impact of digital devices and the Internet has generated various changes at social, political, and economic levels, the repercussion of which is a great challenge characterized by the changing and globalized nature of today's society. This demands the development of new skills and new learning models in relation to information and communication technologies. Universities must respond to these social demands in the training of their future professionals. This paper aims to analyze the empirical evidence provided by international studies in the last eleven years, related to the digital literacy of university students, including those pursuing degrees related to the field of education. Our findings highlight the fact that the digital literacy that is offered in universities to graduate/postgraduate students, in addition to treating digital literacy as a central theme, also focuses on perceived and developed self-efficacy. This is done by strengthening competencies related to digital writing and reading, the use of databases, the digital design of content and materials, and the skills to edit, publish or share them on the web, or applications aimed at treating digital literacy as emerging pedagogies and educational innovation. Secondly, we found studies related to digital competencies and use of the Internet, social networks, web 2.0, or the treatment of digital risks and their relationship with digital literacy. Thirdly, we found works that, in addition to focusing on digital literacy, also focused on different psychological constructs such as motivation, commitment, attitudes, or satisfaction.

Systematic review registration: https://www.scopus.com/home.uri ; https://www.recursoscientificos.fecyt.es/ .

Introduction

The concept of digital literacy (DL) appears for the first time in the works of Zurkowski ( 1974 ), for whom it is an ability to identify, locate, and examine information. However, despite its novelty, the conceptions it encompasses have been changing (Lim and Newby, 2021 ). Proof of this are the contributions of Gilster ( 1997 ) who combines the idea that DL is also closely linked to skills such as access, evaluation, and management of information used in learning processes. Digital learning is understood as the set of technical-procedural, cognitive, and socio-emotional skills necessary to live, learn, and work in a digital society (Eshet-Alkalai, 2012 ; European Commission, 2018 ). It is related to reading, writing, calculation skills, and effective use of technology in personal, social, and professional areas. It is also considered inseparable from the social and educational needs of the society in which we live (Larraz, 2013 ; Brata et al., 2022 ). Therefore, we refer to a concept that has several aspects including the technological aspect, the informative and multimedia aspect, and the communicative aspect. It involves a complete process and multiple literacies (Gisbert and Esteve, 2011 ; Lázaro, 2015 ; Valverde et al., 2022 ). It requires mastery of certain competencies related to the identification of training needs, access to information in digital environments, the use of ICT tools to manage information, interpretation, and representation of information, and the evaluation of information and the transmission of information (Covello and Lei, 2010 ; Walsh et al., 2022 ).

Digital literacy in university students

In recent years, society has undergone enormous changes with the digitalization of many of its spheres at the information level, the communication level, the level of knowledge acquisition, the level of the establishment of social relations, and even the level of leisure. Thus, our habits and means of accessing, managing, and transforming information have also changed (European Union, 2013 ; Cantabrana and Cervera, 2015 ; Allen et al., 2020 ; López-Meneses et al., 2020 ).

These developments have also had a great impact on the educational field, in which we have to rethink firstly what kind of students we are training in terms of the skills they need in today's society, and secondly, whether we are training a profile of future teachers capable of training a student body that uses information and communication technologies as something inherent to their own personal and social development. In short, digital communication has changed practices related to literacy and has gained great relevance in the development of knowledge in the twenty-first century (Comisión Europea, 2012 , 2013 ; European Commission, 2012 ; OECD, 2012 ; Unión Europea, 2013 ; Instituto Nacional de Tecnologías Educativas y Formación del Profesorado, 2017 ; Gudmundsdottir and Hatlevik, 2018 ; Pérez and Nagata, 2019 ; Fernández-de-la-Iglesia et al., 2020 ).

The European Commission (2013) indicates that initial teacher training should integrate teachers' digital literacy, committing to the pedagogical use of digital tools and enabling teachers to use them in an effective, appropriate, and contextualized manner. This teaching competence should have a holistic, contextualized, performance-, function-, and development-oriented character. In short, it is about incorporating and adequately using ICT as a didactic resource (Cantabrana and Cervera, 2015; Castañeda et al., 2018; Tourón et al., 2018; Chow and Wong, 2020; Vodá et al., 2022).

In this sense, according to the work of Krumsvik (2009), the CDD (competencia digital docente, teachers' digital competence) is composed of four components: basic digital skills (Bawden, 2008), didactic competence with ICT (Koehler and Mishra, 2008; Gisbert and Esteve, 2011), learning strategies, and digital Bildung (digital formation).

At the Spanish level, the Common Framework of Digital Teaching Competence of the National Institute of Educational Technologies and Teacher Training (INTEF, 2017) standardizes it in five areas: information and information literacy, communication and collaboration, digital content creation, security, and problem solving (López-Meneses et al., 2020). Recently, these have been consolidated as competencies that must be acquired by any university student, along with the knowledge, skills, and attitude that make up a digitally competent citizen (Recio et al., 2020; Indah et al., 2022).

Digital literacy in future teachers

Several efforts have been made to equip future teachers with these competencies through different standards and frameworks to the level of learning acquired (Fraser et al., 2013 ; INTEF, 2017 ; UNESCO, 2018 ). However, how to work these competencies in initial training is still a hotly debated topic, in which special attention is paid to the promotion of experiences of a pedagogical and innovative nature to transform teaching practices, involving the integration of technologies in the classroom, as stated in the Horizon Report 2019 for the Higher Education (Educause, 2019 ; Le et al., 2022 ).

Universities are in a moment of transformation, from a teacher-focused teaching model to a model based on active learning through the use of digital technologies, giving rise to a new type of education in which the use of digital devices is intrinsic (Area, 2018 ; Aarsand, 2019 ). If digital resources and devices are an inescapable part of current and future teaching practice, digital competency training for future teachers becomes extremely relevant, given that teachers need to acquire these competencies in their initial training to integrate them into their practices as future teachers. That is, the digital competence (DC) acquired during their initial training significantly predicts the integration of technologies in future teaching practice (Nikou and Aavakare, 2021 ), which could range from basic digital literacy to the integration of technologies in their daily teaching practice (Gisbert et al., 2016 ; Alanoglu et al., 2022 ). Several studies have defined the different indicators that make up DC (Siddiq et al., 2017 ; González et al., 2018 ; Rodríguez-García et al., 2019 ; Cabero-Almenara and Palacios-Rodríguez, 2020 ).

This calls for a new paradigm, in which future teachers must be digitally literate, in terms of the application of active methodologies, digital competencies, and the use of innovative strategies, styles, and approaches (Garcia-Martin and Garcia-Sanchez, 2017 ; Gómez-García et al., 2021 ).

Currently, literacy workshops for future professionals are carried out only in a targeted, limited way, ranging from customized short training capsules to specific semester-long subjects in undergraduate or postgraduate studies. The training is focused on several specific aspects of digital literacy, but there is a lack of experience in imparting comprehensive digital training. In addition, there are only a few interactions with professional experts in such literacy (Ata and Yildirim, 2019; Campbell and Kapp, 2020; Domingo-Coscolla et al., 2020; Tomczyk et al., 2020; Vinokurova et al., 2021).

The present study

For the present study, we based our approach on quality and current education, in which DC was postulated as a key element for the development of students. The educational system was tasked with preparing them for their full development and participation in society (OECD, 2011). For this reason, digital literacy is understood as an essential requirement for development in the society in which we live, based on the promotion of strategies related to searching, obtaining, processing, and communicating information. All these aspects have been consolidated as the dimensions of literacy in the twenty-first century (Piscitelli, 2009; Martín and Tyner, 2012). It is, therefore, necessary to understand the reality of this subject and to investigate how these practices are being developed in the context of work. Secondly, it is equally necessary to implement new interventions and lines of research that respond to the urgent need for literacy required by today's society. We therefore posed the following research questions: What psychoeducational and learning variables are key in digital literacy? What is the current situation internationally regarding digital literacy in all disciplines in pre-service teacher education? What are the differences in digital literacy requirements pre- and post-pandemic?

The objective of this study is to analyze the empirical evidence provided by international studies from 2010 to 2021 related to the digital literacy of university students, including those who are pursuing careers related to the educational field.

Relevant differences will be observed in the contributions in empirical evidence from international studies pre- and post-pandemic, drawn from diverse cultural backgrounds (Spanish-Latin, Portuguese, Finnish, etc.), gender, and personal digital resources.

Materials and methods

The systematic review is composed of four phases, following the model of Miller et al. ( 2016 ) and Scott et al. ( 2018 ).

PHASE 1: Search terms: In this phase, we developed a schematic of search terms from Web of Science and Scopus databases. We also accessed the databases to locate specific studies that were referenced in the publications that we found in the databases during our initial search. The schematic of terms and thematic axes that were used as a starting point for scanning both databases for anything related to the descriptor “digital” and the descriptor “literacy” is presented in Figure 1 .

Figure 1. Diagram of search terms used in the systematic review.

PHASE 2: Selection process based on inclusion and exclusion criteria. The following selection criteria were applied: year of publication between 2010 and 2021, availability of full text, and language of publication in English, Portuguese, or Spanish. Once the first results were obtained, they were screened based on title, abstract, and the use of standardized instruments in their methodology. We rejected studies that used ad hoc instruments to measure digital competence.

In addition, the selection indicators provided by Cooper and Hedges ( 1994 ) and Cooper ( 2009 ) were used, such as peer-reviewed journals, referenced databases, and citation indexes.
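The selection step in Phase 2 is essentially a filter over bibliographic records. As a hedged sketch of how such screening criteria could be applied programmatically (the record fields and values below are hypothetical, not drawn from the study), the stated criteria map directly onto a predicate:

```python
# Hypothetical screening records; fields are illustrative only.
records = [
    {"title": "A", "year": 2015, "full_text": True, "language": "en", "instrument": "ILAS-ED"},
    {"title": "B", "year": 2008, "full_text": True, "language": "en", "instrument": "ILAS-ED"},
    {"title": "C", "year": 2019, "full_text": True, "language": "pt", "instrument": "ad hoc"},
]

def meets_criteria(rec):
    """Inclusion criteria as stated in Phase 2: publication year
    2010-2021, full text available, language English/Portuguese/Spanish,
    and a standardized (not ad hoc) measurement instrument."""
    return (2010 <= rec["year"] <= 2021
            and rec["full_text"]
            and rec["language"] in {"en", "pt", "es"}
            and rec["instrument"] != "ad hoc")

included = [r["title"] for r in records if meets_criteria(r)]
# included == ["A"]: B fails the year criterion, C used an ad hoc instrument.
```

In practice this screening is done manually over database exports, but encoding the criteria as an explicit predicate is one way to make the selection reproducible, which is the goal PRISMA-style reporting serves.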

PHASE 3: Analysis of methodological quality and indicators based on scientific evidence. Following Torgerson ( 2007 ) and Risko et al. ( 2008 ) and taking into consideration the MQQn (Risko et al., 2008 ), we used seven indicators to analyze the quality and effectiveness of the studies (Acosta and Garza, 2011 ). These were: alignment of theory, findings, reliability and validity, descriptive details of participants and the study, sample, and consistency of findings and conclusions with the data (Risko et al., 2008 ). Alternatively, evidence-based indicators were also used along with study effect sizes (Díaz and García, 2016 ; Canedo-García et al., 2017 ).

PHASE 4: Reliability and outcomes. Reliability was established for both the selection criteria and the coding criteria during each phase, to evidence the replicability of the results. In addition, the results entailed a qualitative analysis of the selected studies, the central arguments, and the evidence provided in a modulated way to address the research questions.

Therefore, the procedure to be followed was documented and charted according to the PRISMA statement (Moher et al., 2009 ; Page et al., 2021 ) (see Figure 2 ). Likewise, an analysis was undertaken of the key foci in the various studies to highlight the relevant findings and evidence they provided in this regard. The key focus of our work was: first, to analyze the documents related to the digital literacy of university students; second, to identify which variables affect digital literacy; and third, to undertake a comparative analysis between the different variables that were analyzed.

Figure 2. Flowchart of search results of empirical studies in databases, applying the criteria of Moher et al. (2009) and Page et al. (2021).

All the selected studies had as samples university students who were pursuing some type of degree or postgraduate degree related to education, and who were therefore studying to become future teachers. The studies presented an intervention design corresponding to a pre-intervention, the intervention itself, and a post-intervention, using techniques such as the activation of prior knowledge, instructions, emulation, and subsequent tests. We also found studies with an experimental design comparing control groups and experimental groups (Kajee and Balfour, 2011; Kuhn, 2017; Pequeño et al., 2017; Sharp, 2018; Lerdpornkulrat et al., 2019).

As for those responsible for the intervention, in practically all cases the teacher acted in this role, with one or two of them taking the lead. The presence of specialized personnel should also be highlighted, as in the work of Alfonzo and Batson (2014) and Elliott et al. (2018), in which a professional librarian also intervened, or in the work of Ball (2019), where the training at the center was carried out by a consultant who was not a teacher but a professional expert in the use of digital devices, trained for the occasion by the responsible brand (Apple).

If we examine the constructs or competencies covered by the works selected in our search, we find that all of them, in addition to dealing with digital literacy, also focus on self-efficacy perceived and developed through digital literacy.

The results of our study could be understood under different themes.

First, we found studies that referred to digital competence and other educational issues. Within them, a series of competencies were emphasized, such as digital writing and reading. Research developed from digital media (such as databases, the web, or applications aimed at the treatment of digital literacy) was noted as emerging pedagogies and educational innovation. The digital design of content and materials, the skills to edit, publish, or share them, and competencies related to mathematics and its digital literacy also formed part of digital literacy.

Second, we found studies related to digital competence and the use and employment of the Internet, social networks, web 2.0, and the treatment of digital risks and their relationship with digital literacy.

Third, we found works that, in addition to focusing on digital literacy, also focused on different psychological constructs such as motivation, commitment, attitudes, or satisfaction (Tables 1, 2).

Table 1. Summary of the results found.

Table 2. Summary of the interventions found.

Regarding instructional literature, we found a large number of results on mass training programs or courses in which digital literacy was the focus. Examples include stand-alone courses that students could sign up for, or modules taught within a subject. We also found investigations on interventions that had been carried out through different subjects in the study program from which the sample was taken. In this case, the samples were taken on an ad hoc basis from a specific student body that the researcher intentionally chose based on a previous intervention experience with them (Ata and Yildirim, 2019; Ball, 2019; Campbell and Kapp, 2020; Domingo-Coscolla et al., 2020; Tomczyk et al., 2020; Vinokurova et al., 2021).

In terms of material resources, all the studies used some type of documentation (digital or not) with instructions on the development of the activities, in which the students were provided with what to do and the steps to follow. In this case, the development scenario was both online and face-to-face, based on different activities given through workshops or seminars for their development.

It should also be noted that in those investigations in which the intervention required a specific application or program, that program was used, and the intervention even had a specific setting, since it was carried out in person in specialized laboratories where experts and specific materials were available for this purpose. As examples of these specific materials, our results included the use of Photo Story 3, Dashboard, and Wikipedia, as well as the EMODO program and the SELI platform (Kajee and Balfour, 2011; Robertson et al., 2012; Ball, 2019; Hamutoglu et al., 2019; Tomczyk et al., 2020).

Regardless of the setting and the program or application employed, we can classify the duration of these interventions into two broad groups: those that had a duration of <1 semester, and those that had an intervention whose duration ranged from one semester to one academic year.

Regarding the instruments used, most studies employed survey forms as an evaluation instrument, completed either by the researcher or by the students. These forms were usually used to collect information of a personal nature and about the participants' own experience throughout the intervention. In many of the results found, the form was administered digitally or virtually, abandoning the old paper forms (Kajee and Balfour, 2011; Robertson et al., 2012; Carl and Strydom, 2017; Elliott et al., 2018; Ball, 2019; Lerdpornkulrat et al., 2019; Campbell and Kapp, 2020).

Regarding the use of questionnaires, scales, or self-reports, we found several works that used participants' digital literacy histories as instruments. Through them, the researcher could learn first-hand about the participants' personal experience of digital literacy, the previous knowledge they possessed, the digital skills they had mastered, those they lacked, and those they considered they should improve. These histories also included the participants' views on the use of digital resources in teaching practice (Kajee and Balfour, 2011; Robertson et al., 2012; Pequeño et al., 2017; Elliott et al., 2018).

In the case of scales, we found two papers that employed a Likert scale developed ad hoc. We also found studies that employed standardized scales such as the Information Literacy Assessment Scale for Education (ILAS-ED), the Digital Literacy Scale, and the E-Learning Attitudes Scale.

Some of the studies we reviewed used semi-structured interviews as a means of monitoring and providing feedback to the students (see Table 3; Kajee and Balfour, 2011; Alfonzo and Batson, 2014; Gill et al., 2015; Carl and Strydom, 2017; Elliott et al., 2018; Elphick, 2018; Ata and Yildirim, 2019; Campbell and Kapp, 2020).

Assessment of the intervention in the reviewed studies.

As for the sequence through which the different interventions were developed, we found two types. The first divided the content over time, as in the work of Kajee and Balfour (2011), which covered digital writing in the first semester through online classes, self-instruction, and face-to-face classes in a dedicated laboratory, and then exposed students to different digital research techniques in the second semester, following the same methodology. The second type followed the same technique throughout the study, as in Robertson et al. (2012), who applied digital stories both as a tool for developing the activity and as the means of evaluating the competency. In the research carried out by Lerdpornkulrat et al. (2019), the teacher used a rubric: students were given an example piece of work and asked to practice evaluating and grading it, allowing them to check whether they understood how to use the rubric. They then used the rubric to self-assess their own work. After receiving feedback, both groups of students revised and resubmitted their completed projects.

In the investigation by Elliott et al. (2018), the intervention was structured in work modules with the following sequence of sessions: each module was introduced in the first session with opportunities for group discussion and questions; essential module reading was provided in weekly online study units; and module workshops integrated academic reading and writing activities, such as paraphrasing and referencing, with module content.

In the study by Ball (2019), students took modules on publishing history, culture, markets, and media in the first year. In the second year, the intervention focused on their publishing skills, reading for writing development, and grammar and general literacy.

Hamutoglu et al. (2019) organized their intervention by weeks: during the first week of the 14-week semester, the instructor oriented the students to the course and administered pre-tests. In the following week, students were given a session on the Edmodo platform and orientation training on the course content.

In the work of Gabriele et al. (2019), the experimental research plan (i.e., activities to be performed, methodology to be adopted) was established over 4 months, followed by the organization of the teaching material (PowerPoint presentations, introductory videos of the software, handouts, and ad hoc applications created as examples).

We also found interventions of very short duration that nevertheless provide daily detail of the contents. For example, Alfonzo and Batson (2014) dedicated 1 day to searching and orientation in digital resources, 1 day to APA standards, and 3 days to developing and using a specific application.

In the research by Istenic et al. ( 2016 ), the intervention was based on six different types of tasks related to a variety of mathematical problems, including problems with redundant data, problems with multiple solutions, problems with multiple paths to the solution, problems with no solution, mathematical problems in logic, and problems with insufficient information.

In some interventions, the sequence followed is simply the development of the degree-course subject within which they are implemented, as in the work of Gill et al. (2015).

In the work of Carl and Strydom (2017), students were first familiarized with the devices and then introduced to electronic portfolios; they were helped to create blogs serving as platforms for their electronic portfolios and guided on how to collect artifacts, reflect, and share content.

In other works, narrative was used as a technique: students presented their work, analyzed it in groups, reworked it, and presented it again to their classmates. Kuhn (2017), Pequeño et al. (2017), and Elphick (2018) followed this model.

Adopting a novel consultative approach, Botturi (2019) co-designed the intervention with his students in two steps: they were surveyed 4 weeks before the start of the course and asked to choose between two options, an overview of different topics/methods/experiences or an in-depth exploration of one or two of them. All respondents preferred the first option and indicated the topics they wished to cover (see Tables 4, 5).

Assessment instruments used in the instructional intervention in the reviewed studies.

Treatment fidelity.

Indicators and controls used in the instructional intervention in the empirical studies reviewed II.

The limitations of our search are listed in Table 6. At the theoretical level, we encountered studies that were not very current or that lacked research questions, hypotheses, or even objectives. At the statistical level, we found that several studies had small or unrepresentative samples.

Limitations of the instructional interventions described in the empirical studies reviewed.

Analyzing the interventions themselves, we identified a few limitations, especially in studies that did not specify the tasks, did not record the entire process, or lacked key information needed to replicate the intervention. In some studies, key information about the person carrying out the intervention was missing, particularly whether they had specific training for this purpose. Another limitation was that very few evaluation strategies were in place to evaluate the interventions (see Table 7).

Indicators and controls used in the instructional intervention in the empirical studies reviewed.

Similarly, gaps were found regarding ethical controls: in some studies, the main limitation was that ethical controls were non-existent or not specified (Robertson et al., 2012; Istenic et al., 2016; Kuhn, 2017; Elphick, 2018; Ata and Yildirim, 2019; Tomczyk et al., 2020).

Figure 3 shows the evolution over the years of the samples used in each of the studies from 2011 to 2020.


Evolution over years of the samples used in the studies from 2010 to 2021.

Figure 4 shows the evolution over the years of the controls used in each of the studies from 2011 to 2021.


Evolution over years of the controls used in studies from 2010 to 2021.

This work aimed to analyze the empirical evidence found in international studies published between 2011 and 2021 related to the digital literacy of university students, including those pursuing degrees in education. This objective has been met.

Regarding the first focus, literacy, this paper highlighted the fact that studies from the West are the most prevalent in this field (Çoklar et al., 2017; Ata and Yildirim, 2019; Hamutoglu et al., 2019; Sujarwo et al., 2022). These are mostly cross-sectional studies employing instruments such as the Digital Literacy Scale developed by Ng (2012) and the Information Literacy Self-Efficacy Scale (ILS) developed by Kurbanoglu et al. (2006). Regarding the level of mastery, the results showed an upper-intermediate level of competence in information and digital literacy and in communication and collaboration, but a lower-intermediate level in digital content creation, particularly in the creation and dissemination of multimedia content using different tools (López-Meneses et al., 2020; Moreno et al., 2020).

Regarding the second focus, digital literacy in university students, this study reviewed the contributions of other works and found a competent group in this field that makes efficient use of both the Internet and digital media (Çoklar et al., 2016; Ata and Yildirim, 2019; Lim and Newby, 2021). However, differences were found within this collective relating to gender: some studies reported that women were more competent than men in digital literacy, information literacy, technological literacy, and communicative literacy (Hamutoglu et al., 2019; López-Meneses et al., 2020; Navarro, 2020), while others revealed particular gender gaps in which men showed a higher propensity for DL but women outperformed men in the overall digital literacy test (Ata and Yildirim, 2019). Differences in DL were also found by field of study: university students in science- or mathematics-related majors had higher levels of digital literacy than students majoring in the social sciences or psychology (Ata and Yildirim, 2019; Chow and Wong, 2020).

As for the third focus, digital literacy in future teachers, we found a dual use of digital literacy: a social and leisure aspect (searching for or maintaining friendships through social networks, sharing digital content, downloading content, or playing online games) and an academic aspect (using search engines, working with online documents, organizing or synthesizing information from different processors, and using computer programs to make presentations, edit images or content, or create audiovisual content) (López-Meneses et al., 2020).

The main contribution of this review lies in its comparison of pre- and post-pandemic studies, which show a great increase in the use of technologies in the educational world (across the curriculum) and in research focused on measuring competence with these devices (Baber et al., 2022). These new investigations have not only followed the line of previous ones but have focused on the measurement of digital literacy and its relationship with variables such as degree of origin, gender, age, or being a digital native or immigrant (Castañeda-Peña et al., 2015; Çoklar et al., 2016; Castañeda et al., 2018; Ata and Yildirim, 2019; Gür et al., 2019; Hamutoglu et al., 2019; Lerdpornkulrat et al., 2019; González et al., 2020; Navarro, 2020; De Sixte et al., 2021). There has also been an expansion of the topics and variables studied in conjunction with digital literacy, among which we find, as a novelty, psycho-educational variables such as academic motivation (Chow and Wong, 2020), self-efficacy and motivation (Lerdpornkulrat et al., 2019), effort expectancy (Nikou and Aavakare, 2021), and self-concept as a student and as a teacher (Yeşilyurt et al., 2016). Other new trends include the importance attached to the educational field and the identification of different roles, behaviors, and types of use within the concept of digital literacy (López-Meneses et al., 2020; Moreno et al., 2020; Navarro, 2020; Lim and Newby, 2021).

Therefore, we can affirm that the research predictions of this study are fulfilled, in that the results show relevant differences between pre- and post-pandemic international studies and across cultural backgrounds (Spanish, Latin, Portuguese, Finnish...), gender, and personal digital resources. In terms of applications for educational practice, these results do not indicate that university students are fully competent in digital literacy, although they demonstrate some competencies such as online information search, information evaluation, information processing, and information communication and dissemination skills (Çoklar et al., 2016; Lerdpornkulrat et al., 2019). There is therefore a risk of training a student body with incomplete digital competence. For complete and comprehensive digital literacy among university students, especially future teachers, there is an urgent need to invest in digital literacy programs. This will ensure that students' comprehensive digital competence extends to the use of the Internet and digital devices in their teaching tasks (Gisbert et al., 2016) and guarantee its integration into teaching practice (Aslan and Zhu, 2016; Nikou and Aavakare, 2021).

As for the limitations of this work, they are closely related to the seven indicators for analyzing study quality and effectiveness (Acosta and Garza, 2011): alignment of theory; findings; reliability and validity; descriptive details of participants and of the study; sample; and consistency of findings and conclusions with the data (Risko et al., 2008), along with evidence-based indicators and the effect sizes of studies (Díaz and García, 2016; Canedo-García et al., 2017). Future lines of research should take these limitations into account and seek to overcome them.

The number of studies found in this systematic review is comparable to, and even higher than, what is usual in this type of study. For example, the exemplary systematic review by Scott et al. (2018) identified only 29 studies that met the quality criteria after reviewing 50 years of studies published in the US, and of these, only four were quantitative. Borgi et al. (2020) found only ten studies that fit the criteria in a very good analysis. Other systematic reviews in the same journal and section of Frontiers in Psychology go along the same lines: Dickson and Schubert (2020) and Liu et al. (2022) found only six studies in reviews of great interest; Nguyen et al. (2021) identified 18 eligible articles; Shou et al. (2022) included 12 studies; Tarchi et al. (2021) followed the same pattern; Huang (2022) found seven studies for quantitative analysis and eight for indirect evidence; and Coxen et al. (2021) included 21 articles in the focal analyses of their systematic review. Representativeness is defined not by the number of studies but by their existence: in a systematic review, all studies that fit the indicated criteria are reviewed, that is, the population of published studies. With these studies, it was possible to analyze objective indicators in a general comparison between studies; assess the instruments used; examine the characteristics of the interventions, such as strategies, instructional procedures, and the psychological variables considered; compare the fidelity controls of the treatments, which guarantee their rigor and their application in the terms prescribed by the empirical validation of the interventions; and review the limitations of the studies and their contributions by year. These contributions were based on objective data from the studies and are represented in the tables and figures.
In addition, a qualitative analysis is provided that highlights the value of intervention studies in relation to digital competence and the key psychological variables that have been used. It is true that only studies published since 2010 were used, and there may have been earlier studies; however, considering the evolution of this type of focus in relation to digital competence and the psychological variables involved, it makes most sense to consider recent years, when its need and use have become generalized across the population.

Conclusions

In general, the results show that university students are digitally literate and make efficient use of both the Internet and digital media. We found an intermediate or higher level in skills related to communication and collaboration, such as through different chat rooms, platforms, and communication applications, but an intermediate-low level in digital content creation, especially the creation and dissemination of multimedia content. This should therefore be one of the competencies to strengthen in this group in the future, although there are differences according to gender, age, and degree of origin.

We have to invest in comprehensive digital literacy programs for teachers in initial training, something that currently appears only implicitly in the training plans of their official studies. Digital literacy needs to be part of the official curriculum, developed not as a separate subject but in an interdisciplinary manner throughout their training. In this way, future teachers become digitally literate people capable of creating and generating digital content and possessing the necessary competencies and skills to use and share such content.

We must also invest in assessing teachers' self-perception: only by knowing their opinions, skills, and shortcomings can digital training programs be designed. Digital literacy is a predictor of the good use and employment of digital devices and the Internet when these teachers enter the classroom in the future.

The findings of this study compel us to consider the following: first, we need to rethink the form and manner in which future teachers are trained in digital literacy, asking whether we are doing it in the best way or whether there are gaps that should be addressed. Second, we should take into account the results found and their consequences in order to formulate effective intervention designs and strategies to train pre-service teachers in digital literacy effectively.

Data availability statement

Author contributions

J-NS-G, NG-Á, IM-R, JG-M, and SB-C: conceptualization, methodology, software, writing—review and editing, visualization, supervision, and validation. NG-A: formal analysis, investigation, and resources: UAL, ULE, USAL, IPC, data curation, writing—original draft preparation, and funding acquisition. J-NS-G and NG-A: project administration. All authors contributed to the article and approved the submitted version.

The general operating funds of the universities have been used: Universidad de León (Spain), Universidad de Almería (Spain), Universidad de Salamanca (Spain), Instituto Politécnico de Coimbra, and NICSH (Portugal).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

* Asterisks mark the focal references, following APA standards.

  • Aarsand P. (2019). Categorization activities in Norwegian preschools: digital tools in identifying, articulating, and assessing. Front. Psychol. 10, 973. 10.3389/fpsyg.2019.00973
  • Acosta S., Garza T. (2011). The Podcasting Playbook: A Typology of Evidence-Based Podagogy for PreK-12 Classrooms with English Language Learners.
  • Alanoglu M., Aslan S., Karabatak S. (2022). Do teachers' educational philosophies affect their digital literacy? The mediating effect of resistance to change. Educ. Inf. Technol. 27, 3447–3466. 10.1007/s10639-021-10753-3
  • *Alfonzo P. M., Batson J. (2014). Utilizing a co-teaching model to enhance digital literacy instruction for doctoral students. Int. J. Doc. Stud. 9, 61–71. 10.28945/1973
  • Allen J., Belfi B., Borghans L. (2020). Is there a rise in the importance of socioemotional skills in the labor market? Evidence from a trend study among college graduates. Front. Psychol. 11, 1710. 10.3389/fpsyg.2020.01710
  • Area M. (2018). Hacia la universidad digital: ¿dónde estamos y a dónde vamos? [Towards the digital university: where are we and where are we going?]. Rev. Iberoam. Educ. Dist. 21, 25–30. 10.5944/ried.21.2.21801
  • Aslan A., Zhu C. (2016). Investigating variables predicting Turkish pre-service teachers' integration of ICT into teaching practices. Br. J. Educ. Technol. 48, 552–570. 10.1111/bjet.12437
  • *Ata R., Yildirim K. (2019). Exploring Turkish pre-service teachers' perceptions and views of digital literacy. Educ. Sci. 9, 40–56. 10.3390/educsci9010040
  • Baber H., Fanea-Ivanovici M., Lee Y.-T., Tinmaz H. (2022). A bibliometric analysis of digital literacy research and emerging themes pre-during COVID-19 pandemic. Inform. Learn. Sci. 123, 214–232. 10.1108/ILS-10-2021-0090
  • *Ball C. (2019). WikiLiteracy: enhancing students' digital literacy with Wikipedia. J. Inform. Liter. 13, 253–271. 10.11645/13.2.2669
  • Bawden D. (2008). Revisión de los conceptos de alfabetización informacional y alfabetización digital: traducción [Review of the concepts of information literacy and digital literacy: translation]. Anal. Document. 5, 361–408. Available online at: https://revistas.um.es/analesdoc/article/view/2261
  • Borgi M., Collacchi B., Giuliani A., Cirulli F. (2020). Dog visiting programs for managing depressive symptoms in older adults: a meta-analysis. Gerontologist 60, e66–e75. 10.1093/geront/gny149
  • *Botturi L. (2019). Digital and media literacy in pre-service teacher education: a case study from Switzerland. Nordic J. Dig. Liter. 14, 147–163. 10.18261/issn.1891-943x-2019-03-04-05
  • Brata W., Padang R., Suriani C., Prasetya E., Pratiwi N. (2022). Student's digital literacy based on students' interest in digital technology, internet costs, gender, and learning outcomes. Int. J. Emerg. Technol. Learn. 17, 138–151. 10.3991/ijet.v17i03.27151
  • Cabero-Almenara J., Palacios-Rodríguez A. (2020). Marco europeo de competencia digital docente "DigCompEdu". Traducción y adaptación del cuestionario "DigCompEdu Check-In" [European framework for the digital competence of educators "DigCompEdu": translation and adaptation of the "DigCompEdu Check-In" questionnaire]. Edmetic 9, 213–234. 10.21071/edmetic.v9i1.12462
  • *Campbell E., Kapp R. (2020). Developing an integrated, situated model for digital literacy in pre-service teacher education. J. Educ. 79, 18–30. 10.17159/2520-9868/i79a02
  • Canedo-García A., García-Sánchez J. N., Pacheco-Sanz D. I. (2017). A systematic review of the effectiveness of intergenerational programs. Front. Psychol. 8, 1882. 10.3389/fpsyg.2017.01882
  • Cantabrana J. L. L., Cervera M. G. (2015). El desarrollo de la competencia digital docente a partir de una experiencia piloto de formación en alternancia en el Grado de Educación [The development of teachers' digital competence through a pilot experience of dual training in the Education degree]. Educar 51, 321–348. 10.5565/rev/educar.725
  • *Carl A., Strydom S. (2017). e-Portfolio as reflection tool during teaching practice: the interplay between contextual and dispositional variables. S. Afr. J. Educ. 37, 1–10. 10.15700/saje.v37n1a1250
  • Castañeda L., Esteve F., Adell J. (2018). ¿Por qué es necesario repensar la competencia docente para el mundo digital? [Why is it necessary to rethink teacher competence for the digital world?]. Rev. Educ. Distan. 56, 2–20. 10.6018/red/56/6
  • Castañeda-Peña H., Barbosa-Chacón J. W., Marciales G., Barreto I. (2015). Profiling information literacy in higher education: traces of a local longitudinal study. Univer. Psychol. 14, 445–458. 10.11144/Javeriana.upsy14-2.pilh
  • Chow S. K. Y., Wong J. L. K. (2020). Supporting academic self-efficacy, academic motivation, and information literacy for students in tertiary institutions. Educ. Sci. 10, 361. 10.3390/educsci10120361
  • Çoklar A., Efilti E., Şahin Y., Akçay A. (2016). Determining the reasons of technostress experienced by teachers: a qualitative study. Turkish Online J. Qual. Inq. 7, 71–96. 10.17569/tojqi.96082
  • Çoklar A. N., Yaman N. D., Yurdakul I. K. (2017). Information literacy and digital nativity as determinants of online information search strategies. Comput. Hum. Behav. 70, 1–9. 10.1016/j.chb.2016.12.050
  • Comisión Europea (2012). Informe Conjunto de 2012 del Consejo y de la Comisión sobre la Aplicación del Marco Estratégico para la Cooperación Europea en el Ámbito de la Educación y la Formación (ET 2020) [2012 Joint Report of the Council and the Commission on the implementation of the strategic framework for European cooperation in education and training (ET 2020)]. Available online at: https://eur-lex.europa.eu/legal-content/ES/TXT/PDF/?uri=CELEX:52012XG0308(01)andfrom=EN
  • Comisión Europea (2013). Monitor Education and Training 2013. Available online at: https://op.europa.eu/es/publication-detail/-/publication/25626e01-1bb8-403c-95da-718c3cfcdf19/language-en
  • Cooper H. (2009). Research Synthesis and Meta-Analysis: A Step-By-Step Approach. London: Sage.
  • Cooper H., Hedges L. V. (1994). The Handbook of Research Synthesis. New York: Russell Sage.
  • Covello S., Lei J. (2010). A Review of Digital Literacy Assessment Instruments. IDE-712 Front-End Analysis Research. Syracuse: Analysis for Human Performance Technology Decisions.
  • Coxen L., van der Vaart L., Van den Broeck A., Rothmann S. (2021). Basic psychological needs in the work context: a systematic literature review of diary studies. Front. Psychol. 12, 698526. 10.3389/fpsyg.2021.698526
  • De Sixte R., Fajardo I., Mañá A., Jáñez Á., Ramos M., García-Serrano M., et al. (2021). Beyond the educational context: relevance of intrinsic reading motivation during COVID-19 confinement in Spain. Front. Psychol. 12, 703251. 10.3389/fpsyg.2021.703251
  • Díaz C., García J. N. (2016). Identification of relevant elements for promoting efficient interventions in older people. J. Psychodidact. 21, 157–173. 10.1387/RevPsicodidact.13854
  • Dickson G. T., Schubert E. (2020). Music on prescription to aid sleep quality: a literature review. Front. Psychol. 11, 1695. 10.3389/fpsyg.2020.01695
  • *Domingo-Coscolla M., Bosco A., Carrasco Segovia S., Sánchez Valero J. A. (2020). Fomentando la competencia digital docente en la universidad: percepción de estudiantes y docentes [Fostering teachers' digital competence at university: the perception of students and teachers]. Rev. Invest. Educ. 38, 167–782. 10.6018/rie.340551
  • Educause (2019). Horizon Report 2019—Higher Education Edition. Boulder: Educause.
  • *Elliott S., Hendry H., Ayres C., Blackman K., Browning F., Colebrook D., et al. (2018). 'On the outside I'm smiling but inside I'm crying.' Communication successes and challenges for undergraduate academic writing. J. Furth. High. Educ. 45, 758–770. 10.1080/0309877X.2018.1455077
  • *Elphick M. (2018). The impact of embedded iPad use on student perceptions of their digital capabilities. Educ. Sci. 8, 102–125. 10.3390/educsci8030102
  • Eshet-Alkalai Y. (2012). Thinking in the digital era: a revised model for digital literacy. Iss. Inform. Sci. Inform. Technol. 9, 267–276. 10.28945/1621
  • European Commission (2012). The European Commission and Bureaucratic Autonomy: Europe's Custodians. Cambridge: Cambridge University Press.
  • European Commission (2013). The European Commission of the Twenty-First Century. Oxford: Oxford University Press.
  • European Commission (Ed.) (2018). Proposal for a Council Recommendation on Key Competences for Lifelong Learning. New York: European Commission.
  • European Union (2013). Digital protest skills and online activism against copyright reform in France and the European Union. Policy Int. 5, 27–55.
  • Fernández-de-la-Iglesia J. C., Fernández-Morante M. C., Cebreiro B., Soto-Carballo J., Martínez-Santos A. E., Casal-Otero L. (2020). Competencias y actitudes para el uso de las TIC de los estudiantes del grado de maestro de Galicia [Competences and attitudes for ICT use among teaching-degree students in Galicia]. Publicaciones 50, 103–120. 10.30827/publicaciones.v50i1.11526
  • Fraser J., Atkins L., Richard H. (2013). Digilit Leicester. Supporting Teachers, Promoting Digital Literacy, Transforming Learning. Leicester: Leicester City Council.
  • *Gabriele L., Bertacchini F., Tavernise A., Vaca-Cárdenas L., Pantano P., Bilotta E. (2019). Lesson planning by computational thinking skills in Italian pre-service teachers. Inform. Educ. 18, 69–104. 10.15388/infedu.2019.04
  • Garcia-Martin J., Garcia-Sanchez J. N. (2017). Pre-service teachers' perceptions of the competence dimensions of digital literacy and of psychological and educational measures. Comp. Educ. 107, 54–67. 10.1016/j.compedu.2016.12.010
  • *Gill L., Dalgarno B., Carlson L. (2015). How does pre-service teacher preparedness to use ICTs for learning and teaching develop through their degree program? Austral. J. Teach. Educ. 40, 36–59. Available online at: https://www.scopus.com/record/display.uri?eid=2-s2.0-84870756234andorigin=inwardandtxGid=927e8280a577067fddaa8df02fa0b665andfeatureToggles=FEATURE_NEW_DOC_DETAILS_EXPORT:1
  • Gilster P. (1997). Digital Literacy. New York, NY: Wiley Computer Pub.
  • Gisbert M., Esteve F. (2011). Digital learners: la competencia digital de los estudiantes universitarios [Digital learners: the digital competence of university students]. La Cuestión Univer. 7, 48–59.
  • Gisbert M., Esteve F., Lázaro J. (2016). La competencia digital de los futuros docentes: "cómo se ven los actuales estudiantes de educación" [The digital competence of future teachers: how current education students see themselves]. Perspect. Educ. 55, 34–52. 10.4151/07189729-Vol.55-Iss.2-Art.412
  • Gómez-García G., Hinojo-Lucena F. J., Fernández-Martín F. D., Romero-Rodríguez J. M. (2021). Educational challenges of higher education: validation of the information competence scale for future teachers (ICS-FT). Educ. Sci. 12, 14. 10.3390/educsci12010014
  • González M. J. M., Rivoir A., Lázaro-Cantabrana J. L., Gisbert-Cervera M. (2020). ¿Cuánto importa la competencia digital docente? Análisis de los programas de formación inicial docente en Uruguay [How much does teachers' digital competence matter? Analysis of initial teacher training programs in Uruguay]. Int. J. Technol. Educ. Innov. 6, 128–140. 10.24310/innoeduca.2020.v6i2.5601
  • González V., Román M., Prendes M. P. (2018). Formación en competencias digitales para estudiantes universitarios basada en el modelo DigComp [Digital competence training for university students based on the DigComp model]. Rev. Electró. Tecnol. Educ. 65, 1–15.
  • Gudmundsdottir G. B., Hatlevik O. E. (2018). Newly qualified teachers' professional digital competence: implications for teacher education. Euro. J. Teach. Educ. 41, 214–231. 10.1080/02619768.2017.1416085
  • Gür D., Canan Ö., Hamutoglu N. B., Kaya G., Demirtas T. (2019). The relationship between lifelong learning trends, digital literacy levels and usage of web 2.0 tools with social entrepreneurship characteristics. Croat. J. Educ. 21, 45–76. Available online at: https://hrcak.srce.hr/220701
  • *Hamutoglu N. B., Savaşçi M., Sezen-Gültekin G. (2019). Digital literacy skills and attitudes towards e-learning. J. Educ. Fut. 16, 93–107. 10.30786/jef.509293
  • Huang C. (2022). Self-regulation of learning and EFL learners' hope and joy: a review of literature. Front. Psychol. 13, 833279. 10.3389/fpsyg.2022.833279
  • Indah R. N., Budhiningrum A. S., Afifi N. (2022). The research competence, critical thinking skills and digital literacy of Indonesian EFL students. J. Lang. Teach. Res. 13, 315–324. 10.17507/jltr.1302.11
  • Instituto Nacional de Tecnologías Educativas y Formación del Profesorado (2017). Marco Común de Competencia Digital Docente Octubre 2017 [Common Framework for Teachers' Digital Competence, October 2017]. Available online at: https://aprende.intef.es/sites/default/files/2018-05/2017_1020_Marco-Com%C3%BAn-de-Competencia-Digital-Docente.pdf
  • INTEF (Ed.) (2017). Marco Común de Competencia Digital Docente [Common Framework for Teachers' Digital Competence]. Available online at: https://aprende.intef.es/sites/default/files/2018-05/2017_1020_Marco-Com%C3%BAn-de-Competencia-Digital-Docente.pdf
  • *Istenic A., Cotic M., Solomonides I., Volk M. (2016). Engaging preservice primary and preprimary school teachers in digital storytelling for the teaching and learning of mathematics. Br. J. Educ. Technol. 47, 29–50. 10.1111/bjet.12253
  • *Kajee L., Balfour R. (2011). Students' access to digital literacy at a South African university: privilege and marginalisation. S. Afr. Linguist. Appl. Lang. Stud. 29, 187–196. 10.2989/16073614.2011.633365
  • Koehler M. J., Mishra P. (2008). "Introducing TPCK," in The Handbook of Technological Pedagogical Content Knowledge (TPCK) for Educators, ed AACTE Committee on Innovation and Technology (Mahwah, NJ: Lawrence Erlbaum Associates), 3–29.
  • Krumsvik R. J. (2009). Situated learning in the network society and the digitised school . Euro. J. Teach. Educ. 32 , 167–185. 10.1080/02619760802457224 [ CrossRef ] [ Google Scholar ]
  • * . Kuhn C. (2017). Are students ready to (re)-design their personal learning environment? The case of the e-dynamic space . J. N. Approach. Educ. Res. 6 , 11–19. 10.7821/naer.2017.1.185 [ CrossRef ] [ Google Scholar ]
  • Kurbanoglu S. S., Akkoyunlu B., Umay A. (2006). Developing the information literacy scale . J. Doc. 62 , 730–743. 10.1108/00220410610714949 [ CrossRef ] [ Google Scholar ]
  • Larraz E. F. (2013). Competencia digital en la educación superior: instrumentos de evaluación y nuevos entornos. Enl@ ce: Revista Venezolana de Información. Tecnología y Conocimiento . 10 , 29–43. [ Google Scholar ]
  • Lázaro J. (2015). La Competència Digital Docent Com a Eina per Garantir la Qualitat en l'ús de les TIC en un Centre Escolar (Tesis doctoral: ), Tarragona: Universitat Rovira i Virgili. [ Google Scholar ]
  • Le B., Lawrie G. A., Wang J. T. (2022). Student self-perception on digital literacy in STEM blended learning environments . J. Sci. Educ. Technol. 31 , 303–321. 10.1007/s10956-022-09956-1 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • * . Lerdpornkulrat T., Poondej C., Koul R., Khiawrod G., Prasertsirikul P. (2019). The positive effect of intrinsic feedback on motivational engagement and self-efficacy in information literacy . J. Psychoeduc. Assess. 37 , 421–434. 10.1177/0734282917747423 [ CrossRef ] [ Google Scholar ]
  • Lim J., Newby T. J. (2021). Preservice teachers' attitudes toward Web 2.0 personal learning environments (PLEs): considering the impact of self-regulation and digital literacy . Educ. Inform. Technol. 26 , 3699–3720. 10.1007/s10639-021-10432-3 [ CrossRef ] [ Google Scholar ]
  • Liu W., Xu Y., Xu T., Ye Z., Yang J., Wang Y. (2022). Integration of neuroscience and entrepreneurship: a systematic review and bibliometric analysis . Front. Psychol . 13, 810550. 10.3389/fpsyg.2022.810550 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • López-Meneses E., Sirignano F. M., Vázquez-Cano E., Ramírez-Hurtado J. M. (2020). University students' digital competence in three areas of the DigCom 2.1 model: a comparative study at three European universities . Austral. J. Educ. Technol. 36 , 69–88. 10.14742/ajet.5583 [ CrossRef ] [ Google Scholar ]
  • Martín A., Tyner K. (2012). Media education, media literacy and digital competence . Co-municar 19 , 31–39. 10.3916/C38-2012-02-03 [ CrossRef ] [ Google Scholar ]
  • Miller D. M., Scott C. E., McTigue E. M. (2016). Writing in the secondary-level disciplines: a systematic review of context, cognition and content . Educ. Psychol. Rev. 30 , 83–120. 10.1007/s10648-016-9393-z [ CrossRef ] [ Google Scholar ]
  • Moher D. Liberati A. Tetzlaff J. Altman D.G. The PRISMA Group . (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement . PLoS Med . 6, e1000097. 10.1371/journal.pmed.1000097 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Moreno A. J., Fernández M. A., Godino A. L. (2020). Información y alfabetización digital docente: influencia de la rama formativa . J. Educ. Teach. Train. 10 , 140–150. Available online at: https://digibug.ugr.es/handle/10481/59591 [ Google Scholar ]
  • Navarro J. A. M. (2020). La competencia digital de los estudiantes universitarios latinoamericanos . Int. J. Educ. Res. Innov. 14 , 276–289. 10.46661/ijeri.4387 [ CrossRef ] [ Google Scholar ]
  • Ng W. (2012). Can we teach digital natives digital literacy? Comput. Educ. 59 , 1065–1078. 10.1016/j.compedu.2012.04.016 [ CrossRef ] [ Google Scholar ]
  • Nguyen H. T. T., Hoang A. P., Do L. T. K., Schiffer S., Nguyen H. T. H. (2021). The rate and risk factors of postpartum depression in vietnam from 2010 to 2020: a literature review . Front. Psychol . 12, 731306. 10.3389/fpsyg.2021.731306 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nikou S., Aavakare M. (2021). An assessment of the interplay between literacy and digital technology in higher education . Educ. Inform. Technol. 26 , 3893–3915. 10.1007/s10639-021-10451-0 [ CrossRef ] [ Google Scholar ]
  • OECD (2011). Informe Habilidades y Competencias Del Siglo xxi Para Los Aprendices Del Nuevo Milenio En los Paí ses de la . OECD . Available online at: http://recursostic.educacion.es/blogs/europa/media/blogs/europa/informes/Habilidades_y_competencias_siglo21_OCDE.pdf
  • OECD (2012). Education at a Glance 2012. OECD Indicators. Panorama de la educación. Indicadores de la OCDE 2014. Informe español . Available online at: https://www.oecd-ilibrary.org/education/education-at-a-glance-2012_eag-2012-en
  • Page M. J., McKenzie J. E., Bossuyt P. M., Boutron I., Hoffmann T. C., Mulrow C. D., et al.. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews . BMJ 372 , n71. 10.1136/bmj.n71 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • * . Paige K., Bentley B., Dobson S. (2016). Slowmation: an innovative twenty-first century teaching and learning tool for science and mathematics pre-service teachers . Austral. J. Teach. Educ. 2 , 1–15. 10.14221/ajte.2016v41n2.1 [ CrossRef ] [ Google Scholar ]
  • * . Pequeño J. M. G., Rodríguez E. F., de la Iglesia Atienza L. (2017). Narratives transmèdia amb joves universitaris. Una etnografia digital en la societat hiperconnectada . Anàlisi 57 , 81–95. Available online at: https://raco.cat/index.php/Analisi/article/view/330248 [ Google Scholar ]
  • Pérez T. A., Nagata J. J. (2019). The digital culture of students of pedagogy specialising in the humanities in Santiago de Chile . Comput. Educ. 133 , 1–12. 10.1016/j.compedu.2019.01.002 [ CrossRef ] [ Google Scholar ]
  • Piscitelli A. (2009). Nativos Digitales . Mexico: Santillana. 10.26439/contratexto2008.n016.782 [ CrossRef ] [ Google Scholar ]
  • Recio F., Silva J., Marchant N. A. (2020). Análisis de la competencia digital en la formación inicial de estudiantes universitarios: un estudio de meta-análisis en la web of science . Rev. Med. Educ. 59 , 125–146. 10.12795/pixelbit.77759 [ CrossRef ] [ Google Scholar ]
  • Risko V. J., Roller C. M., Cummins C., Bean R. M., Block C. C., Anders P. L., et al.. (2008). A critical analysis of research on reading teacher education . Read. Res. Q. 43 , 252–288. 10.1598/RRQ.43.3.3 [ CrossRef ] [ Google Scholar ]
  • * . Robertson L., Hughes J., Smith S. (2012). “Thanks for the assignment!”: digital stories as a form of reflective practice . Lang. Liter. 14 , 78–90. 10.20360/G2S88D [ CrossRef ] [ Google Scholar ]
  • Rodríguez-García A. M., Raso F., Ruiz-Palmero J. (2019). Competencia digital, educación superior y formación del profesorado: un estudio de meta-análisis en la web of Science . Pixel Bit 54 , 65–81. 10.12795/pixelbit.2019.i54.04 [ CrossRef ] [ Google Scholar ]
  • Scott C. E., McTigue E. M., Miller D. M., Washburn E. K. (2018). The what, when, and how of preservice teachers and literacy across the disciplines: a systematic literature review of nearly 50 years of research . Teach. Teach. Educ. 73 , 1–13. 10.1016/j.tate.2018.03.010 [ CrossRef ] [ Google Scholar ]
  • * . Sharp L. A. (2018). Collaborative digital literacy practices among adult learners: levels of confidence and perceptions of importance . Int. J. Instruct. 11 , 153–166. 10.12973/iji.2018.11111a [ CrossRef ] [ Google Scholar ]
  • Shou S., Li Y., Fan G., Zhang Q., Yan Y., Lv T., et al.. (2022). The efficacy of cognitive behavioral therapy for tic disorder: a meta-analysis and a literature review . Front. Psychol . 13, 851250. 10.3389/fpsyg.2022.851250 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Siddiq F., Gochyyev P., Wilson M. (2017). Learning in digital networks – ICT literacy: a novel assessment of students' 21st century skills . Comput. Educ. 109 , 11–37. 10.1016/j.compedu.2017.01.014 [ CrossRef ] [ Google Scholar ]
  • Sujarwo S., Tristanti T., Kusumawardani E. (2022). Digital literacy model to empower women using community-based education approach . World J. Educ. Technol. Curr. Issues 14 , 175–188. 10.18844/wjet.v14i1.6714 [ CrossRef ] [ Google Scholar ]
  • Tarchi C., Ruffini C., Pecini C. (2021). The contribution of executive functions when reading multiple texts: a systematic literature review . Front. Psychol . 12:, 716463. 10.3389/fpsyg.2021.716463 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • * . Tomczyk Ł., Potyrała K., Włoch A., Wnek-Gozdek J., Demeshkant N. (2020). Evaluation of the functionality of a new e-learning platform vs. previous experiences in e-learning and the self-assessment of own digital literacy . Sustainability 12 , 10219. 10.3390/su122310219 [ CrossRef ] [ Google Scholar ]
  • Torgerson K. (2007). A genetic study of the acute anxious response to carbon dioxide stimulation in man . J. Psychiatric Res . 41 , 906–917. 10.1016/j.jpsychires.2006.12.002 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tourón J., Martín D., Navarro E., Pradas S., Íñigo V. (2018). Validación de constructo de un instrumento para medir la competencia digital docente de los profesores (CDD) . Rev. Española Pedag. 76 , 25–54. 10.22550/REP76-1-2018-02 [ CrossRef ] [ Google Scholar ]
  • UNESCO (2018). ICT Competency Framework for Teachers. UNESCO . Available online at: https://en.unesco.org/themes/ict-education/competency-framework-teachers
  • Unión Europea (2013). Comprender las Polí ticas de la Unión Europea: Una Nueva Revolución Industrial . Unión Europea . Available online at: https://www.europarl.europa.eu/factsheets/es/sheet/61/los-principios-generales-de-la-politica-industrial-de-la-union
  • Valverde J., González A., Acevedo J. (2022). Desinformación y multialfabetización: Una revisión sistemática de la literatura . Comunicar 7 , 97–110. 10.3916/C70-2022-08 [ CrossRef ] [ Google Scholar ]
  • * . Vinokurova N., Mazurenko O., Prikhodchenko T., Ulanova S. (2021). Digital transformation of educational content in the pedagogical higher educational institution . Rev. Invest. Apuntes Univer. 11 , 1–20. 10.17162/au.v11i3.713 [ CrossRef ] [ Google Scholar ]
  • Vodá A. I., Cautisanu C., Grádinaru C., Tánásescu C., de Moraes G. H. S. M. (2022). Exploring digital literacy skills in social sciences and humanities students . Sustainability 14 , 2483. 10.3390/su14052483 [ CrossRef ] [ Google Scholar ]
  • Walsh K., Pink E., Ayling N., Sondergeld A., Dallaston E., Tournas P., et al.. (2022). Best practice framework for online safety education: results from a rapid review of the international literature, expert review, and stakeholder consultation . Int. J. Child Comput. Interact. 33 , 100474. 10.1016/j.ijcci.2022.100474 [ CrossRef ] [ Google Scholar ]
  • Yeşilyurt E., Ulaş A. H., Akan D. (2016). Teacher self-efficacy, academic self-efficacy, and computer self-efficacy as predictors of attitude toward applying computer-supported education . Comput. Hum. Behav. 64 , 591–601. 10.1016/j.chb.2016.07.038 [ CrossRef ] [ Google Scholar ]
  • Zurkowski P.G. (1974). The information service environment relationships and priorities . Relat. Pap. 5, 27. [ Google Scholar ]

A systematic literature review of empirical research on ChatGPT in education

  • Open access
  • Published: 26 May 2024
  • Volume 3, article number 60 (2024)


  • Yazid Albadarin (ORCID: orcid.org/0009-0005-8068-8902),
  • Mohammed Saqr,
  • Nicolas Pope &
  • Markku Tukiainen


Over the last four decades, studies have investigated the incorporation of Artificial Intelligence (AI) into education. A recent prominent AI-powered technology that has impacted the education sector is ChatGPT. This article provides a systematic review of 14 empirical studies incorporating ChatGPT into various educational settings, published in 2022 and before the 10th of April 2023—the date of conducting the search process. It carefully followed the essential steps outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines, as well as Okoli’s (Okoli in Commun Assoc Inf Syst, 2015) steps for conducting a rigorous and transparent systematic review. In this review, we aimed to explore how students and teachers have utilized ChatGPT in various educational settings, as well as the primary findings of those studies. By employing Creswell’s (Creswell in Educational research: planning, conducting, and evaluating quantitative and qualitative research [Ebook], Pearson Education, London, 2015) coding techniques for data extraction and interpretation, we sought to gain insight into their initial attempts at ChatGPT incorporation into education. This approach also enabled us to extract insights and considerations that can facilitate its effective and responsible use in future educational contexts. The results of this review show that learners have utilized ChatGPT as a virtual intelligent assistant, where it offered instant feedback, on-demand answers, and explanations of complex topics. Additionally, learners have used it to enhance their writing and language skills by generating ideas, composing essays, summarizing, translating, paraphrasing texts, or checking grammar. Moreover, learners turned to it as an aiding tool to facilitate their directed and personalized learning by assisting in understanding concepts and homework, providing structured learning plans, and clarifying assignments and tasks. 
However, the results of a few studies (n = 3, 21.4%) show that overuse of ChatGPT may negatively impact learners' innovative capacities and collaborative learning competencies. Educators, on the other hand, have utilized ChatGPT to create lesson plans, generate quizzes, and provide additional resources, which helped them enhance their productivity and efficiency and promote different teaching methodologies. Despite these benefits, the majority of the reviewed studies stress the importance of structured training, support, and clear guidelines for both learners and educators to mitigate the drawbacks. This includes developing critical evaluation skills to assess the accuracy and relevance of information provided by ChatGPT, as well as strategies for integrating human interaction and collaboration into learning activities that involve AI tools. They also recommend ongoing research and proactive dialogue with policymakers, stakeholders, and educational practitioners to refine and enhance the use of AI in learning environments. This review could serve as an insightful resource for practitioners who seek to integrate ChatGPT into education and stimulate further research in the field.


1 Introduction

Educational technology, a rapidly evolving field, plays a crucial role in reshaping the landscape of teaching and learning [ 82 ]. One of the most transformative technological innovations of our era to influence the field of education is Artificial Intelligence (AI) [ 50 ]. Over the last four decades, AI in education (AIEd) has gained remarkable attention for its potential to drive significant advancements in learning, instructional methods, and administrative tasks within educational settings [ 11 ]. In particular, the large language model (LLM), a type of AI algorithm that applies artificial neural networks (ANNs) and massive data sets to understand, summarize, generate, and predict new content that is often difficult to distinguish from human creations [ 79 ], has opened up novel possibilities for enhancing various aspects of education, from content creation to personalized instruction [ 35 ]. Chatbots that leverage the capabilities of LLMs to understand and generate human-like responses have also shown the capacity to enhance student learning and educational outcomes by engaging students, offering timely support, and fostering interactive learning experiences [ 46 ].

The ongoing and remarkable technological advancements in chatbots have made their use more convenient and natural, and have expanded their potential for deployment across various domains [ 70 ]. One prominent example of chatbot applications is the Chat Generative Pre-Trained Transformer, known as ChatGPT, which was introduced by OpenAI, a leading AI research lab, on November 30th, 2022. ChatGPT is built on the transformer architecture, a neural network design based on the self-attention mechanism, which lets the model weigh the relevance of different parts of its input, grasp context across long passages, and thereby produce natural-sounding and coherent output. Additionally, unsupervised generative pre-training followed by fine-tuning allows ChatGPT to generate more relevant and accurate text for specific tasks [ 31 , 62 ]. Furthermore, reinforcement learning from human feedback (RLHF), a machine learning approach that combines reinforcement learning techniques with human-provided feedback, has helped align ChatGPT's responses more closely with human preferences and expectations.
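The self-attention mechanism described above can be illustrated with a minimal, self-contained sketch. This is a toy single-head example in plain Python, not ChatGPT's actual implementation; the token vectors are invented for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """Toy single-head self-attention where queries, keys, and values
    are all the raw token vectors: each output is a softmax-weighted
    average of every token, weighted by dot-product similarity."""
    d = len(tokens[0])
    outputs = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, tokens))
                        for j in range(d)])
    return outputs

# Three hypothetical 4-dimensional token embeddings.
tokens = [[1.0, 0.0, 1.0, 0.0],
          [0.0, 1.0, 0.0, 1.0],
          [1.0, 1.0, 0.0, 0.0]]
out = self_attention(tokens)
print(len(out), len(out[0]))  # prints: 3 4
```

Each output row mixes information from all tokens, which is how attention lets the model use context; production models add learned projection matrices, multiple attention heads, and many stacked layers on top of this basic idea.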

This cutting-edge natural language processing (NLP) tool is widely recognized as one of today's most advanced LLM-based chatbots [ 70 ], allowing users to ask questions and receive detailed, coherent, systematic, personalized, convincing, and informative human-like responses [ 55 ], even within complex and ambiguous contexts [ 63 , 77 ]. ChatGPT is considered the fastest-growing technology in history: in just three months following its public launch, it amassed an estimated 120 million monthly active users [ 16 ] with an estimated 13 million daily queries [ 49 ], surpassing all other applications [ 64 ]. This remarkable growth can be attributed to the unique features and user-friendly interface that ChatGPT offers. Its intuitive design allows users to interact seamlessly with the technology, making it accessible to a diverse range of individuals, regardless of their technical expertise [ 78 ]. Additionally, its exceptional performance, the result of a combination of advanced algorithms, continuous enhancements, and extensive training on a diverse dataset that includes various text sources such as books, articles, websites, and online forums [ 63 ], has contributed to a more engaging and satisfying user experience [ 62 ]. These factors collectively explain its remarkable global growth and set it apart from alternatives like Bard, Bing Chat, ERNIE, and others.

In this context, several studies have explored the technological advancements of chatbots. One noteworthy recent effort, conducted by Schöbel et al. [ 70 ], stands out for its analysis of more than 5,000 studies on communication agents, offering a comprehensive overview of their historical progression and future prospects, including ChatGPT. Moreover, other studies have focused on making comparisons, particularly between ChatGPT and alternative chatbots like Bard, Bing Chat, ERNIE, LaMDA, BlenderBot, and various others. For example, O'Leary [ 53 ] compared two chatbots, LaMDA and BlenderBot, with ChatGPT and revealed that ChatGPT outperformed both. This superiority arises from ChatGPT's capacity to handle a wider range of questions and generate slightly varied perspectives within specific contexts. Similarly, ChatGPT exhibited an impressive ability to formulate interpretable, easily understood responses when compared with Google's featured snippet [ 34 ]. Additionally, ChatGPT was compared to other LLM-based chatbots, including Bard, BERT, and ERNIE. The findings indicated that ChatGPT exhibited strong performance on the given tasks, often outperforming the other models [ 59 ].

Furthermore, in the education context, a comprehensive study systematically compared a range of the most promising chatbots, including Bard, Bing Chat, ChatGPT, and ERNIE, across a multidisciplinary test that required higher-order thinking. The study revealed that ChatGPT achieved the highest score, surpassing Bing Chat and Bard [ 64 ]. Similarly, a comparative analysis was conducted between ChatGPT and Bard in answering a set of 30 mathematical questions and logic problems, grouped into two question sets: Set (A), unavailable online, and Set (B), available online. The results revealed ChatGPT's superiority over Bard on Set (A). Nevertheless, Bard's advantage emerged on Set (B) due to its capacity to access the internet directly and retrieve answers, a capability that ChatGPT does not possess [ 57 ]. Across these varied assessments, ChatGPT has consistently demonstrated exceptional performance compared to the various alternatives in the ever-evolving field of chatbot technology.

The widespread adoption of chatbots, especially ChatGPT, by millions of students and educators has sparked extensive discussions regarding its incorporation into the education sector [ 64 ]. Accordingly, many scholars have contributed to the discourse, expressing both optimism and pessimism regarding the incorporation of ChatGPT into education. For example, ChatGPT has been highlighted for its capabilities in enriching the learning and teaching experience through its ability to support different learning approaches, including adaptive learning, personalized learning, and self-directed learning [ 58 , 60 , 91 ], deliver summative and formative feedback to students and provide real-time responses to questions, increase the accessibility of information [ 22 , 40 , 43 ], foster students' performance, engagement and motivation [ 14 , 44 , 58 ], and enhance teaching practices [ 17 , 18 , 64 , 74 ].

On the other hand, concerns have also been raised regarding its potential negative effects on learning and teaching. These include the dissemination of false information and references [ 12 , 23 , 61 , 85 ], biased reinforcement [ 47 , 50 ], compromised academic integrity [ 18 , 40 , 66 , 74 ], and the potential decline in students' skills [ 43 , 61 , 64 , 74 ]. As a result, ChatGPT has been banned in multiple countries, including Russia, China, Venezuela, Belarus, and Iran, as well as in various educational institutions in India, Italy, Western Australia, France, and the United States [ 52 , 90 ].

Clearly, the advent of chatbots, especially ChatGPT, has provoked significant controversy due to their potential impact on learning and teaching. This indicates the necessity for further exploration to gain a deeper understanding of this technology and carefully evaluate its potential benefits, limitations, challenges, and threats to education [ 79 ]. Conducting a systematic literature review can therefore provide valuable insights into the prospects and obstacles linked to its incorporation into education. This review will primarily focus on ChatGPT, driven by the key factors outlined above.

However, the existing literature lacks a systematic review of the empirical studies conducted so far. This review aims to address that gap by synthesizing the existing empirical studies on chatbots, particularly ChatGPT, in the field of education, highlighting how ChatGPT has been utilized in educational settings and identifying any existing gaps. It may be particularly useful for researchers in the field and for educators who are contemplating the integration of ChatGPT or any chatbot into education. The following research questions will guide this study:

  • What are students' and teachers' initial attempts at utilizing ChatGPT in education?

  • What are the main findings derived from empirical studies that have incorporated ChatGPT into learning and teaching?

2 Methodology

To conduct this study, the authors followed the essential steps of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) and Okoli’s [ 54 ] steps for conducting a systematic review. These included identifying the study’s purpose, drafting a protocol, applying a practical screening process, searching the literature, extracting relevant data, evaluating the quality of the included studies, synthesizing the studies, and ultimately writing the review. The subsequent section provides an extensive explanation of how these steps were carried out in this study.

2.1 Identify the purpose

Given the widespread adoption of ChatGPT by students and teachers for various educational purposes, often without a thorough understanding of responsible and effective use or a clear recognition of its potential impact on learning and teaching, the authors recognized the need for further exploration of ChatGPT's impact on education in this early stage. Therefore, they have chosen to conduct a systematic literature review of existing empirical studies that incorporate ChatGPT into educational settings. Despite the limited number of empirical studies due to the novelty of the topic, their goal is to gain a deeper understanding of this technology and proactively evaluate its potential benefits, limitations, challenges, and threats to education. This effort could help to understand initial reactions and attempts at incorporating ChatGPT into education and bring out insights and considerations that can inform the future development of education.

2.2 Draft the protocol

The next step is formulating the protocol, which outlines the study process in a rigorous and transparent manner and mitigates researcher bias in study selection and data extraction [ 88 ]. The protocol includes the following steps: generating the research question, predefining a literature search strategy, identifying search locations, establishing selection criteria, assessing the studies, developing a data extraction strategy, and creating a timeline.

2.3 Apply practical screen

The screening step aims to accurately filter the articles resulting from the searching step and select the empirical studies that have incorporated ChatGPT into educational contexts, which will guide us in answering the research questions and achieving the objectives of this study. To ensure the rigorous execution of this step, our inclusion and exclusion criteria were determined based on the authors' experience and informed by previous successful systematic reviews [ 21 ]. Table 1 summarizes the inclusion and exclusion criteria for study selection.

2.4 Literature search

We conducted a thorough literature search to identify articles that explored, examined, and addressed the use of ChatGPT in educational contexts. We utilized two research databases: Dimensions.ai, which provides access to a large number of research publications, and lens.org, which offers access to over 300 million articles, patents, and other research outputs from diverse sources. Additionally, we included three databases, Scopus, Web of Knowledge, and ERIC, which contain relevant research on the topic. To identify relevant articles, we used the search formula ("ChatGPT" AND "Education"), using the Boolean operator "AND" to narrow the results. The subject area in the Scopus and ERIC databases was restricted to the "ChatGPT" and "Education" keywords, and in the WoS database it was limited to the "Education" category. The search was conducted between the 3rd and 10th of April 2023 and resulted in 276 articles across the selected databases (111 articles from Dimensions.ai, 65 from Scopus, 28 from Web of Science, 14 from ERIC, and 58 from Lens.org). These articles were imported into the Rayyan web-based system for analysis, where duplicates were identified automatically. Subsequently, the first author manually reviewed the duplicated articles, confirmed that they had identical content, and removed them, leaving 135 unique articles. Afterward, the titles, abstracts, and keywords of the first 40 manuscripts were scanned by the first author and discussed with the second and third authors to resolve any disagreements. The first author then proceeded with the filtering process for all articles, carefully applying the inclusion and exclusion criteria presented in Table 1. Articles that met any one of the exclusion criteria were eliminated, resulting in 26 articles. Afterward, the authors met to carefully scan and discuss them.
The authors agreed to eliminate any empirical studies solely focused on checking ChatGPT capabilities, as these studies do not guide us in addressing the research questions and achieving the study's objectives. This resulted in 14 articles eligible for analysis.
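As a quick consistency check, the record counts reported in the search and screening steps above can be tallied in a few lines of Python (all numbers are taken directly from the text):

```python
# Records retrieved per database, as reported above.
records_per_db = {
    "Dimensions.ai": 111,
    "Scopus": 65,
    "Web of Science": 28,
    "ERIC": 14,
    "Lens.org": 58,
}

retrieved = sum(records_per_db.values())  # total records identified
after_dedup = 135      # unique articles after duplicate removal in Rayyan
after_criteria = 26    # remaining after the inclusion/exclusion criteria
included = 14          # remaining after dropping capability-only studies

print("retrieved:", retrieved)                                # 276
print("duplicates removed:", retrieved - after_dedup)         # 141
print("excluded by criteria:", after_dedup - after_criteria)  # 109
print("capability-only removed:", after_criteria - included)  # 12
```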

2.5 Quality appraisal

The examination and evaluation of the quality of the extracted articles is a vital step [ 9 ]. Therefore, the extracted articles were carefully evaluated for quality using Fink’s [ 24 ] standards, which emphasize the necessity for detailed descriptions of methodology, results, conclusions, strengths, and limitations. The process began with a thorough assessment of each study's design, data collection, and analysis methods to ensure their appropriateness and comprehensive execution. The clarity, consistency, and logical progression from data to results and conclusions were also critically examined. Potential biases and recognized limitations within the studies were also scrutinized. Ultimately, two articles were excluded for failing to meet Fink’s criteria, particularly in providing sufficient detail on methodology, results, conclusions, strengths, or limitations. The review process is illustrated in Fig.  1 .

Figure 1: The study selection process

2.6 Data extraction

The next step is data extraction, the process of capturing the key information and categories from the included studies. To improve efficiency, reduce variation among authors, and minimize errors in data analysis, the coding categories were constructed using Creswell's [15] coding techniques for data extraction and interpretation. The coding process involves three sequential steps. The initial stage is open coding, in which the researcher examines the data and generates codes to describe and categorize it, gaining a deeper understanding without preconceived ideas. Open coding is followed by axial coding, in which the interrelationships between the open codes are analyzed to establish broader categories or themes. The process concludes with selective coding, which refines and integrates the categories or themes to identify the core concepts emerging from the data. The first coder performed the coding process and then discussed the coding categories for the first five articles with the second and third authors to finalize them. The first coder then coded all studies and engaged again in discussions with the other authors to finalize the coding process. After a comprehensive analysis of the key information from the included studies, the data extraction and interpretation process yielded several themes, which have been categorized and are presented in Table 2. Note that the open coding results were omitted from Table 2 for readability, as they consisted of many generic elements, such as single words, short phrases, or sentences mentioned in the studies.

2.7 Synthesis of studies

In this stage, we gathered, discussed, and analyzed the key findings that emerged from the selected studies. The synthesis stage marks a transition from an author-centric to a concept-centric focus, enabling us to map all the available information and evaluate the data most effectively [87]. Initially, the authors extracted general information about the selected studies, including the author(s)' names, study titles, years of publication, educational levels, research methodologies, sample sizes, participants, main aims or objectives, raw data sources, and analysis methods. Following that, all key information and significant results from the selected studies were compiled using Creswell's [15] coding techniques for data extraction and interpretation to identify the core concepts and themes emerging from the data, focusing on those that directly contributed to our research questions and objectives, such as the initial utilization of ChatGPT in learning and teaching, learners' and educators' familiarity with ChatGPT, and the main findings of each study. Finally, the data for each selected study were extracted into an Excel spreadsheet for processing. The spreadsheet was reviewed by the authors in a series of discussions to finalize this process and prepare the data for further analysis. Afterward, the final results were analyzed and presented in various types of charts and graphs. Table 4 presents the extracted data from the selected studies, with each study labeled with a capital 'S' followed by a number.
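One row of the extraction spreadsheet can be pictured as a record keyed by the fields listed above. The field names below follow the text; the example values are hypothetical placeholders, not data from the review.

```python
# Fields captured per study, following the list in the text.
extraction_fields = [
    "authors", "title", "publication_year", "educational_level",
    "research_methodology", "sample_size", "participants",
    "main_aims", "raw_data_sources", "analysis_methods",
]

# One hypothetical extraction row (placeholder values, not real data).
example_row = {field: "N/A" for field in extraction_fields}
example_row.update({"publication_year": 2023, "research_methodology": "mixed methods"})

# Sanity check: the row has exactly the agreed fields, no more and no fewer.
assert set(example_row) == set(extraction_fields)
```

Fixing the field set up front is what keeps the per-study rows comparable when the spreadsheet is later summarized into charts and graphs.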

3 Results

This section consists of two main parts. The first part provides a descriptive analysis of the data compiled from the reviewed studies. The second part presents the answers to the research questions and the main findings of these studies.

3.1 Part 1: descriptive analysis

This section provides a descriptive analysis of the reviewed studies, covering educational levels and fields, participant distribution, country contributions, research methodologies, sample sizes, study populations, publication years, the list of journals, familiarity with ChatGPT, sources of data, and the main aims and objectives of the studies. Table 4 presents a comprehensive overview of the extracted data from the selected studies.

3.1.1 The number of the reviewed studies and publication years

The total number of reviewed studies was 14. All were empirical studies published in different journals focusing on education and technology. One study was published in 2022 [S1], while the remaining thirteen were published in 2023 [S2]-[S14]. Table 3 lists the year of publication, the journal names, and the number of reviewed studies published in each journal.

3.1.2 Educational levels and fields

The majority of the reviewed studies, 11 in total, were conducted in higher education institutions [S1]-[S10] and [S13]. Two studies did not specify the educational level of the population [S12] and [S14], while one study focused on elementary education [S11]. The reviewed studies covered various fields of education. Three studies focused on arts and humanities education, specifically English education [S8], [S11], and [S14]. Two studies focused on engineering education, one in computer engineering [S2] and the other in construction education [S3]. Two studies focused on mathematics education [S5] and [S12], one on social science education [S13], one on early education [S4], and one on journalism education [S9]. Finally, three studies did not specify the field of education [S1], [S6], and [S7]. Figure 2 shows the educational levels in the reviewed studies, while Fig. 3 shows their contexts.

Figure 2: Educational levels in the reviewed studies

Figure 3: Context of the reviewed studies

3.1.3 Participant distribution and country contributions

The reviewed studies were conducted across different geographic regions, providing diverse representation. The majority of the studies, 10 in total [S1]-[S3], [S5]-[S9], [S11], and [S14], focused on participants from single countries, namely Pakistan, the United Arab Emirates, China, Indonesia, Poland, Saudi Arabia, South Korea, Spain, Tajikistan, and the United States. In contrast, four studies involved participants from multiple countries: China and the United States [S4]; China, the United Kingdom, and the United States [S10]; the United Arab Emirates, Oman, Saudi Arabia, and Jordan [S12]; and Turkey, Sweden, Canada, and Australia [S13]. Figures 4 and 5 illustrate the distribution of participants across single or multiple countries and the contribution of each country to the reviewed studies, respectively.

Figure 4: The reviewed studies conducted in single or multiple countries

Figure 5: The contribution of each country in the studies

3.1.4 Study population and sample size

Four study populations were included: university students, university teachers, university teachers and students together, and elementary school teachers. Six studies involved university students [S2], [S3], and [S5]-[S8]. Three studies focused on university teachers [S1], [S4], and [S6], while one study specifically targeted elementary school teachers [S11]. Additionally, four studies included both university teachers and students [S10] and [S12]-[S14]; among them, study [S13] specifically included postgraduate students. In terms of sample size, nine studies included a small sample of fewer than 50 participants [S1], [S3], [S6], [S8], and [S10]-[S13]. Three studies had 50–100 participants [S2], [S9], and [S14], and only one study had more than 100 participants [S7]. It is worth noting that study [S4] adopted a mixed-methods approach, including 10 participants for qualitative analysis and 110 participants for quantitative analysis.

3.1.5 Participants’ familiarity with using ChatGPT

The reviewed studies recruited a diverse range of participants with varying levels of familiarity with ChatGPT. Five studies [S2], [S4], [S6], [S8], and [S12] involved participants already familiar with ChatGPT, while eight studies [S1], [S3], [S5], [S7], [S9], [S10], [S13], and [S14] included individuals with differing levels of familiarity. Notably, one study [S11] had participants who were entirely unfamiliar with ChatGPT. It is important to note that four studies [S3], [S5], [S9], and [S11] provided training or guidance to their participants before the study, while the remaining ten [S1], [S2], [S4], [S6]-[S8], [S10], and [S12]-[S14] did not, citing the participants' existing familiarity with ChatGPT.

3.1.6 Research methodology approaches and source(s) of data

The reviewed studies adopted various research methodology approaches. Seven studies adopted a qualitative methodology [S1], [S4], [S6], [S8], [S10], [S11], and [S12]; three adopted a quantitative methodology [S3], [S7], and [S14]; and four employed mixed methods, combining the strengths of qualitative and quantitative approaches [S2], [S5], [S9], and [S13].
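The methodology counts above can be cross-checked against the per-study labels. The mapping below simply restates the text's study-to-methodology assignments; the tally confirms the reported split of 7 qualitative, 3 quantitative, and 4 mixed-methods studies.

```python
from collections import Counter

# Methodology of each reviewed study, as reported in the text.
methodology = {
    "S1": "qualitative", "S4": "qualitative", "S6": "qualitative",
    "S8": "qualitative", "S10": "qualitative", "S11": "qualitative",
    "S12": "qualitative",
    "S3": "quantitative", "S7": "quantitative", "S14": "quantitative",
    "S2": "mixed", "S5": "mixed", "S9": "mixed", "S13": "mixed",
}

counts = Counter(methodology.values())
print(counts["qualitative"], counts["quantitative"], counts["mixed"])  # 7 3 4
```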

In terms of the source(s) of data, the reviewed studies obtained their data from various sources, such as interviews, questionnaires, and pre- and post-tests. Six studies relied on interviews as their primary source of data [S1], [S4], [S6], [S10], [S11], and [S12]; four relied on questionnaires [S2], [S7], [S13], and [S14]; two combined pre- and post-tests with questionnaires [S3] and [S9]; and two combined questionnaires with interviews [S5] and [S8]. It is important to note that six of the reviewed studies were quasi-experimental [S3], [S5], [S8], [S9], [S12], and [S14], while the remaining eight were experimental studies [S1], [S2], [S4], [S6], [S7], [S10], [S11], and [S13]. Figures 6 and 7 illustrate the research methodologies and the sources of data used in the reviewed studies, respectively.

Figure 6: Research methodologies in the reviewed studies

Figure 7: Source of data in the reviewed studies

3.1.7 The aim and objectives of the studies

The reviewed studies encompassed a diverse set of aims, with several incorporating multiple primary objectives. Six studies [S3], [S6], [S7], [S8], [S11], and [S12] examined the integration of ChatGPT in educational contexts; four [S4], [S5], [S13], and [S14] investigated the various implications of its use in education; and three [S2], [S9], and [S10] explored both its integration and its implications. Additionally, seven studies explicitly explored the attitudes and perceptions of students [S2] and [S3], educators [S1] and [S6], or both [S10], [S12], and [S13] regarding the utilization of ChatGPT in educational settings.

3.2 Part 2: research questions and main findings of the reviewed studies

This part presents the answers to the research questions and the main findings of the reviewed studies, classified into two main categories (learning and teaching) according to the AI in education classification by [36]. Figure 8 summarizes the main findings of the reviewed studies in a visually informative diagram. Table 4 provides a detailed list of the key information extracted from the selected studies that led to these themes.

Figure 8: The main findings in the reviewed studies

4 Students' initial attempts at utilizing ChatGPT in learning and main findings from students' perspective

4.1 Virtual intelligent assistant

Nine studies demonstrated that students utilized ChatGPT as an intelligent assistant to enhance and support their learning. Students employed it for various purposes, such as answering on-demand questions [S2]-[S5], [S8], [S10], and [S12], providing valuable information and learning resources [S2]-[S6] and [S8], and giving immediate feedback [S2], [S4], [S9], [S10], and [S12]. Students were generally confident in the accuracy of ChatGPT's responses, considering them relevant, reliable, and detailed [S3]-[S5] and [S8]. However, some students indicated room for improvement, finding that the answers were not always accurate [S2], that misleading information might be provided, or that responses did not always align with their expectations [S6] and [S10]. Students also observed that ChatGPT's accuracy depends on several factors, including the quality and specificity of the user's input, the complexity of the question or topic, and the scope and relevance of its training data [S12]. Many students felt that ChatGPT's answers were not always accurate, and most believed that using it effectively requires good background knowledge.

4.2 Writing and language proficiency assistant

Six of the reviewed studies highlighted that students utilized ChatGPT as a valuable assistant for improving their academic writing skills and language proficiency. Among these, three focused mainly on English education, demonstrating that students showed sufficient mastery in using ChatGPT for generating ideas, summarizing, paraphrasing texts, and completing essays [S8], [S11], and [S14]. Furthermore, ChatGPT supported their writing by turning students into active investigators rather than passive recipients of knowledge and facilitated the development of their writing skills [S11] and [S14]. Similarly, ChatGPT allowed students to generate unique ideas and perspectives, leading to deeper analysis and reflection in their journalism writing [S9]. In terms of language proficiency, ChatGPT allowed participants to translate content into their home languages, making it more accessible and relevant to their context [S4]. It also enabled them to request changes in linguistic tone or style [S8], and participants used it to check grammar or as a dictionary [S11].

4.3 Valuable resource for learning approaches

Five studies demonstrated that students used ChatGPT as a valuable complementary resource for self-directed learning. It provided learning resources and guidance on diverse educational topics and created a supportive home learning environment [S2] and [S4]. Moreover, it offered step-by-step guidance that helped students grasp concepts at their own pace and enhance their understanding [S5], streamlined the completion of tasks and projects carried out independently [S7], provided comprehensive and easy-to-understand explanations on various subjects [S10], and assisted in studying geometry operations, empowering students to explore them at their own pace [S12]. Three studies showed that students used ChatGPT as a valuable resource for personalized learning. It delivered age-appropriate conversations and tailored teaching based on a child's interests [S4], acted as a personalized learning assistant that adapted to students' needs and pace in understanding mathematical concepts [S12], and enabled personalized learning experiences in the social sciences by adapting to students' needs and learning styles [S13]. On the other hand, it is important to note that, according to one study [S5], students suggested that using ChatGPT may negatively affect collaborative learning competencies among students.

4.4 Enhancing students' competencies

Six of the reviewed studies showed that ChatGPT is a valuable tool for improving a wide range of skills among students. Two studies provided evidence that ChatGPT led to improvements in students' critical thinking, reasoning skills, and hazard recognition competencies by engaging them in interactive conversations or activities and providing responses related to their disciplines in journalism [S5] and construction education [S9]. Furthermore, two studies focused on mathematics education showed the positive impact of ChatGPT on students' problem-solving abilities, both in unraveling problem-solving questions [S12] and in enhancing students' understanding of the problem-solving process [S5]. Lastly, one study indicated that ChatGPT effectively contributed to enhancing conversational social skills [S4].

4.5 Supporting students' academic success

Seven of the reviewed studies highlighted that students found ChatGPT beneficial for learning, as it enhanced learning efficiency and improved the learning experience. It improved students' efficiency in computer engineering studies by providing well-structured responses and good explanations [S2]. Students found it extremely useful for hazard reporting [S3], and it enhanced their efficiency and capabilities in solving mathematics problems [S5] and [S12]. Furthermore, by finding information, generating ideas, translating texts, and providing alternative questions, ChatGPT helped students deepen their understanding of various subjects [S6]. It contributed to an increase in students' overall productivity [S7] and improved their efficiency in composing written tasks [S8]. Regarding learning experiences, ChatGPT was instrumental in helping students identify hazards that they might otherwise have overlooked [S3]. It also improved students' learning experiences in solving mathematics problems and developing related abilities [S5] and [S12]. Moreover, it increased students' successful completion of important tasks in their studies [S7], particularly writing tasks of average difficulty [S8], and increased the chances of educational success by providing students with baseline knowledge on various topics [S10].

5 Teachers' initial attempts at utilizing ChatGPT in teaching and main findings from teachers' perspective

5.1 Valuable resource for teaching

The reviewed studies showed that teachers employed ChatGPT to recommend, modify, and generate diverse, creative, organized, and engaging educational content, teaching materials, and testing resources more rapidly [S4], [S6], [S10], and [S11]. Additionally, teachers experienced increased productivity, as ChatGPT facilitated quick and accurate responses to questions, fact-checking, and information searches [S1]. It also proved valuable in constructing new knowledge [S6] and providing timely answers to students' questions in classrooms [S11]. Moreover, ChatGPT enhanced teachers' efficiency by generating new ideas for activities and preplanning activities for their students [S4] and [S6], including serving as an interactive language game partner [S11].

5.2 Improving productivity and efficiency

The reviewed studies showed that participants' productivity and work efficiency were significantly enhanced by ChatGPT, as it enabled them to allocate more time to other tasks and reduce their overall workloads [S6], [S10], [S11], [S13], and [S14]. However, three studies [S1], [S4], and [S11] indicated a negative perception of and attitude toward using ChatGPT among teachers. This negativity stemmed from a lack of the skills needed to use it effectively [S1], limited familiarity with it [S4], and occasional inaccuracies in the content it provided [S10].

5.3 Catalyzing new teaching methodologies

Five of the reviewed studies highlighted that educators found it necessary to redefine their teaching profession with the assistance of ChatGPT [S11], develop new effective learning strategies [S4], and adapt teaching strategies and methodologies to ensure the development of essential skills for future engineers [S5]. They also emphasized the importance of adopting new educational philosophies and approaches that can evolve with the introduction of ChatGPT into the classroom [S12]. Furthermore, updating curricula to focus on strengthening distinctly human capacities, such as emotional intelligence, creativity, and philosophical perspectives [S13], was found to be essential.

5.4 Effective utilization of ChatGPT in teaching

According to the reviewed studies, effective utilization of ChatGPT in education requires providing teachers with well-structured training, support, and an adequate background in how to use it responsibly [S1], [S3], [S11], and [S12]. Establishing clear rules and regulations regarding its usage is essential to ensure that it positively impacts teaching and learning processes, including students' skills [S1], [S4], [S5], [S8], [S9], and [S11]-[S14]. Moreover, conducting further research and engaging in discussions with policymakers and stakeholders is crucial for the successful integration of ChatGPT in education and for maximizing the benefits for both educators and students [S1], [S6]-[S10], and [S12]-[S14].

6 Discussion

The purpose of this review was to conduct a systematic review of empirical studies that have explored the utilization of ChatGPT, one of today's most advanced LLM-based chatbots, in education. The findings of the reviewed studies revealed several ways in which ChatGPT has been utilized in different learning and teaching practices, and they provided insights and considerations that can facilitate its effective and responsible use in future educational contexts. The results of the reviewed studies came from diverse fields of education, which helped us avoid a review biased toward a specific field. Similarly, the reviewed studies were conducted across different geographic regions, and this variety in geographic representation enriched the findings of this review.

In response to RQ1, "What are students' and teachers' initial attempts at utilizing ChatGPT in education?", the findings from this review provide comprehensive insights. Chatbots, including ChatGPT, play a crucial role in supporting student learning, enhancing learning experiences, and facilitating diverse learning approaches [42, 43]. This review found that ChatGPT has been instrumental in enhancing students' learning experiences by serving as a virtual intelligent assistant, providing immediate feedback and on-demand answers, and engaging in educational conversations. Additionally, students have benefited from ChatGPT's ability to generate ideas, compose essays, and perform tasks such as summarizing, translating, paraphrasing texts, and checking grammar, thereby enhancing their writing and language competencies. Furthermore, students have turned to ChatGPT for help in understanding concepts and homework, structured learning plans, and clarification of assignments and tasks. This fosters a supportive home learning environment and allows students to take responsibility for their own learning and cultivate the skills and approaches essential for it [26, 27, 28]. This finding aligns with the studies of Saqr et al. [68, 69], which highlighted that when students actively engage in their own learning process, additional advantages follow, such as heightened motivation, enhanced achievement, and the cultivation of enthusiasm, turning students into advocates for their own learning.

Moreover, students have utilized ChatGPT for tailored teaching and step-by-step guidance on diverse educational topics, for streamlining the completion of tasks and projects, and for generating and recommending educational content. This personalization enhances the learning environment, leading to increased academic success. This finding aligns with other recent studies [26, 27, 28, 60, 66], which revealed that ChatGPT has the potential to offer personalized learning experiences and support an effective learning process by providing students with customized feedback and explanations tailored to their needs and abilities, ultimately fostering students' performance, engagement, and motivation and leading to increased academic success [14, 44, 58]. This outcome is in line with the findings of Saqr et al. [68, 69], which emphasized that learning strategies are important catalysts of students' learning, as students who utilize effective learning strategies are more likely to achieve better academic results.

Teachers, too, have capitalized on ChatGPT's capabilities to enhance productivity and efficiency, using it to create lesson plans, generate quizzes, provide additional resources, generate and preplan new ideas for activities, and aid in answering students' questions. This adoption of technology introduces new opportunities to support teaching and learning practices and enhance teacher productivity. This finding aligns with those of Day [17], De Castro [18], and Su and Yang [74], as well as with those of Valtonen et al. [82], who revealed that emerging technological advancements have opened up novel opportunities and means to support teaching and learning practices and enhance teachers' productivity.

In response to RQ2, "What are the main findings derived from empirical studies that have incorporated ChatGPT into learning and teaching?", the findings from this review provide profound insights and raise significant concerns. Starting with the insights, chatbots, including ChatGPT, have demonstrated the potential to reshape and revolutionize education, creating novel opportunities for enhancing the learning process and its outcomes [83], facilitating different learning approaches, and offering a range of pedagogical benefits [19, 43, 72]. In this context, this review found that ChatGPT could open avenues for educators to adopt or develop new, effective learning and teaching strategies that can evolve with its introduction into the classroom. Nonetheless, there is an evident lack of research on the potential impact of generative machine learning models in diverse educational settings [83]. This requires teachers to attain a high level of proficiency in incorporating chatbots, such as ChatGPT, into their classrooms in order to create inventive, well-structured, and captivating learning strategies. In the same vein, the review also found that teachers who lacked the requisite skills to utilize ChatGPT realized that it did not contribute positively to their work and could potentially have adverse effects [37]. This concern could lead to inequity of access to the benefits of chatbots, including ChatGPT, as individuals who lack the necessary expertise may not be able to harness their full potential, resulting in disparities in educational outcomes and opportunities. Therefore, immediate action is needed to address these potential issues.
A potential solution is to offer training, support, and competency development for teachers to ensure that all of them can leverage chatbots, including ChatGPT, effectively and equitably in their educational practices [5, 28, 80], which could enhance accessibility and inclusivity and potentially result in innovative outcomes [82, 83].

Additionally, chatbots, including ChatGPT, have the potential to significantly impact students' thinking abilities, including retention, reasoning, and analysis skills [19, 45], and to foster innovation and creativity [83]. This review found that ChatGPT could contribute to improving a wide range of skills among students. However, it also found that frequent use of ChatGPT may decrease innovative capacities, collaborative skills, cognitive capacities, and students' motivation to attend classes, and could reduce higher-order thinking skills [22, 29]. Therefore, immediate action is needed to carefully examine the long-term impact of chatbots such as ChatGPT on learning outcomes, and to explore their incorporation into educational settings as supportive tools without compromising students' cognitive development and critical thinking abilities. In the same vein, the review also found it challenging to draw a consistent conclusion regarding the potential of ChatGPT to support a self-directed learning approach. This finding aligns with the recent study of Baskara [8]; therefore, further research is needed to explore the potential of ChatGPT for self-directed learning. One potential solution involves utilizing learning analytics as a novel approach to examining various aspects of students' learning and supporting them in their individual endeavors [32]. This approach could bridge the gap by facilitating an in-depth analysis of how learners engage with ChatGPT, identifying trends in self-directed learning behavior, and assessing its influence on their outcomes.

Turning to the significant concerns, a fundamental challenge with LLM-based chatbots, including ChatGPT, is the accuracy and quality of the information and responses they provide, as they may present false information as truth, a phenomenon often referred to as "hallucination" [3, 49]. In this context, this review found that the information provided was not entirely satisfactory. Consequently, the utilization of chatbots raises potential concerns, such as the generation of inaccurate or misleading information, especially for students who use them to support their learning. This finding aligns with other findings [6, 30, 35, 40], which revealed that incorporating chatbots such as ChatGPT into education presents challenges related to accuracy and reliability, both because they are trained on large corpora of data that may contain inaccuracies and because of the way users formulate their prompts. Therefore, immediate action is needed to address these potential issues. One possible solution is to equip students with the necessary skills and competencies, including a background understanding of how to use the tool effectively and the ability to assess and evaluate the information it generates, as the accuracy and quality of the provided information depend on the input, its complexity, the topic, and the relevance of the training data [28, 49, 86]. It is also essential to examine how learners can be educated about how these models operate, the data used in their training, and how to recognize their limitations, challenges, and issues [79].

Furthermore, chatbots present a substantial challenge to maintaining academic integrity [20, 56] and avoiding copyright violations [83], both significant concerns in education. The review found that the potential misuse of ChatGPT might foster cheating, facilitate plagiarism, and threaten academic integrity. This issue is affirmed by the research of Basic et al. [7], who presented evidence that students who utilized ChatGPT in their writing assignments had more plagiarism cases than those who did not. These findings align with the conclusions drawn by Cotton et al. [13], Hisan and Amri [33], and Sullivan et al. [75], who revealed that the integration of chatbots such as ChatGPT into education poses a significant challenge to the preservation of academic integrity. Moreover, chatbots, including ChatGPT, have made plagiarism harder to identify [47, 67, 76]. Findings from previous studies [1, 84] indicate that AI-generated text often went undetected by plagiarism software such as Turnitin. Turnitin and similar detection tools, such as ZeroGPT, GPTZero, and Copyleaks, have since evolved, incorporating enhanced techniques to detect AI-generated text; however, studies have found that these tools are still not fully ready to identify AI-generated text accurately and reliably, with false positives remaining possible [10, 51], and novel detection methods may need to be created and implemented [4]. This issue leads to a further concern: the difficulty of accurately evaluating student performance when students use chatbots such as ChatGPT to assist with their assignments. Consequently, most LLM-driven chatbots present a substantial challenge to traditional assessments [64].
Findings from previous studies indicate the importance of rethinking, improving, and redesigning innovative assessment methods in the era of chatbots [14, 20, 64, 75]. These methods should prioritize evaluating students' ability to apply knowledge to complex cases and demonstrate comprehension, rather than focusing solely on the final product. Therefore, immediate action is needed to address these potential issues. One possible solution is the development of clear guidelines, regulatory policies, and pedagogical guidance. These measures would help regulate the proper and ethical utilization of chatbots, such as ChatGPT, and must be established before the tools are introduced to students [35, 38, 39, 41, 89].

In summary, our review has delved into the utilization of ChatGPT, a prominent example of chatbots, in education, addressing the question of how ChatGPT has been utilized in education. However, there remain significant gaps, which necessitate further research to shed light on this area.

7 Conclusions

This systematic review has shed light on the varied initial attempts at incorporating ChatGPT into education by both learners and educators, while also offering insights and considerations that can facilitate its effective and responsible use in future educational contexts. The analysis of the 14 selected studies revealed the dual-edged impact of ChatGPT in educational settings. On the positive side, ChatGPT significantly aided the learning process in various ways. Learners used it as a virtual intelligent assistant, benefiting from its ability to provide immediate feedback, on-demand answers, and easy access to educational resources. They also used it to enhance their writing and language skills, engaging in practices such as generating ideas, composing essays, and performing tasks like summarizing, translating, paraphrasing texts, or checking grammar. Importantly, learners further utilized it to support and facilitate their directed and personalized learning on a broad range of educational topics, assisting in understanding concepts and homework, providing structured learning plans, and clarifying assignments and tasks. Educators, on the other hand, found ChatGPT beneficial for enhancing productivity and efficiency. They used it for creating lesson plans, generating quizzes, providing additional resources, and answering learners' questions, which saved time and allowed for more dynamic and engaging teaching strategies and methodologies.

However, the review also pointed out negative impacts. The results revealed that overuse of ChatGPT could decrease innovative capacities and collaborative learning among learners. Specifically, relying too much on ChatGPT for quick answers can inhibit learners' critical thinking and problem-solving skills. Learners might not engage deeply with the material or consider multiple solutions to a problem. This tendency was particularly evident in group projects, where learners preferred consulting ChatGPT individually for solutions over brainstorming and collaborating with peers, which negatively affected their teamwork abilities. On a broader level, integrating ChatGPT into education has also raised several concerns, including the potential for providing inaccurate or misleading information, issues of inequity in access, challenges related to academic integrity, and the possibility of misusing the technology.

Accordingly, this review emphasizes the urgency of developing clear rules, policies, and regulations to ensure ChatGPT's effective and responsible use in educational settings, alongside other chatbots, by both learners and educators. This requires providing well-structured training to educate them on responsible usage and understanding its limitations, along with offering sufficient background information. Moreover, it highlights the importance of rethinking, improving, and redesigning innovative teaching and assessment methods in the era of ChatGPT. Furthermore, conducting further research and engaging in discussions with policymakers and stakeholders are essential steps to maximize the benefits for both educators and learners and ensure academic integrity.

It is important to acknowledge that this review has certain limitations. Firstly, the limited number of reviewed studies can be attributed to several factors: the novelty of the technology, as new technologies often face initial skepticism and cautious adoption; the lack of clear guidelines or best practices for leveraging this technology for educational purposes; and institutional or governmental policies affecting its utilization in educational contexts. These factors, in turn, reduced the number of studies available for review. Secondly, the reviewed studies relied on the original version of ChatGPT, based on GPT-3 or GPT-3.5, so new studies utilizing the updated version, GPT-4, may lead to different findings. Therefore, conducting follow-up systematic reviews is essential once more empirical studies on ChatGPT are published. Additionally, long-term studies are necessary to thoroughly examine and assess the impact of ChatGPT on various educational practices.

Despite these limitations, this systematic review has highlighted the transformative potential of ChatGPT in education, revealing its diverse utilization by learners and educators alike, and has summarized the benefits of incorporating it into education as well as the critical concerns and challenges that must be addressed to facilitate its effective and responsible use in future educational contexts. This review can serve as an insightful resource for practitioners who seek to integrate ChatGPT into education and may stimulate further research in the field.

Data availability

The data supporting our findings are available upon request.

Abbreviations

  • AI: Artificial intelligence
  • AIEd: AI in education
  • LLM: Large language model
  • ANN: Artificial neural networks
  • ChatGPT: Chat Generative Pre-Trained Transformer
  • RNN: Recurrent neural networks
  • LSTM: Long short-term memory
  • RLHF: Reinforcement learning from human feedback
  • NLP: Natural language processing
  • PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

References

AlAfnan MA, Dishari S, Jovic M, Lomidze K. ChatGPT as an educational tool: opportunities, challenges, and recommendations for communication, business writing, and composition courses. J Artif Intell Technol. 2023. https://doi.org/10.37965/jait.2023.0184.


Ali JKM, Shamsan MAA, Hezam TA, Mohammed AAQ. Impact of ChatGPT on learning motivation. J Engl Stud Arabia Felix. 2023;2(1):41–9. https://doi.org/10.56540/jesaf.v2i1.51 .

Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus. 2023. https://doi.org/10.7759/cureus.35179 .

Anderson N, Belavý DL, Perle SM, Hendricks S, Hespanhol L, Verhagen E, Memon AR. AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in sports & exercise medicine manuscript generation. BMJ Open Sport Exerc Med. 2023;9(1): e001568. https://doi.org/10.1136/bmjsem-2023-001568 .

Ausat AMA, Massang B, Efendi M, Nofirman N, Riady Y. Can chat GPT replace the role of the teacher in the classroom: a fundamental analysis. J Educ. 2023;5(4):16100–6.


Baidoo-Anu D, Ansah L. Education in the Era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. Soc Sci Res Netw. 2023. https://doi.org/10.2139/ssrn.4337484 .

Basic Z, Banovac A, Kruzic I, Jerkovic I. Better by you, better than me, chatgpt3 as writing assistance in students essays. 2023. arXiv preprint arXiv:2302.04536.

Baskara FR. The promises and pitfalls of using chat GPT for self-determined learning in higher education: an argumentative review. Prosiding Seminar Nasional Fakultas Tarbiyah dan Ilmu Keguruan IAIM Sinjai. 2023;2:95–101. https://doi.org/10.47435/sentikjar.v2i0.1825 .

Behera RK, Bala PK, Dhir A. The emerging role of cognitive computing in healthcare: a systematic literature review. Int J Med Inform. 2019;129:154–66. https://doi.org/10.1016/j.ijmedinf.2019.04.024 .

Chaka C. Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: the case of five AI content detection tools. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.2.12 .

Chiu TKF, Xia Q, Zhou X, Chai CS, Cheng M. Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Comput Educ Artif Intell. 2023;4:100118. https://doi.org/10.1016/j.caeai.2022.100118 .

Choi EPH, Lee JJ, Ho M, Kwok JYY, Lok KYW. Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Educ Today. 2023;125:105796. https://doi.org/10.1016/j.nedt.2023.105796 .

Cotton D, Cotton PA, Shipway JR. Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov Educ Teach Int. 2023. https://doi.org/10.1080/14703297.2023.2190148 .

Crawford J, Cowling M, Allen K. Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). J Univ Teach Learn Pract. 2023. https://doi.org/10.53761/1.20.3.02 .

Creswell JW. Educational research: planning, conducting, and evaluating quantitative and qualitative research [Ebook]. 4th ed. London: Pearson Education; 2015.

Curry D. ChatGPT Revenue and Usage Statistics (2023)—Business of Apps. 2023. https://www.businessofapps.com/data/chatgpt-statistics/

Day T. A preliminary investigation of fake peer-reviewed citations and references generated by ChatGPT. Prof Geogr. 2023. https://doi.org/10.1080/00330124.2023.2190373 .

De Castro CA. A Discussion about the Impact of ChatGPT in education: benefits and concerns. J Bus Theor Pract. 2023;11(2):p28. https://doi.org/10.22158/jbtp.v11n2p28 .

Deng X, Yu Z. A meta-analysis and systematic review of the effect of Chatbot technology use in sustainable education. Sustainability. 2023;15(4):2940. https://doi.org/10.3390/su15042940 .

Eke DO. ChatGPT and the rise of generative AI: threat to academic integrity? J Responsib Technol. 2023;13:100060. https://doi.org/10.1016/j.jrt.2023.100060 .

Elmoazen R, Saqr M, Tedre M, Hirsto L. A systematic literature review of empirical research on epistemic network analysis in education. IEEE Access. 2022;10:17330–48. https://doi.org/10.1109/access.2022.3149812 .

Farrokhnia M, Banihashem SK, Noroozi O, Wals AEJ. A SWOT analysis of ChatGPT: implications for educational practice and research. Innov Educ Teach Int. 2023. https://doi.org/10.1080/14703297.2023.2195846 .

Fergus S, Botha M, Ostovar M. Evaluating academic answers generated using ChatGPT. J Chem Educ. 2023;100(4):1672–5. https://doi.org/10.1021/acs.jchemed.3c00087 .

Fink A. Conducting research literature reviews: from the Internet to Paper. Incorporated: SAGE Publications; 2010.

Firaina R, Sulisworo D. Exploring the usage of ChatGPT in higher education: frequency and impact on productivity. Buletin Edukasi Indonesia (BEI). 2023;2(01):39–46. https://doi.org/10.56741/bei.v2i01.310 .

Firat M. How chat GPT can transform autodidactic experiences and open education. Department of Distance Education, Open Education Faculty, Anadolu Unive. 2023. https://orcid.org/0000-0001-8707-5918

Firat M. What ChatGPT means for universities: perceptions of scholars and students. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.1.22 .

Fuchs K. Exploring the opportunities and challenges of NLP models in higher education: is Chat GPT a blessing or a curse? Front Educ. 2023. https://doi.org/10.3389/feduc.2023.1166682 .

García-Peñalvo FJ. La percepción de la inteligencia artificial en contextos educativos tras el lanzamiento de ChatGPT: disrupción o pánico. Educ Knowl Soc. 2023;24: e31279. https://doi.org/10.14201/eks.31279 .

Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor A, Chartash D. How does ChatGPT perform on the United States medical Licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9: e45312. https://doi.org/10.2196/45312 .

Hashana AJ, Brundha P, Ayoobkhan MUA, Fazila S. Deep Learning in ChatGPT—A Survey. In   2023 7th international conference on trends in electronics and informatics (ICOEI) . 2023. (pp. 1001–1005). IEEE. https://doi.org/10.1109/icoei56765.2023.10125852

Hirsto L, Saqr M, López-Pernas S, Valtonen T. A systematic narrative review of learning analytics research in K-12 and schools. Proceedings. 2022. https://ceur-ws.org/Vol-3383/FLAIEC22_paper_9536.pdf

Hisan UK, Amri MM. ChatGPT and medical education: a double-edged sword. J Pedag Educ Sci. 2023;2(01):71–89. https://doi.org/10.13140/RG.2.2.31280.23043/1 .

Hopkins AM, Logan JM, Kichenadasse G, Sorich MJ. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectr. 2023. https://doi.org/10.1093/jncics/pkad010 .

Househ M, AlSaad R, Alhuwail D, Ahmed A, Healy MG, Latifi S, Sheikh J. Large Language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ. 2023;9: e48291. https://doi.org/10.2196/48291 .

Ilkka T. The impact of artificial intelligence on learning, teaching, and education. Minist de Educ. 2018. https://doi.org/10.2760/12297 .

Iqbal N, Ahmed H, Azhar KA. Exploring teachers’ attitudes towards using CHATGPT. Globa J Manag Adm Sci. 2022;3(4):97–111. https://doi.org/10.46568/gjmas.v3i4.163 .

Irfan M, Murray L, Ali S. Integration of Artificial intelligence in academia: a case study of critical teaching and learning in Higher education. Globa Soc Sci Rev. 2023;8(1):352–64. https://doi.org/10.31703/gssr.2023(viii-i).32 .

Jeon JH, Lee S. Large language models in education: a focus on the complementary relationship between human teachers and ChatGPT. Educ Inf Technol. 2023. https://doi.org/10.1007/s10639-023-11834-1 .

Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT—Reshaping medical education and clinical management. Pak J Med Sci. 2023. https://doi.org/10.12669/pjms.39.2.7653 .

King MR. A conversation on artificial intelligence, Chatbots, and plagiarism in higher education. Cell Mol Bioeng. 2023;16(1):1–2. https://doi.org/10.1007/s12195-022-00754-8 .

Kooli C. Chatbots in education and research: a critical examination of ethical implications and solutions. Sustainability. 2023;15(7):5614. https://doi.org/10.3390/su15075614 .

Kuhail MA, Alturki N, Alramlawi S, Alhejori K. Interacting with educational chatbots: a systematic review. Educ Inf Technol. 2022;28(1):973–1018. https://doi.org/10.1007/s10639-022-11177-3 .

Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. 2023. https://doi.org/10.1002/ase.2270 .

Li L, Subbareddy R, Raghavendra CG. AI intelligence Chatbot to improve students learning in the higher education platform. J Interconnect Netw. 2022. https://doi.org/10.1142/s0219265921430325 .

Limna P. A Review of Artificial Intelligence (AI) in Education during the Digital Era. 2022. https://ssrn.com/abstract=4160798

Lo CK. What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci. 2023;13(4):410. https://doi.org/10.3390/educsci13040410 .

Luo W, He H, Liu J, Berson IR, Berson MJ, Zhou Y, Li H. Aladdin’s genie or pandora’s box For early childhood education? Experts chat on the roles, challenges, and developments of ChatGPT. Early Educ Dev. 2023. https://doi.org/10.1080/10409289.2023.2214181 .

Meyer JG, Urbanowicz RJ, Martin P, O’Connor K, Li R, Peng P, Moore JH. ChatGPT and large language models in academia: opportunities and challenges. Biodata Min. 2023. https://doi.org/10.1186/s13040-023-00339-9 .

Mhlanga D. Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. Soc Sci Res Netw. 2023. https://doi.org/10.2139/ssrn.4354422 .

Neumann M, Rauschenberger M, Schön EM. "We need to talk about ChatGPT": the future of AI and higher education. 2023. https://doi.org/10.1109/seeng59157.2023.00010

Nolan B. Here are the schools and colleges that have banned the use of ChatGPT over plagiarism and misinformation fears. Business Insider . 2023. https://www.businessinsider.com

O’Leary DE. An analysis of three chatbots: BlenderBot, ChatGPT and LaMDA. Int J Intell Syst Account, Financ Manag. 2023;30(1):41–54. https://doi.org/10.1002/isaf.1531 .

Okoli C. A guide to conducting a standalone systematic literature review. Commun Assoc Inf Syst. 2015. https://doi.org/10.17705/1cais.03743 .

OpenAI. (2023). https://openai.com/blog/chatgpt

Perkins M. Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. J Univ Teach Learn Pract. 2023. https://doi.org/10.53761/1.20.02.07 .

Plevris V, Papazafeiropoulos G, Rios AJ. Chatbots put to the test in math and logic problems: a preliminary comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard. 2023. arXiv preprint. https://doi.org/10.48550/arxiv.2305.18618

Rahman MM, Watanobe Y (2023) ChatGPT for education and research: opportunities, threats, and strategies. Appl Sci 13(9):5783. https://doi.org/10.3390/app13095783

Ram B, Verma P. Artificial intelligence AI-based Chatbot study of ChatGPT, google AI bard and baidu AI. World J Adv Eng Technol Sci. 2023;8(1):258–61. https://doi.org/10.30574/wjaets.2023.8.1.0045 .

Rasul T, Nair S, Kalendra D, Robin M, de Oliveira Santini F, Ladeira WJ, Heathcote L. The role of ChatGPT in higher education: benefits, challenges, and future research directions. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.1.29 .

Ratnam M, Sharm B, Tomer A. ChatGPT: educational artificial intelligence. Int J Adv Trends Comput Sci Eng. 2023;12(2):84–91. https://doi.org/10.30534/ijatcse/2023/091222023 .

Ray PP. ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys Syst. 2023;3:121–54. https://doi.org/10.1016/j.iotcps.2023.04.003 .

Roumeliotis KI, Tselikas ND. ChatGPT and Open-AI models: a preliminary review. Future Internet. 2023;15(6):192. https://doi.org/10.3390/fi15060192 .

Rudolph J, Tan S, Tan S. War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.1.23 .

Ruiz LMS, Moll-López S, Nuñez-Pérez A, Moraño J, Vega-Fleitas E. ChatGPT challenges blended learning methodologies in engineering education: a case study in mathematics. Appl Sci. 2023;13(10):6039. https://doi.org/10.3390/app13106039 .

Sallam M, Salim NA, Barakat M, Al-Tammemi AB. ChatGPT applications in medical, dental, pharmacy, and public health education: a descriptive study highlighting the advantages and limitations. Narra J. 2023;3(1): e103. https://doi.org/10.52225/narra.v3i1.103 .

Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023. https://doi.org/10.1186/s13054-023-04380-2 .

Saqr M, López-Pernas S, Helske S, Hrastinski S. The longitudinal association between engagement and achievement varies by time, students’ profiles, and achievement state: a full program study. Comput Educ. 2023;199:104787. https://doi.org/10.1016/j.compedu.2023.104787 .

Saqr M, Matcha W, Uzir N, Jovanović J, Gašević D, López-Pernas S. Transferring effective learning strategies across learning contexts matters: a study in problem-based learning. Australas J Educ Technol. 2023;39(3):9.

Schöbel S, Schmitt A, Benner D, Saqr M, Janson A, Leimeister JM. Charting the evolution and future of conversational agents: a research agenda along five waves and new frontiers. Inf Syst Front. 2023. https://doi.org/10.1007/s10796-023-10375-9 .

Shoufan A. Exploring students’ perceptions of CHATGPT: thematic analysis and follow-up survey. IEEE Access. 2023. https://doi.org/10.1109/access.2023.3268224 .

Sonderegger S, Seufert S. Chatbot-mediated learning: conceptual framework for the design of Chatbot use cases in education. St. Gallen: Institute for Educational Management and Technologies, University of St. Gallen; 2022. https://doi.org/10.5220/0010999200003182.


Strzelecki A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact Learn Environ. 2023. https://doi.org/10.1080/10494820.2023.2209881 .

Su J, Yang W. Unlocking the power of ChatGPT: a framework for applying generative AI in education. ECNU Rev Educ. 2023. https://doi.org/10.1177/20965311231168423 .

Sullivan M, Kelly A, McLaughlan P. ChatGPT in higher education: Considerations for academic integrity and student learning. J ApplLearn Teach. 2023;6(1):1–10. https://doi.org/10.37074/jalt.2023.6.1.17 .

Szabo A. ChatGPT is a breakthrough in science and education but fails a test in sports and exercise psychology. Balt J Sport Health Sci. 2023;1(128):25–40. https://doi.org/10.33607/bjshs.v127i4.1233 .

Taecharungroj V. “What can ChatGPT do?” analyzing early reactions to the innovative AI chatbot on Twitter. Big Data Cognit Comput. 2023;7(1):35. https://doi.org/10.3390/bdcc7010035 .

Tam S, Said RB. User preferences for ChatGPT-powered conversational interfaces versus traditional methods. Biomed Eng Soc. 2023. https://doi.org/10.58496/mjcsc/2023/004 .

Tedre M, Kahila J, Vartiainen H. Exploration on how co-designing with AI facilitates critical evaluation of ethics of AI in craft education. In: Langran E, Christensen P, Sanson J, editors. Proceedings of Society for Information Technology and Teacher Education International Conference. 2023. pp. 2289–96.

Tlili A, Shehata B, Adarkwah MA, Bozkurt A, Hickey DT, Huang R, Agyemang B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn Environ. 2023. https://doi.org/10.1186/s40561-023-00237-x .

Uddin SMJ, Albert A, Ovid A, Alsharef A. Leveraging CHATGPT to aid construction hazard recognition and support safety education and training. Sustainability. 2023;15(9):7121. https://doi.org/10.3390/su15097121 .

Valtonen T, López-Pernas S, Saqr M, Vartiainen H, Sointu E, Tedre M. The nature and building blocks of educational technology research. Comput Hum Behav. 2022;128:107123. https://doi.org/10.1016/j.chb.2021.107123 .

Vartiainen H, Tedre M. Using artificial intelligence in craft education: crafting with text-to-image generative models. Digit Creat. 2023;34(1):1–21. https://doi.org/10.1080/14626268.2023.2174557 .

Ventayen RJM. OpenAI ChatGPT generated results: similarity index of artificial intelligence-based contents. Soc Sci Res Netw. 2023. https://doi.org/10.2139/ssrn.4332664 .

Wagner MW, Ertl-Wagner BB. Accuracy of information and references using ChatGPT-3 for retrieval of clinical radiological information. Can Assoc Radiol J. 2023. https://doi.org/10.1177/08465371231171125 .

Wardat Y, Tashtoush MA, AlAli R, Jarrah AM. ChatGPT: a revolutionary tool for teaching and learning mathematics. Eurasia J Math, Sci Technol Educ. 2023;19(7):em2286. https://doi.org/10.29333/ejmste/13272 .

Webster J, Watson RT. Analyzing the past to prepare for the future: writing a literature review. Manag Inf Syst Quart. 2002;26(2):3.

Xiao Y, Watson ME. Guidance on conducting a systematic literature review. J Plan Educ Res. 2017;39(1):93–112. https://doi.org/10.1177/0739456x17723971 .

Yan D. Impact of ChatGPT on learners in a L2 writing practicum: an exploratory investigation. Educ Inf Technol. 2023. https://doi.org/10.1007/s10639-023-11742-4 .

Yu H. Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front Psychol. 2023;14:1181712. https://doi.org/10.3389/fpsyg.2023.1181712 .

Zhu C, Sun M, Luo J, Li T, Wang M. How to harness the potential of ChatGPT in education? Knowl Manag ELearn. 2023;15(2):133–52. https://doi.org/10.34105/j.kmel.2023.15.008 .

Funding

The paper is co-funded by the Academy of Finland (Suomen Akatemia) Research Council for Natural Sciences and Engineering for the project Towards precision education: Idiographic learning analytics (TOPEILA), Decision Number 350560.

Author information

Authors and Affiliations

School of Computing, University of Eastern Finland, 80100, Joensuu, Finland

Yazid Albadarin, Mohammed Saqr, Nicolas Pope & Markku Tukiainen


Contributions

YA contributed to the literature search, data analysis, discussion, and conclusion, as well as to the writing, editing, and finalization of the manuscript. MS contributed to the study's design, conceptualization, acquisition of funding, project administration, allocation of resources, supervision, validation, literature search, and analysis of results, and to the writing, revision, and final approval of the manuscript. NP contributed to the results and discussion, provided supervision, and contributed to the writing, revisions, and final approval of the manuscript. MT contributed to the study's conceptualization, resource management, supervision, writing, and revision and approval of the manuscript.

Corresponding author

Correspondence to Yazid Albadarin .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Table 4.

The data synthesized in Table 4 were identified through a search of the databases ERIC, Scopus, Web of Knowledge, Dimensions.ai, and lens.org using the keywords "ChatGPT" and "education". Inclusion/exclusion criteria were then applied, and data extraction was performed using Creswell's [15] coding techniques to capture key information and identify common themes across the included studies.
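The screening step described above, keyword matching followed by inclusion/exclusion criteria, can be sketched as a simple filter. This is an illustrative sketch only: the record fields, criteria, and helper names below are hypothetical and are not the authors' actual tooling.

```python
# Illustrative sketch (not the authors' actual pipeline): screening
# retrieved records against simple, hypothetical inclusion criteria.
from dataclasses import dataclass


@dataclass
class Record:
    title: str
    abstract: str
    year: int
    is_empirical: bool  # hypothetical flag set during manual screening


KEYWORDS = ("chatgpt", "education")


def matches_search(rec: Record) -> bool:
    """Keep records whose title or abstract mentions both search keywords."""
    text = f"{rec.title} {rec.abstract}".lower()
    return all(k in text for k in KEYWORDS)


def include(rec: Record) -> bool:
    """Apply hypothetical inclusion criteria: empirical studies from 2022 on."""
    return matches_search(rec) and rec.is_empirical and rec.year >= 2022


records = [
    Record("ChatGPT in education", "An empirical study of ChatGPT use...", 2023, True),
    Record("Chatbots overview", "A history of conversational agents...", 2020, False),
]
included = [r for r in records if include(r)]
print(len(included))  # 1
```

In a real review, the manual steps (screening abstracts, coding themes) dominate; code like this only automates the mechanical keyword and date filters.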

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Albadarin, Y., Saqr, M., Pope, N. et al. A systematic literature review of empirical research on ChatGPT in education. Discov Educ 3, 60 (2024). https://doi.org/10.1007/s44217-024-00138-2


Received : 22 October 2023

Accepted : 10 May 2024

Published : 26 May 2024

DOI : https://doi.org/10.1007/s44217-024-00138-2


Keywords

  • Large language models
  • Educational technology
  • Systematic review


A systematic literature review of how cybersecurity-related behavior has been assessed

Information and Computer Security

ISSN : 2056-4961

Article publication date: 20 April 2023

Issue publication date: 30 October 2023

Purpose

Cybersecurity attacks on critical infrastructures, businesses and nations are rising and have reached the interest of mainstream media and the public's consciousness. Despite this increased awareness, humans are still considered the weakest link in the defense against an unknown attacker. Whatever the reason, whether naïve, unintentional or intentional behavior by a member of an organization, the resulting incident can have a considerable impact. A security policy, with guidelines for best practices and rules, should guide the behavior of the organization's members. However, this is often not the case. This paper aims to provide answers to how cybersecurity-related behavior is assessed.

Design/methodology/approach

Research questions were formulated, and a systematic literature review (SLR) was performed by following the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. The SLR initially identified 2,153 articles, and the paper reviews and reports on 26 articles.

Findings

The assessment of cybersecurity-related behavior can be classified into three components, namely, data collection, measurement scale and analysis. The findings show that subjective measurements from self-assessment questionnaires are the most frequently used method. Measurement scales are often composed based on existing literature and adapted by the researchers. Partial least squares analysis is the most frequently used analysis technique. Even though useful insight and noteworthy findings regarding possible differences between manager and employee behavior have appeared in some publications, conclusive answers to whether such differences exist cannot be drawn.

Research limitations/implications

Research gaps have been identified that indicate areas of interest for future work. These include the development and employment of methods for reducing subjectivity in the assessment of cybersecurity-related behavior.

Originality/value

To the best of the authors’ knowledge, this is the first SLR on how cybersecurity-related behavior can be assessed. The SLR analyzes relevant publications and identifies current practices as well as their shortcomings, and outlines gaps that future research may bridge.

Keywords

  • Cybersecurity
  • Human behavior
  • Assessment process

Kannelønning, K. and Katsikas, S.K. (2023), "A systematic literature review of how cybersecurity-related behavior has been assessed", Information and Computer Security , Vol. 31 No. 4, pp. 463-477. https://doi.org/10.1108/ICS-08-2022-0139

Emerald Publishing Limited

Copyright © 2023, Kristian Kannelønning and Sokratis K. Katsikas.

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode

1. Introduction

The importance of information systems (IS) security has increased as the number of unwanted incidents has continued to rise over recent decades. Organizations can take several avenues to secure their IS. Technical solutions like whitelisting, firewalls and antivirus software enhance security, but research has shown that when people within the organization do not follow policies and guidelines, these technical safeguards are in vain.

1.1 Aims of the paper

Of the 26 articles included in this review, 10 used some variation of the phrase "humans are the weakest link in cybersecurity" in either the abstract or the introduction. All articles cite multiple authors, accumulating a significant body of previous work making the same claim. One might agree with Kruger et al. (2020) that it is common knowledge that humans are the weakest link in information security.

Given the premise that humans are the weakest link and the acknowledgment that technology cannot be the single solution for security ( McCormac et al. , 2017 ), research should investigate how organizations can assess the cybersecurity-related behavior of their employees. Identifying, evaluating and summarizing the methods and findings of all relevant literature resources addressing the issue, thereby systematizing the available knowledge and making it more accessible to researchers, while also identifying relevant research gaps, are the aims of this systematic literature review (SLR).

1.2 Background

Recent years have shown that cyberattacks are a global issue; one example is the extensive power outage causing a blackout across Argentina and Uruguay in 2019 (Kilskar, 2020). In January 2018, the medical records of nearly 3 million people, roughly 50% of the Norwegian population, were compromised by a cyberattack. Threats range from viruses, worms and trojan horses to denial of service, botnets, man-in-the-middle attacks and zero-day exploits (Pirbhulal et al., 2021). These threats carry technical terminology that is hard to comprehend for employees without a technical background. Moreover, most information security issues are complicated, and fully understanding them requires advanced technical knowledge.

According to the ISO/IEC 27002 standard, an organization's information security policy should contain the following (ISO, 2022):

definition of information security;

information security objectives or the framework for setting information security objectives;

principles to guide all activities relating to information security;

commitment to satisfy applicable requirements related to information security;

commitment to continual improvement of the information security management system;

assignment of responsibilities for information security management to defined roles; and

procedures for handling exemptions and exceptions.

The extent to which an employee is aware of and complies with information security policy defines the extent of their information security awareness (ISA). ISA is critical in mitigating the risks associated with cybersecurity and is defined by two components, namely, understanding and compliance. Compliance is the employees' commitment to follow best-practice rules defined by the organization (Reeves et al., 2020). Ajzen (1991) defines a person's intention to comply as the individual's motivation to perform a described behavior; the intention to comply thus captures the motivational factors that influence behavior. As a general rule, the stronger the intention to perform a behavior, the more likely it is to be performed.

Several frameworks or theories can be applied to research human behavior. For cybersecurity, behavior can be viewed through lenses and theories borrowed from disciplines such as criminology (e.g. deterrence theory), psychology (e.g. theory of planned behavior) and health psychology (e.g. protection motivation theory) ( Moody et al. , 2018 ; Herath and Rao, 2009 ). The most commonly used models in the context of cybersecurity are the general deterrence theory, the theory of planned behavior and the protection motivation theory ( Alassaf and Alkhalifah, 2021 ).

Staff attitudes and awareness can themselves pose a security problem. In such settings, it is relevant to consider why the situation exists and what can be done about it. In many cases, a key reason will be the limited extent to which security is understood, accepted and practiced across the organization (Furnell and Thomson, 2009). As a mitigating step toward compliance, decision-makers need guidance on achieving compliance and discouraging misuse when developing information security policies (Sommestad et al., 2014). The ability to assess behavior is therefore a prerequisite for decision-makers in their quest to develop the organization's information security policies. The development of policies and the responsibility for implementing them lie within the purview of management (Höne and Eloff, 2002). Accordingly, understanding the differences in cybersecurity-related behavior between management and employees will benefit the development of more secure organizations.

1.3 Structure of the paper

The rest of this paper is organized as follows: Section 2 describes the methodology for conducting the SLR; the research questions; the record search process; and the assessment criteria. In Section 3, the results and the findings are presented. A discussion of the findings is presented in Section 4. Section 5 summarizes our conclusions and outlines directions for future research.

2. Methodology

This section discusses the fundamental stages of conducting an SLR. The SLR constructs follow the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Page et al., 2021), together with Fink (2019) and Weidt and Silva (2016).

The foremost step is to investigate whether a similar review has already been conducted, as searching for and studying other reviews helps refine both research questions and search strings. The search did not discover any similar reviews. Keywords, search strings and research questions were collected and categorized in a literature index tool and used to optimize search strings and to verify that this review's chosen research questions are relevant and valuable to the body of knowledge.

A research review is explicit about the research questions, search strategy, inclusion and exclusion criteria, data extraction method and steps taken for analysis. Research reviews are, unlike subjective reviews, comprehensible and easily reproducible ( Fink, 2019 ). The remainder of this section elaborates on the components of the performed SLR.

2.1 Research questions

RQ1. How is cybersecurity-related behavior assessed?

RQ2. Are there differences between manager and employee behavior in a cybersecurity context?

2.2 Record searching process

Various search strings were used in this SLR, depending on the database. The keywords were kept unchanged, but because each database's syntax differs, the search strings have minor differences. This study includes the following databases: Scopus, IEEE, Springer, Engineering Village, ScienceDirect and ACM. The keywords (exact and stemmed forms) were: cyber, security, information, policy, compliance, measure, behavior. As an example, the following search was used in Scopus: TITLE-ABS-KEY ((information AND security AND policy OR information AND security AND compliance OR policy AND compliance) AND (information AND security AND behavior)) AND PUBYEAR > 2001. To increase the precision of the searches, title, abstract and keywords were used as limiters in all the databases.
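For illustration, the Scopus query above can be reassembled from its keyword groups; this is a hypothetical sketch (the helper functions are not part of any database API):

```python
# Hypothetical sketch reassembling the Scopus query quoted above from its
# keyword groups; helper names are illustrative only.
def and_join(terms):
    return " AND ".join(terms)

or_block = " OR ".join(and_join(g) for g in [
    ["information", "security", "policy"],
    ["information", "security", "compliance"],
    ["policy", "compliance"],
])
query = (f"TITLE-ABS-KEY (({or_block}) "
         f"AND ({and_join(['information', 'security', 'behavior'])})) "
         f"AND PUBYEAR > 2001")
print(query)
```

The same keyword groups can then be re-joined with each database's own boolean syntax while keeping the terms themselves unchanged.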

2.3 Assessment criteria

The exclusion criteria were:

studies from organization reports, guidelines, technical opinion reports;

research design – exclude reviews, editorials and testimonials, as using secondary data (data from other reviews, etc.) would make this review a tertiary one; and

nonresearch literature.

The inclusion criteria were:

written in English;

published in 2001–2022;

original studies using theoretical or empirical data; and

studies published in Journals, Conference Proceedings and books/book sections.

2.4 Analysis of included articles

The results presented in this review are based on the abstraction of data from the articles. The descriptive synthesized results are based on the reviewers' experience and the quality and content of the available literature (Fink, 2019). All results are based on an abstraction of data except for those in Section 3.3.4, where the NVIVO software was used to uncover the most frequently used words in a text compiled from the analysis section of every article in the review.

3. Results

3.1 Identification, screening, eligibility and inclusion mechanism

The database searches returned 2,153 records. The first step before any analysis is to remove duplicates; after doing so, 1,611 unique records remained. Following the recommendation of Weidt and Silva (2016), the first analysis step is screening by title and abstract. A total of 1,517 records were found to be irrelevant for this review, leaving 94 articles for additional screening. The (optional) second screening, depending on the number of articles, involves an analysis of each article's introduction and conclusion; for this study, an analysis of the method section was also included in this step. This narrowed the number down to 28, of which another 2 articles were excluded because of a lack of empirical data and irrelevance to the topic being reviewed, leaving 26 articles for complete text analysis. Figure 1, adapted from Page et al. (2021), depicts the screening process.
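The screening flow described above can be recomputed from the reported counts as a quick sanity check (all numbers are taken from the text):

```python
# Illustrative recomputation of the PRISMA-style screening flow described
# above; counts are taken directly from the text.
identified = 2153
after_dedup = 1611                           # unique records
after_title_abstract = after_dedup - 1517    # title/abstract screening
after_second_screening = 28                  # intro/conclusion/method screening
included = after_second_screening - 2        # 2 further exclusions

print(identified - after_dedup)   # 542 duplicates removed
print(after_title_abstract)       # 94
print(included)                   # 26 articles for full-text analysis
```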

3.2 Trend and classification of included studies

Of the 26 selected articles, 19 were published in journals, and the remaining 7 in conferences, or 73% and 27%, respectively (see Figure 2 ). The figure also demonstrates the increased interest in the subject in the past two years.

3.3 Findings

3.3.1 How is cybersecurity-related behavior assessed?

Of the selected 26 articles in this review, 24 or 92% provide insight into how cybersecurity-related behavior is assessed. A three-step process emerges as the way to assess such behavior: First, information from subjects needs to be collected. This is referred to as data collection . Second, a measurement scale is deployed to ensure that the data collected is relevant and encompasses the research topic. The final step is the data analysis.

3.3.2 Data collection.

Two forms of data can be collected, qualitative or quantitative. Both types can be subjective or objective; neither is exclusive of the other. The most common way to collect subjective data is a questionnaire whose answers fit a five- or seven-point Likert scale. Within a survey, questions may be asked that are subjective, biased or misleading when viewed alone, but the results can easily be used quantitatively (O'Brien, 1999). Given the ubiquity of qualitative data, the interest in quantifying it, assigning "good" numerical values and making the data amenable to more meaningful analysis has been a research topic since the first quantification methods began to appear around 1940 (Young, 1981).
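As a minimal sketch of how such Likert responses become quantitative data, five-point labels can be mapped to numbers and aggregated (the labels and responses below are invented for illustration):

```python
# Hypothetical five-point Likert coding; the labels and responses are
# invented for illustration only.
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

responses = ["agree", "strongly agree", "neutral", "agree"]
scores = [SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(mean_score)  # 4.0
```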

Subjective data can lead to inaccurate or skewed results. In contrast, objective data are free from the subject’s opinions. This can be, for example, the number of attacks prevented or the number of employees clicking the link in a phishing campaign ( Black et al. , 2008 ).

The SLR revealed six types of data collection methods, namely, self-assessment questionnaire (SAQ); interview; vignette; experiment with vignettes; affective computing and sentiment analysis; and clicking data from a phishing campaign. An overview of all articles and the data collection method used in each is presented in Table 1 .

The most prominent form of data collection is self-assessment (SA). This subjective data collection method is defined by Boekaerts (1991) as a form of appraisal that compares one’s behavioral outcomes to an internal or external standard. In total, 22 of the 24 articles used SA as the primary data collection method. The most common way to collect data is through a questionnaire (SAQ). A total of 17 or 71% of the articles used an SAQ as their sole method for data collection.

Of the remaining five articles with results stemming from subjective data, two used vignettes in combination with a regular SAQ. Vignettes are hypothetical scenarios that the subject reads and forms an opinion about based on the information given. Barlow et al. (2013) performed a factorial survey method (FSM) experiment with vignettes, inserting randomly manipulated elements into the scenario sentences instead of static text. Both regular questionnaires and vignettes use the same Likert scale.

The average number of respondents in the included papers is n = 356, with 52% males and 48% females. The most common way to deploy the SAQ is through online Web platforms, e.g. a by-invitation-only webpage at a market research company. Pen and paper were only used twice. Market research companies and management distribution are the two most used recruitment strategies. The two methods are used in 73% of the papers, or 84% of the time, if articles that did not specify recruitment are excluded.

Two studies used interviews to collect information: one used interviews with an SAQ, and the other used interviews as the sole input. Interviews provide in-depth information and are suitable for uncovering the “how” and “why” of critical events as well as the insights reflecting the participants’ relativist perspectives ( Yin, 2018 ).

Only two studies used objective, quantitative data: Kruger et al. (2020) used affective computing and sentiment analysis. With the help of a deep learning neural network, the study accurately classified opinions as positive, neutral or negative based on facial expressions. Jalali et al. (2020) used a phishing campaign in conjunction with an SAQ to investigate whether there were any differences between intention to comply and actual compliance.

3.3.3 Measurement scale.

A measurement scale ensures that the collected data encompass a topic or subject and do not miss any crucial facets. The role of a measurement scale is to ensure that the data collected is holistic and reproducible. Researchers can use predefined scales developed by others or self-developed ones. Those of the reviewed articles that use the latter form of scale are often not fully transparent about the content of the scale.

This SLR shows that 13 of the 22 articles that used a measurement scale used an unspecified scale. The most frequently (in seven papers) used specified scale is the Human Aspect of Information Security Questionnaire (HAIS-Q), developed by Parsons et al. (2014) . When used in conjunction with other scales, HAIS-Q is often the most prominent.

Several pitfalls must be considered when researchers select their measurement scale. If developing an unspecified scale, found to be the most common alternative in this SLR, researchers should be mindful of length, wording, familiarity with the topic, natural sequence of time and a logical order of questions (Fink, 2015). The length of the questionnaire is especially significant: how much time must respondents spend answering the survey? Another critical element when designing a measurement scale instead of using an existing one is validity and reliability; proper pilot testing is required when choosing not to use an already-validated survey (Fink, 2015).

The HAIS-Q is designed to measure information security awareness in the workplace (McCormac et al., 2017). The Knowledge, Attitude and Behavior (KAB) model is at the center of the HAIS-Q. The hypothesis is that as computer users gain more knowledge, their attitude toward policies improves, translating into more risk-averse behavior (Pollini et al., 2021). The HAIS-Q comprises 63 questions covering 7 focus areas (internet use, email use, social networking site use, password management, incident reporting, information handling and mobile computing). Each focus area is divided equally among the KAB components, resulting in 21 questions per KAB component spread across the seven focus areas. For a detailed overview of the other scales used in conjunction with the HAIS-Q, see the last column in Table 1.
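The HAIS-Q structure described above can be sketched to make the arithmetic explicit (the focus areas and question counts are from the text; the code itself is illustrative):

```python
# Sketch of the HAIS-Q structure as described in the text: 7 focus areas,
# 3 KAB components, 63 questions in total.
focus_areas = ["internet use", "email use", "social networking site use",
               "password management", "incident reporting",
               "information handling", "mobile computing"]
kab_components = ["knowledge", "attitude", "behavior"]

TOTAL_QUESTIONS = 63
per_component = TOTAL_QUESTIONS // len(kab_components)       # 21 per KAB component
per_area_per_component = per_component // len(focus_areas)   # 3 per area per component

print(per_component, per_area_per_component)
```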

The KAB model that underpins the HAIS-Q has been criticized by researchers when used in, e.g. health and climate research. Both Parsons et al. (2014) and McCormac et al. (2016) cite McGuire (1969), who suggests that the problem is not with the model itself but with how it is applied. Parsons et al. (2014) highlight essential differences between environmental and health studies and the field of information security: much ambiguity and unclear or contradictory information exists in the former two fields, while most organizations have an information security policy, either written or informal, indicating what is expected from employees (Parsons et al., 2014). Barlow et al. (2013) advocate using scenarios instead of direct questions, as in the HAIS-Q, because it is difficult to assess actual deviant behavior by observation or direct questioning.

Another critique of the HAIS-Q is the length of the questionnaire: with 63 questions, respondents might lose interest, be inattentive to the questions and sometimes give false answers (Velki et al., 2019). In contrast, Parsons et al. (2017) show that the HAIS-Q is a reliable and validated measurement scale and accommodates some of the concerns raised by Fink (2015).

Pollini et al. (2021) advise that, when using one, the questionnaire only considers the individual level and may not capture a holistic and accurate measurement of the organizations. Therefore, in their study, HAIS-Q questionnaires were deployed at the individual level, and interviews were used to assess the organizational level.

3.3.4 Analysis.

To uncover how the included articles analyzed their results, NVIVO, a qualitative data analysis software, was used to identify the most frequently used words in each article. A document compiled from each article's analysis section was analyzed in NVIVO. All articles use some form of validation and statistical verification of the collected data. The word count provides both a structured presentation and an unbiased account of how often keywords affiliated with the technical part of the analysis are used. The NVIVO results show that partial least squares (PLS) is the most frequently used method. PLS was first introduced by Herman Wold in 1975; it can be preferable where constructs are measured primarily by formative indicators, e.g. in managerial research, or when the sample size is small (Haenlein and Kaplan, 2004). This result is also in line with the finding in Kurowski (2019): "Most of policy compliance research uses partial least squares, regression modeling or correlation analyses."
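The word-frequency step performed in NVIVO can be approximated with a few lines of standard-library Python; this is a sketch of the general technique, not the authors' actual procedure, and the sample text and stop-word list are invented:

```python
from collections import Counter
import re

def top_terms(text, stopwords, k=3):
    # Lowercase, tokenize on letter runs, drop stop words, count frequencies.
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in stopwords).most_common(k)

sample = ("PLS results show the model fits; PLS path coefficients "
          "and regression estimates were significant.")
stop = {"the", "and", "were", "show"}
print(top_terms(sample, stop))
```

Applied to a document compiled from all analysis sections, the most frequent remaining terms reveal which statistical techniques dominate.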

3.3.4.1 Are there differences between manager and employee intention and behavior in a cybersecurity context?

Only five articles, or 19%, provide insight into the second research question. However, none provides a clear-cut response to this research question. There is a consensus in all five articles that organizational culture is a cornerstone for security and policy-compliant behavior ( Reeves et al. , 2020 ; Hwang et al. , 2017 ; Alzahrani, 2021 ; Parsons et al. , 2015 ; Li et al. , 2019 ).

Among the articles, there is also broad agreement that peer behavior, the influence that peers have on our behavior, is vital for a positive cybersecurity outcome (Li et al., 2019; Alzahrani, 2021; Hwang et al., 2017). Peer- and policy-compliant behavior can only be achieved when the organization has a positive cybersecurity culture. Organizational culture often develops from the top; hence, the development and continued improvement of culture falls to management (Li et al., 2019; Reeves et al., 2020). One interesting finding in the context of developing or harnessing a security culture is that managers show much lower information security awareness; Reeves et al. (2020) therefore recommend that future training be targeted at management. This small paradox is worth dwelling on, given that culture is built from the top.

All the articles provide reasons for noncompliance in their findings. In a hectic environment, employee workload has been shown to impact compliance negatively (Jalali et al., 2020). Connected to workload are work goals: security draws the shortest straw when goals and security do not align, and if security is viewed as a hindrance, noncompliant behavior will arise (Reeves et al., 2020; Hwang et al., 2017; Alzahrani, 2021; Parsons et al., 2015). Likewise, when employees lack knowledge or have not been given sufficient information about the organization's security policies, compliant behavior suffers (Hwang et al., 2017; Alzahrani, 2021; Parsons et al., 2015; Li et al., 2019).

4. Discussion

The findings of this SLR show that subjective data dominate the measurement of cybersecurity-related behavior: over 90% of the included articles use subjective data, and only one article relies solely on objective measurements. The availability and ease of use of subjective methods might be the reason. A questionnaire or an interview can be administered without much cost or planning, whereas objective methods, e.g. a phishing campaign, require more resources.

However, the use of subjective data can lead to biased responses from the subjects, and this bias can be problematic. According to Kurowski (2019), "For instance, survey reports of church attendance and rates of exercise are found to be double the actual frequency when self-reported." Almost all articles address the issue of biased measurement; many refer to Podsakoff et al. (2003) and the recommendation therein to assure respondents that their identity will be kept anonymous. Anonymization thus appears to be an accepted way for several researchers to remove the risk of bias. However, as Kurowski (2019) finds, bias does exist in current research. In that paper, two questionnaires were used to test for biased responses: one using standard, straightforward compliance questions and one using vignettes (see Table 1). Kurowski (2019) found that generic questionnaires may capture biased policy compliance measures. If an individual reports policy compliance on a literature-based scale, it may mean any of the following: the individual is indeed compliant; the individual does not know the policy and does not act compliantly; or the individual thinks they are compliant with the policy because they behave securely but does not know the policy. This does not imply that existing research fails to measure policy compliance entirely, but it fails to measure it reliably (Kurowski, 2019).

Jalali et al. (2020) included both objective and subjective measurements, comparing employees' intention to comply with their actual compliance by examining whether the employees had clicked the link in a phishing campaign. They found no significant relationship between the intention to comply and the actual behavior. This result is not in line with previous studies that used self-reported data, a method that leaves room for socially desirable answers (Podsakoff et al., 2003) and allows previous answers to influence later ones (Jalali, 2014).

Even the HAIS-Q, the single most used questionnaire in this SLR (used seven times), is not immune to biased responses. Although the questionnaire was validated and tested by Parsons et al. (2017), McCormac et al. (2017) showed that social desirability bias can be present. Further research is therefore needed to exclude biased responses from the HAIS-Q.

5. Conclusion

This SLR, which started with 2,153 records that were reduced through several analysis steps to 26 articles, provides insights into the predefined research questions.

Setting aside the preparatory work done before a study is performed, the assessment of behavior can be classified into three components: data collection, measurement scale and analysis. This research found that, in the context of cybersecurity, subjective data are collected to a much larger extent than objective data, with the online SAQ as the most prominent data collection method. Measurement scales are often composed on the basis of existing literature and adapted by the researchers. The most commonly used questionnaire is the HAIS-Q, developed by Parsons et al. (2014). Finally, an analysis is performed to test the internal and external validity of the collected data; PLS is the most frequent analysis technique in the selected articles. Although a clear path to assessing behavior is uncovered, the prevailing self-assessment method can produce biased data. Thus, future research should address the problem of objectively assessing cybersecurity-related behavior and the factors affecting it.

The second research question, i.e. whether there exist differences between manager and employee behavior, was not conclusively answered. Of the relatively small number of articles, several provide insights and noteworthy findings but not conclusive answers to this research question. In light of the significance of the matter for improving the cybersecurity culture in an organization, this constitutes another interesting research gap.

Future research should bridge the above research gaps, and studies should include employees and management from the same organization. This will require more planning and coordination than simply deploying a questionnaire online. Because subjects come from the same organization, extra effort in anonymizing personal data must be in place, and the uncertainty surrounding anonymization and the associated risk of biased responses must be mitigated. This can be achieved by, e.g. a hybrid method combining objective and subjective data collection, such as self-assessment questionnaires and phishing campaigns. Future research should also collect holistic data within a market, country, segment or similar, as research into compliance is context-dependent (Jalali et al., 2020).

Figure 1 The SLR screening process

Figure 2 Trend and classification of included studies

Table 1 Overview of reviewed articles

Ajzen, I. (1991), "The theory of planned behavior", Organizational Behavior and Human Decision Processes, Vol. 50 No. 2, pp. 179-211.

Alassaf, M. and Alkhalifah, A. (2021), "Exploring the influence of direct and indirect factors on information security policy compliance: a systematic literature review", IEEE Access, Vol. 9, pp. 162687-162705.

Al-Omari, A., El-Gayar, O. and Deokar, A. (2012), "Security policy compliance: user acceptance perspective", 2012 45th Hawaii International Conference on System Sciences, IEEE, pp. 3317-3326.

Alzahrani, L. (2021), "Factors impacting users' compliance with information security policies: an empirical study", International Journal of Advanced Computer Science and Applications, Vol. 12 No. 10.

Ameen, N., Tarhini, A., Shah, M.H., Madichie, N., Paul, J. and Choudrie, J. (2021), "Keeping customers' data secure: a cross-cultural study of cybersecurity compliance among the Gen-Mobile workforce", Computers in Human Behavior, Vol. 114, p. 106531, doi: 10.1016/j.chb.2020.106531.

Barlow, J.B., Warkentin, M., Ormond, D. and Dennis, A.R. (2013), "Don't make excuses! Discouraging neutralization to reduce IT policy violation", Computers and Security, Vol. 39, pp. 145-159, doi: 10.1016/j.cose.2013.05.006.

Black, P.E., Scarfone, K. and Souppaya, M. (2008), "Cyber security metrics and measures", Wiley Handbook of Science and Technology for Homeland Security, Wiley, NH, pp. 1-15.

Boekaerts, M. (1991), "Subjective competence, appraisals and self-assessment", Learning and Instruction, Vol. 1 No. 1, pp. 1-17, doi: 10.1016/0959-4752(91)90016-2.

Chen, Y., Galletta, D.F., Lowry, P.B., Luo, X., Moody, G.D. and Willison, R. (2021a), "Understanding inconsistent employee compliance with information security policies through the lens of the extended parallel process model", Information Systems Research, Vol. 32 No. 3, pp. 1043-1065, doi: 10.1287/isre.2021.1014.

Chen, Y., Xia, W. and Cousins, K. (2021b), "Voluntary and instrumental information security policy compliance: an integrated view of prosocial motivation, self-regulation and deterrence", Computers and Security, Vol. 113, p. 102568, doi: 10.1016/j.cose.2021.102568.

Cindana, A. and Ruldeviyani, Y. (2018), "Measuring information security awareness on employee using HAIS-Q: case study at XYZ firm", 2018 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 289-294.

Fink, A. (2015), How to Conduct Surveys: A Step-by-Step Guide, Sage Publications, London.

Fink, A. (2019), Conducting Research Literature Reviews: From the Internet to Paper, Sage Publications, London.

Furnell, S. and Thomson, K.L. (2009), "From culture to disobedience: recognising the varying user acceptance of IT security", Computer Fraud and Security, Vol. 2009 No. 2, pp. 5-10, doi: 10.1016/S1361-3723(09)70019-3.

Gangire, Y., Da Veiga, A. and Herselman, M. (2020), "Information security behavior: development of a measurement instrument based on the self-determination theory", International Symposium on Human Aspects of Information Security and Assurance, Springer, Cham, pp. 144-157.

Goo, J., Yim, M. and Kim, D.J. (2014), "A path to successful management of employee security compliance: an empirical study of information security climate", IEEE Transactions on Professional Communication, Vol. 57 No. 4, pp. 286-308, doi: 10.1109/TPC.2014.2374011.

Guhr, N., Lebek, B. and Breitner, M.H. (2018), "The impact of leadership on employees' intended information security behaviour: an examination of the full-range leadership theory", Information Systems Journal, Vol. 29 No. 2, pp. 340-362, doi: 10.1111/isj.12202.

Haenlein, M. and Kaplan, A.M. (2004), "A beginner's guide to partial least squares analysis", Understanding Statistics, Vol. 3 No. 4, pp. 283-297.

Herath, T. and Rao, H.R. (2009), "Protection motivation and deterrence: a framework for security policy compliance in organisations", European Journal of Information Systems, Vol. 18 No. 2, pp. 106-125.

Höne, K. and Eloff, J.H.P. (2002), "Information security policy – what do international information security standards say?", Computers and Security, Vol. 21 No. 5, pp. 402-409, doi: 10.1016/S0167-4048(02)00504-7.

Hwang, I., Kim, D., Kim, T. and Kim, S. (2017), "Why not comply with information security? An empirical approach for the causes of non-compliance", Online Information Review, Vol. 41 No. 1, pp. 2-18.

International Standardization Organization (2022), "ISO/IEC 27002:2022, information security, cybersecurity and privacy protection – information security controls".

Jalali, M.S. (2014), "How individuals weigh their previous estimates to make a new estimate in the presence or absence of social influence", International Social Computing, Behavioral-Cultural Modeling and Prediction, Springer, Cham, pp. 67-74.

Jalali, M.S., Bruckes, M., Westmattelmann, D. and Schewe, G. (2020), "Why employees (still) click on phishing links: investigation in hospitals", Journal of Medical Internet Research, Vol. 22 No. 1, p. e16775, doi: 10.2196/16775.

Kilskar, S.S. (2020), "Socio-technical perspectives on cyber security and definitions of digital transformation – a literature review", Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference, Venice.

Kruger, H., Du Toit, T., Drevin, L. and Maree, N. (2020), "Acquiring sentiment towards information security policies through affective computing", 2020 2nd International Multidisciplinary Information Technology and Engineering Conference (IMITEC), 25-27 Nov. 2020, pp. 1-6.

Kurowski, S. (2019), "Response biases in policy compliance research", Information and Computer Security, Vol. 28 No. 3, pp. 445-465, doi: 10.1108/ICS-02-2019-0025.

Li, L., He, W., Xu, L., Ash, I., Anwar, M. and Yuan, X. (2019), "Investigating the impact of cybersecurity policy awareness on employees' cybersecurity behavior", International Journal of Information Management, Vol. 45, pp. 13-24, doi: 10.1016/j.ijinfomgt.2018.10.017.

Liu, C., Wang, N. and Liang, H. (2020), "Motivating information security policy compliance: the critical role of supervisor-subordinate guanxi and organizational commitment", International Journal of Information Management, Vol. 54, p. 102152, doi: 10.1016/j.ijinfomgt.2020.102152.

McCormac, A., Calic, D., Butavicius, M.A., Parsons, K., Zwaans, T. and Pattinson, M.R. (2017), "A reliable measure of information security awareness and the identification of bias in responses", Australasian Journal of Information Systems, Vol. 21.

McCormac, A., Zwaans, T., Parsons, K., Calic, D., Butavicius, M. and Pattinson, M. (2016), "Individual differences and information security awareness", Computers in Human Behavior, Vol. 69, pp. 151-156, doi: 10.1016/j.chb.2016.11.065.

McGuire, W. (1969), The Nature of Attitudes and Attitude Change, Vol. 3, Addison-Wesley, Reading.

Merhi, M. and Ahluwalia, P. (2019), "Examining the impact of deterrence factors and norms on resistance to information systems security", Computers in Human Behavior, Vol. 92, pp. 37-46, doi: 10.1016/j.chb.2018.10.031.

Moody, G.D., Siponen, M. and Pahnila, S. (2018), "Toward a unified model of information security policy compliance", MIS Quarterly, Vol. 42 No. 1.

Niemimaa, M., Laaksonen, A.E. and Harnesk, D. (2013), "Interpreting information security policy outcomes: a frames of reference perspective", 2013 46th Hawaii International Conference on System Sciences, IEEE, pp. 4541-4550.

O'Brien, D.P. (1999), "Quantitative vs subjective", Business Measurements for Safety Performance, CRC Press, Boca Raton, p. 51.

Page, M., McKenzie, J., Bossuyt, P., Boutron, I., Hoffmann, T., Mulrow, C., Shamseer, L., Tetzlaff, J., Akl, E., Brennan, S., Chou, R., Glanville, J., Grimshaw, J., Hróbjartsson, A., Lalu, M., Li, T., Loder, E., Mayo-Wilson, E., McDonald, S. and Moher, D. (2021), "The PRISMA 2020 statement: an updated guideline for reporting systematic reviews", BMJ, Vol. 372, p. n71, doi: 10.1136/bmj.n71.

Parsons, K., Calic, D., Pattinson, M., Butavicius, M., McCormac, A. and Zwaans, T. (2017), "The human aspects of information security questionnaire (HAIS-Q): two further validation studies", Computers and Security, Vol. 66, pp. 40-51.

Parsons, K., McCormac, A., Butavicius, M., Pattinson, M. and Jerram, C. (2014), "Determining employee awareness using the human aspects of information security questionnaire (HAIS-Q)", Computers and Security, Vol. 42, pp. 165-176, doi: 10.1016/j.cose.2013.12.003.

Parsons , K.M. , Young , E. , Butavicius , M.A. , McCormac , A. , Pattinson , M.R. and Jerram , C. ( 2015 ), “ The influence of organizational information security culture on information security decision making ”, Journal of Cognitive Engineering and Decision Making , Vol. 9 No. 2 , pp. 117 - 129 , doi: 10.1177/1555343415575152 .

Pirbhulal , S. , Gkioulos , V. and Katsikas , S. ( 2021 ), “ A systematic literature review on RAMS analysis for critical infrastructures protection ”, International Journal of Critical Infrastructure Protection , Vol. 33 , p. 100427 .

Podsakoff , P.M. , MacKenzie , S.B. , Lee , J.-Y. and Podsakoff , N.P. ( 2003 ), “ Common method biases in behavioral research: a critical review of the literature and recommended remedies ”, Journal of Applied Psychology , Vol. 88 No. 5 , pp. 879 - 903 .

Pollini , A. , Callari , T.C. , Tedeschi , A. , Ruscio , D. , Save , L. , Chiarugi , F. and Guerri , D. ( 2021 ), “ Leveraging human factors in cybersecurity: an integrated methodological approach ”, Cognition, Technology and Work , Vol. 24 No. 2 , pp. 371 - 390 , doi: 10.1007/s10111-021-00683-y .

Reeves , A. , Parsons , K. and Calic , D. ( 2020 ), “ Whose risk Is it anyway: how do risk perception and organisational commitment affect employee information security awareness? ”, International Conference on Human-Computer Interaction , Springer , Cham , pp. 232 - 249 .

Sommestad , T. , Hallberg , J. , Lundholm , K. and Bengtsson , J. ( 2014 ), “ Variables influencing information security policy compliance: a systematic review of quantitative studies ”, Information Management and Computer Security , Vol. 22 No. 1 , pp. 42 - 75 .

Velki , T. , Mayer , A. and Norget , J. ( 2019 ), “ Development of a new international behavioral-cognitive internet security questionnaire: preliminary results from Croatian and German samples ”, 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) , IEEE , pp. 1209 - 1212 .

Weidt , F. and Silva , R. ( 2016 ), “ Systematic literature review in computer science-a practical ‘guide ”, Relatórios Técnicos Do DCC/UFJF , Vol. 1 No. 8 , doi: 10.13140/RG.2.2.35453.87524 .

Yin , R.K. ( 2018 ), Case Study Research and Applications , 6th ed ., Sage , London .

Young , F.W. ( 1981 ), “ Quantitative analysis of qualitative data ”, Psychometrika , Vol. 46 No. 4 , pp. 357 - 388 , doi: 10.1007/BF02293796 .

Further readings

Bulgurcu , B. , Cavusoglu , H. and Benbasat , I. ( 2010 ), “ Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness ”, MIS Quarterly , Vol. 34 No. 3 , pp. 523 - 548 .

Pahnila , S. , Siponen , M. and Mahmood , A. ( 2007 ), “ Employees’ behavior towards IS security policy compliance ”, 2007 40th Annual HI International Conference on System Sciences (HICSS’07) , IEEE , pp. 156b - 156b .

Acknowledgements

This work was supported by the Research Council of Norway under project no. 323131, “How to improve Cyber Security performance by researching human behavior and improve processes in an industrial environment”, and project no. 310105, “Norwegian Centre for Cyber Security in Critical Sectors (NORCICS)”.


What is an Empirical Study?

An empirical article reports the findings of a study conducted by the authors, using data gathered from an experiment or observation. An empirical study is verifiable and "based on facts, systematic observation, or experiment, rather than theory or general philosophical principle" ( APA Databases Methodology Field Values ). In other words, it tells the story of the research that was conducted, in detail. The study may use quantitative research methods, which produce numerical data and seek a causal relationship between two or more variables. Conversely, it may use qualitative research methods, which collect non-numerical data to analyze concepts, opinions, or experiences.

Key parts of an empirical article:

  • Abstract - Provides a brief overview of the research.
  • Introduction - Reviews previous research on the topic and states the hypothesis.
  • Methods - Describes how the research was conducted, including the design of the study, the participants, and any measurements taken during the study.
  • Results - Describes the outcome of the study.
  • Discussion (or conclusion) - Presents the researchers' interpretations of their study and any future implications of their findings.
  • References - A list of works cited in the study.

A literature review is a review of the published resources related to a specific issue, area of research, or theory. It provides a summary, description, and critical evaluation of each resource.

A literature review:

  • Synthesizes and places into context the research and scholarly literature relevant to the topic.
  • Maps the different approaches to a given question and reveals patterns.
  • Forms the foundation for subsequent research.
  • Justifies the significance of the new investigation.
  • Contains the most pertinent studies and points to important past and current research and practices.

A Lit. Review provides background and context; it shows how your research will contribute to the field. 

There are generally several parts to a literature review, including:

  • Introduction
  • Body (the discussion and synthesis of sources)
  • Conclusion
  • Bibliography

A literature review should: 

  • Provide a comprehensive and updated review of the literature
  • Explain why this review has taken place
  • Articulate a position or hypothesis
  • Acknowledge and account for conflicting and corroborating points of view

A lit. review's purpose is to offer an overview of the significant works published on a topic. It can be written as an introduction to a study in order to:

  • Demonstrate how a study fills a gap in research
  • Compare a study with other research that's been done

It could be a separate work (a research article on its own) that:

  • Organizes or describes a topic
  • Describes variables within a particular issue/problem

Some limitations of a literature review include:

  • It's a snapshot in time. Unlike other reviews, this one has a beginning, a middle, and an end. Future developments could make your work less relevant.
  • It may be too focused. Some niche studies may miss the bigger picture.
  • It can be difficult to be comprehensive. There is no way to ensure that all the literature on a topic was considered.
  • It is easy to be biased if you stick to top-tier journals. There may be other places where people are publishing exemplary research. Look to open access publications and conferences for a more inclusive collection, and make sure to include opposing views (not just supporting evidence).

Non-empirical research articles focus more on theories, methods, and their implications for research. Non-empirical research can include comprehensive reviews and articles that focus on methodology. These articles draw on the empirical research literature as well, but do not need to be data-driven.

  • Write a Literature Review (UCSC)
  • Literature Review (Purdue)
  • Overview: Lit Reviews (UNC)
  • Review of Literature (UW-Madison)
  • Last Updated: Apr 9, 2024 10:58 AM
  • URL: https://libguides.macalester.edu/psyc
