
Lessons in learning.

Sean Finamore ’22 (left) and Xaviera Zime ’22 study during a lecture in the Science Center.

Photos by Kris Snibbe/Harvard Staff Photographer

Peter Reuell

Harvard Staff Writer

Study shows students in ‘active learning’ classrooms learn more than they think

For decades, there has been evidence that classroom techniques designed to get students to participate in the learning process produce better educational outcomes at virtually all levels.

And a new Harvard study suggests it may be important to let students know it.

The study, published Sept. 4 in the Proceedings of the National Academy of Sciences, shows that, though students felt as if they learned more through traditional lectures, they actually learned more when taking part in classrooms that employed so-called active-learning strategies.

Lead author Louis Deslauriers, the director of science teaching and learning and senior physics preceptor, knew that students would learn more from active learning. He published a key study in Science in 2011 that showed just that. But many students and faculty remained hesitant to switch to it.

“Often, students seemed genuinely to prefer smooth-as-silk traditional lectures,” Deslauriers said. “We wanted to take them at their word. Perhaps they actually felt like they learned more from lectures than they did from active learning.”

In addition to Deslauriers, the study is authored by director of sciences education and physics lecturer Logan McCarty, senior preceptor in applied physics Kelly Miller, preceptor in physics Greg Kestin, and Kristina Callaghan, now a physics lecturer at the University of California, Merced.

The question of whether students’ perceptions of their learning match how well they’re actually learning is particularly important, Deslauriers said, because while students eventually see the value of active learning, initially it can feel frustrating.

“Deep learning is hard work. The effort involved in active learning can be misinterpreted as a sign of poor learning,” he said. “On the other hand, a superstar lecturer can explain things in such a way as to make students feel like they are learning more than they actually are.”

To understand that dichotomy, Deslauriers and his co-authors designed an experiment that would expose students in an introductory physics class to both traditional lectures and active learning.

For the first 11 weeks of the 15-week class, students were taught using standard methods by an experienced instructor. In the 12th week, half the class was randomly assigned to a classroom that used active learning, while the other half attended highly polished lectures. In a subsequent class, the two groups were reversed. Notably, both groups used identical class content and only active engagement with the material was toggled on and off.

Following each class, students were surveyed on how much they agreed or disagreed with statements such as “I feel like I learned a lot from this lecture” and “I wish all my physics courses were taught this way.” Students were also tested on how much they learned in the class with 12 multiple-choice questions.

When the results were tallied, the authors found that students felt as if they learned more from the lectures, but in fact scored higher on tests following the active learning sessions. “Actual learning and feeling of learning were strongly anticorrelated,” Deslauriers said, “as shown through the robust statistical analysis by co-author Kelly Miller, who is an expert in educational statistics and active learning.”
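To make the reported contrast concrete, here is a minimal sketch, with entirely hypothetical numbers, of how one could tabulate the two measures the authors collected, a “feeling of learning” survey average and a 12-item test score per student, and check their correlation across conditions. It assumes only the published direction of the effect (lectures rated higher, active learning scored higher); it is not the study’s dataset or its actual statistical analysis.

```python
# Hypothetical illustration of the study's headline contrast: students rate
# their "feeling of learning" (FOL) on a Likert scale and sit a 12-question
# test of learning (TOL). All numbers below are invented for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 70  # hypothetical students per condition

# Assumed pattern (matching the paper's direction of effect, not its data):
# lectures feel better but teach less; active learning feels worse but teaches more.
fol_lecture = rng.normal(4.0, 0.5, n)   # mean Likert agreement, 1-5 scale
fol_active  = rng.normal(3.4, 0.5, n)
tol_lecture = rng.normal(6.5, 1.5, n)   # score out of 12 multiple-choice items
tol_active  = rng.normal(7.8, 1.5, n)

# Pooling both conditions, feeling of learning runs opposite to actual learning.
fol = np.concatenate([fol_lecture, fol_active])
tol = np.concatenate([tol_lecture, tol_active])
r, p = stats.pearsonr(fol, tol)
print(f"FOL vs TOL: r = {r:.2f} (p = {p:.2g})")  # negative r in this toy setup

print(f"mean FOL  lecture {fol_lecture.mean():.2f} | active {fol_active.mean():.2f}")
print(f"mean TOL  lecture {tol_lecture.mean():.2f} | active {tol_active.mean():.2f}")
```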

Those results, the study authors are quick to point out, shouldn’t be interpreted as suggesting students dislike active learning. In fact, many studies have shown students quickly warm to the idea, once they begin to see the results. “In all the courses at Harvard that we’ve transformed to active learning,” Deslauriers said, “the overall course evaluations went up.”


Co-author Kestin, who in addition to being a physicist is a video producer with PBS’ NOVA, said, “It can be tempting to engage the class simply by folding lectures into a compelling ‘story,’ especially when that’s what students seem to like. I show my students the data from this study on the first day of class to help them appreciate the importance of their own involvement in active learning.”

McCarty, who oversees curricular efforts across the sciences, hopes this study will encourage more of his colleagues to embrace active learning.

“We want to make sure that other instructors are thinking hard about the way they’re teaching,” he said. “In our classes, we start each topic by asking students to gather in small groups to solve some problems. While they work, we walk around the room to observe them and answer questions. Then we come together and give a short lecture targeted specifically at the misconceptions and struggles we saw during the problem-solving activity. So far we’ve transformed over a dozen classes to use this kind of active-learning approach. It’s extremely efficient — we can cover just as much material as we would using lectures.”

A pioneer in work on active learning, Balkanski Professor of Physics and Applied Physics Eric Mazur hailed the study as debunking long-held beliefs about how students learn.

“This work unambiguously debunks the illusion of learning from lectures,” he said. “It also explains why instructors and students cling to the belief that listening to lectures constitutes learning. I recommend every lecturer reads this article.”

Dean of Science Christopher Stubbs, Samuel C. Moncher Professor of Physics and of Astronomy, was an early convert. “When I first switched to teaching using active learning, some students resisted that change. This research confirms that faculty should persist and encourage active learning. Active engagement in every classroom, led by our incredible science faculty, should be the hallmark of residential undergraduate education at Harvard.”

Ultimately, Deslauriers said, the study shows that it’s important to ensure that neither instructors nor students are fooled into thinking that lectures are the best learning option. “Students might give fabulous evaluations to an amazing lecturer based on this feeling of learning, even though their actual learning isn’t optimal,” he said. “This could help to explain why study after study shows that student evaluations seem to be completely uncorrelated with actual learning.”

This research was supported with funding from the Harvard FAS Division of Science.



Enhancing research and scholarly experiences based on students’ awareness and perception of the research-teaching nexus: A student-centred approach

Katherine Howell

School of Medicine, University College Dublin, Belfield, Dublin, Ireland

Associated Data

All relevant data are within the paper.

Background

Research is a core competency of a modern-day doctor, and evidence-based practice underpins a career in medicine. Early exposure encourages graduates to embed research in their medical careers and improves graduate attributes and the student experience. However, there is wide variability in the research and scholarly experiences offered in medical schools, many developed with a significant degree of pragmatism based on resources and financial and time constraints. We examined undergraduate medical students’ awareness and experience of research throughout their degree to provide recommendations for the implementation and improvement of research and scholarly experiences.

Methods

Focus groups were conducted with medical students at all five stages of the medical degree programme. Data was coded to facilitate qualitative analysis for identification of important themes from each stage.

Results

Students reported positive impacts of research on their undergraduate experience, future career and society in general. Two important themes emerged from the data: the opportunity for research and the timing of research experiences. Early-stage students were concerned by their lack of experience and opportunity, whereas later-stage students identified the importance of research to employability, personal development and good medical practice, but ironically suggested it should be integrated in the early stages of the course due to limitations of time.

Conclusions

Students provided feedback for improving research and scholarly experiences, ideally involving early exposure, a clear programme overview, with equality of access and a longitudinal approach. An emerging framework is proposed summarising the important issues identified by students and the positive impacts research experiences provide for them. These recommendations can be applied to both existing and new research programmes to provide a student-centred approach designed to augment the students’ critical analysis, inspire life-long learning, enhance the student experience and inevitably train better physicians.

Introduction

The question of how a research-intensive university can integrate and embed research into the curriculum to enhance student learning and improve graduate attributes is a topic of immense importance. The Boyer Commission Report—Reinventing Undergraduate Education: A Blueprint for America’s Research Universities (1998) stimulated debate about the nature of an undergraduate student’s experience at a research university. The value of research in education has been further emphasised in recent Irish reports such as the Hunt report in 2011 (National Strategy for Higher Education to 2030—Report of the Strategy Group). This report highlighted the intimate relationship between research and teaching, and strongly encouraged the integration of research-led teaching in Irish universities at both undergraduate and postgraduate levels.

This research-teaching nexus is particularly relevant in professions such as medicine, where evidence-based practice is essential for enhancing the quality of patient care [1–3]; however, a diminishing cohort of clinical scientists interested in pursuing a career in academic medicine has been observed [4, 5]. The clinical scientist is widely viewed as playing a critical role in medical research [6]. Consequently, this disquieting situation has prompted the implementation of a number of initiatives, including the development of a specific Academic Track scheme for medical internships in Ireland, which began in 2017 (Health Services Executive National Doctors Training and Planning Unit). This programme enables medical interns to undertake a fully supported research project with protected time in the areas of medical education, clinical research or healthcare leadership and management, to encourage an increase in clinical scientist numbers.

Although the new academic intern programme has not been fully evaluated, a review of late-stage undergraduate medical students in another Irish university revealed significant concerns that a lack of prior undergraduate research may hinder their ability to be competitive in this programme [7]; over half of students did not think their application would be successful. Early research opportunities during undergraduate medical training strongly encourage doctors to pursue a career embedded in research [8, 9]. Furthermore, exposure of undergraduate students to research opportunities has been suggested to enhance effective student engagement [10] and encourage deeper learning [11]. Immersing students in a research-intensive setting improves disciplinary learning and inculcates both discipline-specific and more generic research skills in graduates. These skills are key to enhancing employability and to the ability to adapt to complexity and rapid change in modern knowledge-based economies.

Research is currently not compulsory for medical licensure, although universities should encourage students to engage in scholarship throughout their degree programme. Consequently, most medical schools are choosing to implement a range of research and scholarship components in their curricula [12, 13]. Although some elements of these scholarship or research programmes are consistent across medical schools, the specific format, content and delivery appear unique to each institution, with limited cohesion at a national or global level. Components may be compulsory or optional, delivered in self-contained units of varying length, at different stages of the degree or, in some cases, longitudinally throughout the curriculum [12, 13]. The establishment of such programmes within medical schools is likely based on a range of pragmatic considerations such as resources, availability of mentors and time constraints, rather than a thorough evaluation or understanding of whether they genuinely meet the needs of students, whether they have a tangible impact on career pathways, or whether they ultimately enhance patient care [12–14]. Proposals for implementing and developing longitudinal scholarly experience projects have concentrated on logistical difficulties and practical considerations rather than the needs of the students [14], and most programmes have not been appropriately evaluated to assess their impact [15].

Despite the consensus on the value of embedding research and scholarship in education, there is limited information from specific evaluation of research-teaching linkages in the medical curriculum. Given that the delivery of these scholarly experiences varies enormously between institutions, there is little direct evidence evaluating the impact of implementing such diverse approaches [15, 16]. A thorough understanding of the needs of the students is therefore an important consideration when planning to implement successful programmes with tangible long-term benefits.

In this study, the student perspective is evaluated in University College Dublin (UCD), a large research-intensive university, which has defined a commitment to student-focused, ‘research-led’ education in a community based on strong research-intensive disciplines. UCD Medical School provides a 6-year undergraduate medicine programme with an intake of approximately 240 students per year, including up to 70 affiliated with Penang Medical College (PMC). The programme also includes other international students (E.U. and non-E.U.), who complete the full 6-year programme in UCD and may remain in Ireland for subsequent employment and training. The undergraduate 6-year course includes five stages; Stage 5 incorporates the final two years of clinical training in the UCD network of teaching hospitals. UCD Medical School also offers a 4-year Graduate Entry Medicine (GEM) course with over 120 students in each of its 4 stages, bringing the total number of full-time medical students to over 2,000.

There is no compulsory substantial research project embedded in the undergraduate medical programme; however, medical students can take an optional 8-week research elective module in the summer trimester at any stage, known as the Summer Student Research Awards (SSRA). These research experiences can be taken as a module for 5 credits, or simply for audit, meaning the students complete the module in addition to their normal credits. Elective modules are available to students in Stages 1–4 of the undergraduate degree programme, and approximately one third of undergraduate students complete this module at some stage in their undergraduate degree. A wide variety of projects are offered, including laboratory-based and hospital-based research projects, community-based projects with patient groups or charities, biomedical engineering or veterinary projects, and clinical audits or observerships. A selection of the projects are carried out abroad in other institutions, and these are often competitively attained through rigorous selection processes. The programme offers a significant degree of flexibility for students who choose to participate, and its fundamental aspects will be similar to programmes offered within other medical schools.

In order to ensure such experiences are effective for students, it is important to understand the medical students’ perspective on the research-teaching nexus. The development of students’ awareness and perception of research throughout the medical degree is also unknown. Identifying opportunities and barriers, and defining examples of best practice, will allow us to tailor our approach to maximise the benefits for medical education.

The aim of this study was therefore to evaluate these important issues in a cohort of undergraduate medical students in UCD, to provide insight and considerations for the development of integrated research and scholarship programmes in medical schools at national and international levels.

Study design

All students registered on the undergraduate medicine programme from Stage 1 to Stage 5 were eligible to participate in the study. The UCD Human Research Ethics Committee granted ethical approval for the study, and permission to access students was confirmed by the Head of School, Dean of Medicine (Ref# LS-17-106-Howell).

An email was sent to all undergraduate medical students explaining the aims of the project and informing them that focus groups would be carried out for each stage during the semester. For the early pre-clinical stages, a brief overview of the aims of the project was given to the class at the start of a lecture, and students volunteered to attend the focus group immediately after the lecture, with refreshments provided. In the later clinical stages of the degree, where students are based in the teaching hospitals, students were emailed by specialty coordinators and requested to voluntarily attend a focus group. These focus groups were subsequently carried out in the teaching hospitals. Five focus groups, one per stage, were facilitated by an independent research assistant and limited to 10 students per focus group (n = 7 to 10 per group).

Methodological rationale and study procedure

Focus groups are a methodological approach utilising group discussion to gather data from a number of people simultaneously. Although not without limitations, they are a particularly useful tool for collecting data from a representative selection of a population to identify group attitudes and experiences [17, 18]. A central characteristic of focus groups is that rather than inviting individual responses to each question, they capitalise on the interaction and communication between participants to facilitate an understanding not just of the opinions of participants, but also of how those opinions were formed. Focus groups thus encourage participation and interaction, and consequently provide rich content otherwise difficult to obtain using alternative methods [18]. In this study, the undergraduate medical cohort was considered to be a relatively homogenous population, despite potential differences between the perceived awareness and experience of early-stage and late-stage students. Participants were not pre-defined to specifically represent, for example, those who had an interest in embedding research in their future career, those who had completed research projects, or students who had clinical experience and might have a different view of the relevance of research to their clinical career. Rather, the random nature of participant recruitment should give a more varied set of responses, pertinent to the undergraduate medical student cohort in general, and thus provide a basis for enhancing research and scholarly experiences for all students, not just those with an interest in research.

At the beginning of the focus group, students were given the focus group schedule, project information leaflet and consent form. The research assistant recorded the focus groups on two separate devices, and each focus group lasted approximately 50–60 minutes. The same questions were posed to each of the five focus groups, to ensure comparisons could be made between students’ perceptions and opinions at different stages of their medical degree. The content evolved organically through interactive discussion, meaning that not all students contributed to all questions. Rather, if the group considered their opinions had already been discussed, the research assistant moved to the next question. This approach allows the identification of emerging themes relevant to all medical students, but moreover facilitates the identification of whether these themes are more or less applicable relative to stage, gender or nationality. The focus group schedule used in this project was adapted from one used previously in a large-scale UCD fellowship project evaluating research-teaching linkages across other degree programmes [11].

Research questions

The study attempted to address the following broad research questions:

  • What do medical students understand about doing research in Medicine?
  • Are undergraduate medical students aware of research in the university and how has this awareness developed?
  • What research experiences do medical students have and what worked well?
  • How have research experiences, if any, impacted their learning?
  • Do they perceive research to be important in undergraduate medicine, are there sufficient opportunities and how can we improve this?

The full focus group schedule is included.

Data analysis

Audio files from each of the five focus groups were transcribed, and the text was imported into NVivo software for qualitative analysis (QSR International). In total, approximately 5 hours of discussion was transcribed and evaluated. NVivo facilitates the organisation of qualitative data in an advanced format that permits cross-referencing, queries and visualisation of data to identify patterns and themes. Students remained anonymous throughout the focus groups; however, they identified themselves by number prior to each dialogue.

Thematic analysis is a method designed to identify and analyse patterns or themes which emerge from qualitative data [19], using the principles defined by Morse (2015) [20]. Each focus group’s transcribed file was coded for thematic analysis by both the author and the research assistant independently. Each of the five focus groups was analysed within NVivo as a separate file, allowing identification of comments relative to stage. Each broad question formed a ‘parent node’, and the answers were coded within specific ‘child nodes’ according to similar recurring themes. For example, identification of how students were aware of research carried out in UCD (parent node) revealed broad themes such as the ‘built environment’, ‘information from lecturers’, or ‘school emails’, with each of these categories forming a separate child node within the parent node.

Following analysis, each node included a list of linked comments, recognisable by stage. Student answers could be categorised in more than one node depending on the content of the comments. Following the initial analysis, the data was re-evaluated to combine or condense similar nodes or re-categorise if appropriate. Following the second analysis, each node was reviewed to ensure consistency of responses. Recurring themes evident throughout the focus group also emerged during the initial coding process. These nodes were defined and amalgamated during the second analysis phase. The analysis was integrated by incorporating illustrative examples of extracts from the data with the analytical narrative of the coded responses.

Data was thus examined for recurring themes within broad questions, and qualitative data was expressed as the number of responses or, where appropriate, as a percentage of total answers in each parent node. NVivo facilitates analysis of responses across stages, so that any changes in students’ awareness or perception as they progressed through their degree could be identified. Differences between stages were analysed by performing matrix coding, using the nodes as the matrix item and stage as the attribute. Following analysis of the focus groups, the dimensions of research-teaching linkages perceived by students to be important were identified.
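The study performed this step in NVivo; purely as an illustration of the shape of output matrix coding produces, the sketch below cross-tabulates a handful of invented coded comments by node and stage. The node labels follow the paper, but every row is hypothetical and the code does not use NVivo itself.

```python
# Illustrative sketch of matrix coding: cross-tabulating thematic nodes
# against student stage. NVivo was used in the study; this only mimics the
# output shape with pandas, using invented coded comments.
import pandas as pd

# One row per coded comment: its child node and the speaker's stage.
coded = pd.DataFrame(
    {
        "node": [
            "information from lecturers", "information from lecturers",
            "school emails", "school emails", "built environment",
            "intercalated MSc option",
        ],
        "stage": [1, 3, 3, 3, 1, 5],
    }
)

# Matrix coding: nodes as the matrix item, stage as the attribute.
matrix = pd.crosstab(coded["node"], coded["stage"])
print(matrix)

# Responses per node as a percentage of total answers in the parent node,
# mirroring how the Results section reports counts such as "30 of 69 responses".
print((coded["node"].value_counts(normalize=True) * 100).round(1))
```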

Demographic characteristics

Students participating in the study were all recruited voluntarily and randomly across each of the five stages of undergraduate medicine. Each focus group had 7–10 participants and included 21 men (50%) and 21 women (50%) (Table 1). Penang Medical College (PMC) students, who are awarded a UCD degree but undertake a 5-year degree with their final 2.5 years of clinical training in Malaysia, accounted for four of the nine Stage 2 students but were not represented in the other stages. Students taking part in the focus groups were further categorised based on their nationality. Approximately two thirds (69%) of participants were Irish, 2 students (5%) were from the E.U., namely France, and the remaining students represented 7 other countries: Malaysia, Canada, USA, Singapore, Nigeria, Botswana and Australia. This reflects the multicultural nature of the course, the university and Ireland in general.

Table 1. Demographics of the focus group participants. Students identified as male or female, and as Irish, E.U. (in both cases French) or non-E.U. (Malaysia, Canada, USA, Singapore, Nigeria, Botswana and Australia). Focus group size ranged from 7 to 10 per group, and 42 students were included in total.

Medical students’ awareness of research

Students were first asked whether they were aware that research was carried out in UCD and how that awareness developed. All participants indicated they were aware that research was carried out. There were 69 instances in total where students described how their awareness of research originated, with some students providing more than one example. This awareness stemmed predominantly (30 of the 69 responses) from information imparted by educators associated with the course. Lecturers and, to a lesser extent, demonstrators (often PhD students involved in the delivery of practical classes) were mentioned by students across all stages, whereas later-stage students, immersed in a clinical setting, were more likely to discuss the influence that clinical tutors had on their awareness of research. Although a lecturer may not have provided sufficient information regarding the precise nature of the research carried out, such mentions made students aware that research was ongoing in the university.

Students’ awareness of research also arose from information sent to them by the school, particularly regarding the SSRA programme (11 of the 69 responses); this peaked among Stage 3 students, which corresponds to the stage at which undergraduates most commonly undertake an SSRA project. Stage 4 and 5 students also discussed an intercalated MSc programme option (6 responses) and the final-year medical elective (6 responses), which can potentially be a research project, although this was not widely known. The built environment surrounding the students also heightened their awareness of research activity (6 responses); specifically, students discussed a biomedical research centre adjacent to the Medical School building, but felt somewhat detached from activities within. Other minor influences included information from peers, university reputation and social media.

Medical students’ understanding of research in medicine

Students were then asked about their understanding of what it means to do research in medicine. There were 30 responses in total. Students from all stages referred to improving our understanding of medicine (one third of all responses) and working in a laboratory as examples of what medical research means to them. As the students progressed through the course, their ability to articulate a deeper understanding of what it means to do research in medicine became apparent. Early-stage students refer to medical research as something that increases our understanding of the human body, finding new cures and advancing therapeutics. However, once students have been exposed to a clinical setting, from late Stage 3 onwards, their concept of research in medicine expands to recognise the importance of evidence-based practice, and an understanding of the valuable contribution that clinicians make to medical research and society.

Stage 1, Female

“Erm, I guess it’s about contributing to the field, erm trying to advance it. You know, clinical trials, looking for new drugs to cure diseases that aren’t curable. Trying to progress the drugs and treatments that are out there.”

Stage 5, Male

“I guess my understanding of research in medicine has come on a lot in the last year since we have lectures in hospital with the consultants we see out on the wards, but who also talk about their research interests. I think that reinforced the idea that medicine is evidence-based and research has to play a key part in it. I feel like when we were book-learning in college and stuff, it didn’t seem… it wasn’t as tangible the link between medicine and research, whereas when you are in hospital you can see that much more clearly, especially when the people you are learning from are talking about it. And erm… I guess the clinicians are best placed to see where improvements could be made. I feel like more so in the last year than in my pre-clinical years I have gained an understanding of the importance of research.”

Medical students’ exposure to research in their medical degree

Students were next asked to describe instances where they had learned about research, been taught about research, or had any research experiences. Responses were coded as ‘learning about others’ research’ (research-led), ‘learning about research’ (research-tutored), ‘learning by doing research’ (research-based) and ‘learning to do research’ (research-orientated), based on the framework of Healey [11].

Broadly, the undergraduate medical students’ perception of their experience of research was fairly limited. Early Stage 1 and 2 students in particular articulated that they had little or no research experience. Despite their awareness of research predominantly emanating from staff discussing their research, students rarely described ‘learning about others’ research’ as a research experience. Where students described their research experiences, it was associated with research they had carried out themselves, i.e. ‘learning by doing research’.

Interviewer:

“Can you identify any instances where you have learned about research, been taught about research, or had any research experiences during your studies?”

Stage 2, Female

“I wouldn’t personally count lectures as like significant contact, so I would answer no to this question.”

Stage 1, Female

“I wouldn’t say we had research experiences. Erm, I don’t know what is available in the school of medicine as it is very far removed from us. Kinda what participant 5 was saying there, I am not sure we can have research at the moment, we are not really sure what is involved. We are not sure what we could add, who is involved in the research. As in, is it to lead research, what knowledge do you need to have? If it is research assistants, what do they have to do? We don’t know how able we have to be to actually get involved in SSRA or anything, ‘cos we don’t know what that would mean.”

Stage 4, Male

“In terms of learning about research or being taught about research, we have a lot of lectures across multiple modules across multiple years on research methods and statistics and epidemiology as well. They are not particularly practical, but they give you a good sort of basis in that you emerge with an awareness of what research is, what kind of research exists, but it always seems a little bit more theoretical than any sort of practical day-to-day how to go about it. And one thing about these modules is they never include any sort of opportunities; it’s almost like you are studying about research but they don’t seem to presume that you are ever going to be doing research, rather that you have an awareness of it, so when you are reading a paper you can understand the terminology.”

From Stage 3 onwards, the proportion of students who discussed their personal experiences of doing research, particularly the SSRA, increased. In some cases, late-stage students had undertaken more than one SSRA project or had independently acquired research experiences outside of the university. Approximately one third of the students in the focus group had experience of research through doing an SSRA project. This correlates closely with the number of undergraduate students completing an SSRA project in the medical school.

The impact of research experiences on students’ learning

The impact of research experiences on students’ learning could be categorised broadly as negative or no impact, potential/perceived impact, or positive impact (Fig 1). Approximately one third of responses (18 of 53 comments) stated that research had no impact on their learning, mostly because they had no research experience or occasionally because they did not perceive a relationship between research and learning outcomes or educational experience. In some cases, students did not see a benefit to doing research.

Fig 1 (pone.0257799.g001.jpg).

Across all 5 stages, there were 53 references or responses to the impact of research. Over half (53% of 53 responses) suggested that research had a positive impact. These positive impacts are represented in the second smaller pie chart. Approximately one third (34% of 53 responses) indicated no impact of research, predominantly because of lack of research experiences or occasionally because students did not see a role for research in their career.

Stage 3, Male

“Erm, I think, for the vast majority of people who I would be talking to in my year would have a very practical approach to the medical degree. Erm, that I think the majority of people will be expected to work in medical practice and not in research. The people I hang out with generally would be focusing towards that and maybe research would be easier to avoid, if that is not going to be part of your career.”

Some students without research experience appreciated the potential that research could have for enhancing their learning experience. Over half of responses (35 of 53 statements) described a positive effect of research on learning. These benefits included making subjects more relevant and enhancing understanding of, or interest in, a topic.

Stage 1, Female

“Even like the tiny bits, you know, some lecturers would mention, especially in the biology ones, that they are doing some research. It just makes it more relevant, even regardless of what we have to do in the future it makes it easier to connect what’s going on. Just so, you know, if you are just given the material and it might be, I don’t know, some material and you are told to go learn it, you don’t really know why you are doing it. Whereas when they talk about the research you understand why you are being taught it.”

Stage 5, Male

“I don’t know if it’s impacted learning but more impacted your interests. So say like if you did a research project in a certain area, like, depending on whether you like the project or not, you may have an increased interest in that area. So it might propel you to study that topic a bit more or look into it in a bit more detail. But I don’t think it impacts your learning overall.”

Medical students were aware of the potential impact research experiences could have on their career progression, such as enhancement of their curriculum vitae or fulfilment of an expectation. The impact on career progression was almost exclusively reported by Stage 5 students.

Stage 5, Female

“That being said, I think I got a better appreciation for the fact that people within medicine are very well respected if they are researchers, in a lot of ways. So like, they might be clinicians by day but then, you know, researchers by night, but they’ll have publications and the more publications the more prestigious or like there is kind of, there is a respect for researchers in medicine and I think I noticed that a lot more when I was involved in the SSRA.”

Emerging themes: Opportunity and timing of research

Two specific thematic areas emerged following coding of the focus group transcripts: ‘opportunity for research’ and ‘timing of research’. Students reported a lack of opportunity to undertake research, particularly in the early Stages 1 and 2. More importantly, students described how a lack of research experience hindered their opportunity to undertake research projects (Fig 2). This was a recurring theme throughout all stages of the programme.

Fig 2 (pone.0257799.g002.jpg).

In total, 56 responses or references to research opportunity were discussed during the focus groups. A lack of experience being a barrier to research was discussed by students in all 5 stages. Early-stage students described a lack of opportunity for research, whereas later-stage students were more aware of a variety of research opportunities but considered that there was inequality of opportunity to undertake research.

“I would add that, if applying to the SSRA, because a lot of them are so specialised, you do need to have very specific skills if you want to do the research properly, so I definitely feel that is a barrier because I don’t have my research skills at this point and I feel there are very little opportunities to gain them”

Late-stage students reported that research experiences were available; however, they felt that there was an inequality of access to research opportunities, particularly if students were not available in the summer to complete an SSRA (Fig 2).

Stage 4, Female

“[…] the SSRA projects, it’s a great initiative and it has tons of projects for people to do but it’s, I think, the engagement is probably low. The only way I wanted to do research last summer is if I got paid for it and I ended up getting some money and so I was just very lucky that everything fell together and while I did have a great experience and I am doing research again this summer I think it was just everything falling into place; the opportunities are sometimes hard to find.”

Despite not being specifically addressed in the focus group schedule, the timing of research experiences was discussed extensively by medical students throughout the focus group sessions. A substantial cohort of Stage 1 students suggested that research opportunities should be available early in the course (Fig 3); however, this was tempered by a consideration that lack of experience hinders their opportunity to be competitive for the research projects available (Fig 3), and consequently research opportunities were more likely to occur later in their course.

Fig 3 (pone.0257799.g003.jpg).

Each of the five stages of undergraduate medicine is represented individually, to facilitate an understanding of when research experiences are considered most appropriate. Students largely believed that research opportunities should be available early in the curriculum, although some later-stage students perceived that their enhanced understanding of curricular content would mean later research experiences would be more relevant.

“At this stage I don’t want a research role as that would be a lot of responsibility but any sort of lab work would be helpful in the future because really I have no experience and I am sure everyone would agree that we have no experience in lab work and I am sure that would help us in further years in applying and getting these opportunities. And also I had work experience in a clinic where they were doing clinical trials and were doing research and I can definitely see how that would transfer into our professional careers in the future. So, it is important to start as early as possible”

The remainder of the early-stage medical students had limited interest in early research opportunities; however, the majority were acutely aware that research would be necessary later in the course.

Stage 1, Male

“I think there are very little incentives to get involved in the early stages. So, pre-med (Stage 1) students wouldn’t be particularly interested in getting involved in various different types of research but I think as the years go on, it is not just an expectation, it is a necessity for us to get involved in terms of where we want to go after we graduate.”

A substantial cohort of late-stage students also suggested that early opportunities for research projects would be beneficial, particularly from a time-availability perspective. Elective modules are available in the earlier years, giving students the opportunity to incorporate research into their curriculum. Late-stage students also acknowledged that their advanced clinical knowledge made later-stage research experiences more relevant.

Stage 4, Female

“I think as well you need to look at the curriculum in medicine. I mean at the end of pre-med, and with no discredit to the course, you haven’t actually learned a lot about medicine and in first med you are just getting to grips with the topics and then you do pharmacology and you are getting a broader understanding of medicine so then maybe that enables you or you feel more equipped to carry out a project but then you are like… oh actually I have learned about this and you can relate to it better because often times as I have said before the topics for SSRA or other research projects were quite complex, and maybe you are like… oh I understood that word but I don’t necessarily know what that means but later you are like oh I remember that from that lecture or we learned about that here”

Recommendations for improvements

There was an overwhelming assertion that it is valuable to include experiences of research and/or learning about research skills in the undergraduate medical programme; however, students asserted that there were insufficient research opportunities currently available. The students were subsequently asked for recommendations to rectify this situation (Table 2).

Table 2. The number of responses for each recommendation for improving research-teaching linkages in the undergraduate medical curriculum, across all five stages. The final column shows the percentage of responses for each recommendation.

However, almost half of the 110 responses to this question in the focus groups recommended that research be embedded within the curriculum, either as a core component or as an elective module, and particularly around Stage 2.

“I think it would be better if it was included and didn’t involve giving up 2 months of your summer, ‘cos there are loads of people who feel they have to earn money or want to travel and I think if we were able to do even 3 weeks then we have time to do other stuff as well instead of taking up the whole summer and being in UCD for another 2 months.”

Stage 1, Female

“[…] we kinda have just done science this year and so we wouldn’t be able to contribute to research. So then we have a disinterest, but maybe if the opportunity was presented to us to even observe research being done. Just because it will benefit us in the future, then if we can have that exposure we might realise how interested we are in the research. You know, it could follow from there, like even if we had a research elective or module where you go and watch others do research and if it was built in.”

Stage 3, Male

“[…] I see how important research is but I feel like, for most hospital jobs, you need to have done research at some stage. It would be great to have an introduction to it in college. If we were going to do research at some stage it would be good to get some introduction to research.”

Stage 5, Female

“In my mind it’s obviously a question of how much UCD is prioritising research for medical graduates to take part in. Because obviously it is very important to be involved in research for evidence-based medicine but our only exposure to it is really through anecdotal stuff in lectures and through the SSRA and that’s like another elective five-credit module. Whereas if there was, say, a five- or ten-credit module that was mandatory, that focused on research, then we might have more of an incentive to try and get ourselves involved in research and then it would also be a UCD statement saying that we think that research is very, very important and so important that it’s worth mandatory credits.”

Approximately 10% of the 110 responses requested an improvement to the SSRA programme, namely a more structured approach, more variety of projects and more information. Students also described how early exposure to researchers, peers and clinical role models was inspirational. This was linked to a request for improved research information, more research opportunity in general, and specifically more information about the importance of research to a career in medicine.

“…we are seeing in the journal clubs here and the grand rounds these people that we could be in their position and they think research is really important, so if we had role models – I don’t know if you know Prof H? She gave the keynote address at the student medical summit last year, just talking about how to integrate research into a clinical career. I think everyone came out of that thinking like, oh wow yeah that’s really cool and these are the steps she took and that’s something I could definitely do if I had to go down one route or another. It’s something to do with having role models.”

Focus group schedule.

  • Tell me a little about that.
  • How did that awareness develop?
  • How did that understanding develop?
  • Can you talk about what you know of their research?
  • Can you explain how that knowledge developed?
  • Can you outline any specific examples?
  • What worked well and what did not work so well? Why was that?
  • Would you consider that you had research experiences other than the SSRA, and if so, how well did they work?
  • Did that change over the course of your degree? (Stage 2 onwards)
  • When did that change /those changes happen?
  • Why did that change /those changes happen?
  • In what way?
  • Do you think that your programme has provided adequate experience of, and training in, research skills? Explain.
  • We have come to the end now of the focus group. Before we finish up, is there anything that you would like to add?

Discussion

The intimate relationship between research and teaching is now considered core to the effective functioning of research-intensive universities. This is particularly important in disciplines reliant on evidence-based practice such as medicine, which benefits greatly from the valuable insight provided by clinical scientists and their unique perspective gained from interactions with patients. The nature of the research-teaching nexus is constantly adapting to the ever-changing landscape of the educator-student dynamic [21]. The perceptions and experiences of academics regarding research-teaching linkages are well documented [22, 23]; however, these vary with disciplinary and institutional context.

A clear inconsistency in the research opportunities offered during the medical degree persists at a global level. The development of these programmes is likely driven by an element of pragmatism, coupled with a consideration of the educational ethos of the institution. These fundamental but potentially important differences, such as the duration of research experiences, extent of integration, availability, content and variety of projects, assessment, governance and the stage at which they are available, generate significant variance between programmes and consequently in student experience. The ability to tailor research and teaching to maximise the benefit to students and enhance graduate attributes and outcomes relies on an understanding of the students’ perception.

This study evaluated undergraduate medical students’ awareness of, and exposure to, research in a research-intensive university. It further examined whether research experience impacted student learning and whether current research opportunities were sufficient, identified examples of best practice, and sought recommendations for improvement from the students. The data was analysed across the five stages of undergraduate medicine to evaluate any changes that developed throughout the course.

The demographics of the participants reflected the multicultural diversity of a modern Irish medical school, including the connection with Penang Medical College (PMC) in Malaysia. Not all focus groups were an exact representation of the specific demographics of that stage. For example, Stage 2 participants were all non-E.U. students, including 4 from Malaysia, who were potentially associated with PMC and therefore not represented in Stages 4 and 5 because of their return to clinical training in Penang. Stage 3 participants were all male, clearly not representative of the student cohort in that year. Overall, the 42 participants were reflective of the undergraduate population at the time of the study, and it is likely that a sufficient number of focus groups were performed to capture the important themes [17, 24]. It has been suggested that 3 to 6 focus groups, with a homogenous population and a semi-structured discussion guide such as the focus group schedule used in this study, will likely capture 90% of all themes, including the most important ones [24]. Striking the balance between too few and too many focus groups is always open to discussion, and retrospectively it could be argued that more focus groups in each stage, or grouping pre-clinical and clinical students, might have strengthened the overall quality of the data.

Whilst it is possible that students from each individual medical school may have unique perceptions on individual aspects of the study, dependent on the specific research experiences available to them, the overall themes that emerged from the data are highly likely to be relevant to the majority of medical students. The consistency of education governed by global standards determined by the World Federation for Medical Education (WFME) suggests that students are likely to have shared perceptions and opinions. Hence, the data presented here may be transferable and applicable to a wider international setting.

The first question in the focus group addressed whether students were aware of research and how that awareness had developed. Although all students were aware of research ongoing in the university in general, almost 45% of the responses described how their awareness of research in medicine developed from lecturers, clinical educators and, to a lesser extent, demonstrators, most of whom are active research students.

Research-intensive universities have achieved a dominant position within the third-level education system, and the impact of educating students in such an environment, despite the obvious added cost, is considered valuable to students, researchers and institutions alike. Inspired by the recommendations of the Boyer Commission on Educating Undergraduates in the Research University [25], and with the growing awareness of the benefits of incorporating research experiences into undergraduate curricula, there was an explosion of interest in this area [26]. Although there was an understanding of the link between teaching and research, not all supported the concept that they were mutually interdependent (Future of Higher Education White Paper, UK, 2003), advancing the concept of teaching-only institutions in the UK. However, many case studies have been reported, and reviews have concluded that the benefits are real and substantial [27–32], albeit when care is taken to avoid potential pitfalls [33]. This growing awareness of the positive influence on the student experience and graduate attributes has narrowed the gap between research and teaching in the academic setting, encouraging academics to incorporate their research into their lectures and creating scholarly research experience programmes such as the SSRA programme described here.

Incorporating research-led experiences [11] for students in this study had a positive impact on students’ awareness of the research ongoing in the university; however, some students articulated a disconnect, either because these discussions of research were not assessed or because the research was not relevant to their studies. This is perhaps unsurprising given the suggestion that active involvement in research by students, i.e. research-based experiences, is the most effective form of research exposure in terms of maximising depth of learning [11]. Moreover, despite the good intentions of staff who incorporate their research into their teaching, students did not report these instances of ‘learning about others’ research’ (research-led) as a research experience.

This study also highlighted the impact of the built environment on students’ awareness of research in medicine. The presence of research centres on campus inculcates an awareness from as early as recruitment days in secondary schools, and some students noted the positive influence this had on their choice of university. Surrounding medical students with an environment of research can potentially stimulate research-mindedness; however, most early-stage students in this study were unaware of the research carried out, further precipitating a sense of disconnect.

This disconnect between early-stage students and their comprehension of research was evident in how they verbalised their understanding of what it meant to carry out research in medicine. All students appeared to understand that doing research in medicine furthered our understanding of clinical medicine and potentially contributed to improving society. However, later-stage students had a greater appreciation of the relevance, importance and clinical applicability of research, discussing evidence-based practice and how their understanding of what research means changed after doing research, or as they progressed through the course and experienced how research impacts on clinical practice.

Addressing this disconnect between students and staff, and between research and teaching, at an early stage must be a priority in all research-intensive institutions. A number of models have been proposed to address these issues; however, student engagement must be at the heart of any proposals [11, 34, 35]. This is likely to involve a significant shift in how we structure and deliver the undergraduate curriculum, not just at a modular, programme or institutional level but at national and international levels.

This study also evaluated the impact that research had on students' learning throughout their degree. Unsurprisingly, the later-stage students, who were more likely to have completed a research project, recognised the impact of research on learning. Whilst some students, particularly early-stage students, had no experience of doing research, they could still articulate the potential positive impact that doing research may provide. Approximately two-thirds of responses relating to this question were positive, referring to benefits such as career enhancement and improved knowledge and skills. Of particular significance were the comments that research was simply interesting and made learning more relevant, but did not necessarily impact on learning.

It is not uncommon for students to underestimate the impact that research has on their education [36]; however, it is also likely that the delivery of a coherent, structured research experience, potentially embedded in the curriculum, would permit students to reflect on their experience and evaluate its impact more cohesively. As academics, we frequently witness the transformative effect that completing a significant independent research project has on the confidence and capabilities of students. In the absence of formal reflection, it is probable that students do not appreciate or recognise this flourishing effect on their educational journey.

One of the main themes that emerged from the data was the issue of opportunity. Students across all stages, but particularly Stages 1 and 2, described a lack of opportunities for research despite the availability of a research module. Students have the opportunity to take a research elective module in the summer, the SSRA scheme, which involves an 8-week project supervised by a mentor, culminating in the submission of an abstract to the Irish Journal of Medical Science and an oral presentation of the project in poster form. Each summer over one hundred national and international SSRA projects are completed, of which just over half are undertaken by undergraduate medicine students. Typically, undergraduate medical students choose to do this module at the end of Stage 3, and approximately a third of undergraduate students complete the module during their undergraduate course.

From the focus group analysis it was clear that students generally choose to wait until Stage 3 to complete this research project because they perceive that a lack of experience hinders their competitiveness. Students are permitted to do an SSRA every summer if they choose to, although they can take it for 5 credits on only one occasion. It was reassuring to see a few students describe completing two or more SSRAs in different areas of research, indicating a desire to pursue research within their course.

However, there was criticism of the scheme, particularly from later-stage students, who described an inequality of opportunity for students who are unable to do research in the summer due to inexperience or to financial or personal reasons. Although some of the projects, both national and international, are formally advertised and can be applied for by any student, many projects are sought independently by students actively contacting researchers in other institutions who work in a field of interest to the student, or through personal contacts. This creates a somewhat ad hoc system of projects, which in many ways brings a unique variety to the programme. However, the lack of structure, consistent opportunity and equality is off-putting to some students.

The second theme to emerge from the data was the timing of research opportunities. Although some later-stage students suggested that research was more relevant in later stages due to their superior knowledge, there was a consistent opinion across all stages that early research opportunities would be ideal. The motivations for an early introduction to research were to enhance competitiveness later in the course by overcoming lack of experience, to take advantage of the greater flexibility, time and lower pressure in the early part of the course, or to take the SSRA for credits in the first three stages so that it contributed to the next stage's GPA.

The evidence to support the benefits of incorporating research experiences into a medical curriculum is extensive [37, 38]; however, much of the impetus for stimulating research has focussed predominantly on MD or PhD programmes rather than the undergraduate experience [9, 39]. More recently, the emphasis has shifted somewhat to research experiences for medical students throughout their course, whether embedded within the curriculum or as voluntary electives [8, 9, 37, 39]. A number of large-scale funded programmes, such as the Medical Student Research Fellowship Programme in the U.S. [9] and Medical Student Research projects in Norway [16] and the Netherlands [40], have been introduced to engage students at this crucially influential stage of their training and to introduce a degree of consistency in the student experience.

Research experiences provide a context for students' learning and augment their understanding of the importance of research in their future careers. The data presented here demonstrate how undergraduate medical students' understanding of research evolves with experience, and underline the importance of early research opportunities to maximise the progression of this research journey. However, this journey must not only be structured in nature but also mutually beneficial for both staff and students.

Most literature in this area looks at how research can impact on teaching and student engagement rather than the impact of teaching on research [41]. However, it has been suggested that not only does research have the ability to enhance teaching, but also that teaching has the potential to enrich research [23, 42], creating a dynamic relationship between academics and students. Nurturing this important relationship has the potential to bridge the gap between research and teaching, and between staff and students, particularly by encouraging research-intensive staff to become actively involved in partnerships with students in research. A recent study by Fanghanel et al. (2016) [43] emphasised that the engagement of students is essential for the scholarship of teaching and learning, and recommended that institutions provide sustained undergraduate research opportunities through staff-student partnership in order to develop meaningful student engagement.

Proposal for enhancement—Considerations for optimising the impact of research experiences for medical students

The recommendations of the students and the important dimensions identified were incorporated into an emerging framework (Fig 4), which was used as a basis for suggesting enhancements to research programmes. In this study, students overwhelmingly recommended early research opportunities embedded within the course, ideally in the form of structured research electives delivered longitudinally, with a clear programme overview and delivered at appropriate times during the course. This would give all students potentially equal access to basic research or scholarly experiences, with the opportunity to create a significant portfolio of sequential experiences, each building on previous skills and knowledge. Students suggested that research experiences should be recorded and verified to provide a mechanism to substantiate their suitability for future research opportunities; a passport-style portfolio may be useful here. Furthermore, students require valuable research techniques to enhance their CVs, meaning that, where possible, students should have the opportunity to complete a module on relevant research skills.

Fig 4. The conceptualisation of an emerging framework places the student as the central character, identifies issues important to students (inner circle), and defines their perceived positive impacts in terms of their educational experience and future professional career (outer circle).

Students consistently described how naïve and inexperienced they perceive themselves to be, lacking even a basic understanding of research. Hence, an early module in the fundamentals of research, available to a large cohort of medical students, is likely to be useful in enhancing students' basic knowledge and experience of research. This module could include input from senior clinical scientists, acting as role models to facilitate an early understanding of the benefits of research to the medical student. Fundamental skills such as hypothesis generation, critical analysis of published articles, finding appropriate resources to support a discussion of data, and the ability to ask pertinent questions should be incorporated into early research modules.

Subsequent modules would ideally build on this fundamental research module, potentially incorporating small research projects and exploring more detailed research topics, including for example qualitative and quantitative data analysis. The number of students who perceive medical research to be about 'working in a lab', coupled with the fact that this prospect does not appeal to all students, suggests that increasing the variety of projects offered may be crucial to improving student uptake. Green et al. published a compendium of examples of scholarly concentration programmes, including detailed concentration areas. Whilst biomedical sciences make up the largest proportion of research projects, there are examples of some very creative non-medical projects, such as creating art programmes for patients [12].

The constraints of fulfilling academic requirements from professional bodies may present barriers to large-scale longitudinal research experiences in the absence of significant restructuring of existing timetables. However, a number of medical schools, particularly in the U.S., have successfully incorporated longitudinal research programmes across the duration of the course, culminating in the production of a dissertation. The positive impact of such programmes has been successfully evaluated [8, 12–14].

It is well documented that tangibly carrying out a research project is likely to be the most transformative experience for students [11], suggesting that any implementation of these recommendations should, where possible, include a capstone project. This capstone could be a project of limited duration (6–12 weeks), or something more substantial such as an intercalated master's or PhD, or an MD or clinical internship following graduation. An early opportunity to complete medicine-specific research elective modules is likely to have a significant impact on the undergraduate research journey and potentially encourage an increase in clinical scientist roles.

Limitations of the study and future research

The use of focus groups in healthcare and medical education has increased exponentially over the past few decades, mostly due to their ability to capture not simply what people think but, importantly, why they think that way. However, more stringent guidelines are still required to help define appropriate sampling strategies, the number of focus groups, and the balance of homogeneous versus heterogeneous sampling, with the aim of maximising the methodological approach and ensuring it is fit for purpose. In this study, it could be argued that the opinions and experiences of first-year and final-year students may differ considerably, and that the undergraduate medical student cohort is therefore not completely homogeneous. Moving forward, it may be more appropriate to increase the number of focus groups drawn from early- and late-stage students, in order to analyse differences in opinion between these more homogeneous groups of students and strengthen the quality of the data obtained.

The approach taken in this study was to avoid preconceptions during sampling, and these differences emerged naturally from the data, with early-stage (1–3) and later-stage (4–5) students expressing divergent opinions on some aspects of the discussion. This divergence corresponded to exposure to the clinical environment, where the impact, usefulness and relevance of research could more easily be appreciated. It may also have coincided with the point at which students were more likely to have experience of independent research and scholarly experiences, giving them a more informed opinion of the value of research. However, it was also reassuring to see that although there were differences of opinion and awareness between early- and later-stage students, there was also consistency across all students, particularly in their recommendations for enhancement of scholarly experiences. Furthermore, the experiences of all undergraduate students, regardless of stage, research or clinical experience, were captured.

In summary, these data provide an insight into medical students' perception and awareness of research-teaching linkages, of the opportunity to undertake scholarly activity and research as part of their medical education, and of the impact of both. Research opportunities vary considerably between medical schools; however, the goal of these experiences is to augment students' critical analysis, improve communication skills, inculcate a curiosity that inspires life-long learning, enhance the student experience and, inevitably, train better physicians. Ideally, this will increase the number of clinical scientists, a measure which will undoubtedly have positive impacts on patient outcomes. Whilst pragmatic issues will inevitably dictate elements of scholarly programmes, this framework places the student as the central character, identifies issues important to students, and defines their perceived positive impacts in terms of their educational experience and future professional career.

Acknowledgments

The author wishes to thank Ms. Rachel Niland, research assistant on the project, for her outstanding contribution to the focus groups and analysis.

Funding Statement

KH received award R17781 from the Irish Network of Medical Educators (INMED), now renamed INHED (https://www.inhed.ie/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability

Review article | Open access | Published: 22 January 2020

Mapping research in student engagement and educational technology in higher education: a systematic evidence map

Melissa Bond (ORCID: orcid.org/0000-0002-8267-031X), Katja Buntins, Svenja Bedenlier, Olaf Zawacki-Richter & Michael Kerres

International Journal of Educational Technology in Higher Education, volume 17, Article number: 2 (2020)


Digital technology has become a central aspect of higher education, inherently affecting all aspects of the student experience. It has also been linked to an increase in behavioural, affective and cognitive student engagement, the facilitation of which is a central concern of educators. In order to delineate the complex nexus of technology and student engagement, this article systematically maps research from 243 studies published between 2007 and 2016. Research within the corpus was predominantly undertaken within the United States and the United Kingdom, with only limited research undertaken in the Global South, and largely focused on the fields of Arts & Humanities, Education, and Natural Sciences, Mathematics & Statistics. Studies most often used quantitative methods, followed by mixed methods, with qualitative methods employed least often. Few studies provided a definition of student engagement, and fewer than half were guided by a theoretical framework. The courses investigated used blended learning and text-based tools (e.g. discussion forums) most often, with undergraduate students as the primary target group. Stemming from the use of educational technology, behavioural engagement was by far the most often identified dimension, followed by affective and cognitive engagement. This mapping article provides the grounds for further exploration into discipline-specific use of technology to foster student engagement.

Introduction

Over the past decade, the conceptualisation and measurement of 'student engagement' has received increasing attention from researchers, practitioners and policy makers alike. Seminal works such as Astin's (1999) theory of involvement, Fredricks, Blumenfeld, and Paris's (2004) conceptualisation of the three dimensions of student engagement (behavioural, emotional, cognitive), and sociocultural theories of engagement such as Kahu (2013) and Kahu and Nelson (2018), have done much to shape and refine our understanding of this complex phenomenon. However, criticism about the strength and depth of student engagement theorising remains (e.g. Boekaerts, 2016; Kahn, 2014; Zepke, 2018), the quality of which has had a direct impact on the rigour of subsequent research (Lawson & Lawson, 2013; Trowler, 2010), prompting calls for further synthesis (Azevedo, 2015; Eccles, 2016).

In parallel to this increased attention on student engagement, digital technology has become a central aspect of higher education, inherently affecting all aspects of the student experience (Barak, 2018; Henderson, Selwyn, & Aston, 2017; Selwyn, 2016). International recognition of the importance of ICT skills and digital literacy has been growing, alongside mounting recognition of its importance for active citizenship (Choi, Glassman, & Cristol, 2017; OECD, 2015a; Redecker, 2017) and the development of interdisciplinary and collaborative skills (Barak & Levenberg, 2016; Oliver & de St Jorre, 2018). Using technology has the potential to make teaching and learning processes more intensive (Kerres, 2013), improve student self-regulation and self-efficacy (Alioon & Delialioğlu, 2017; Bouta, Retalis, & Paraskeva, 2012), increase participation and involvement in courses as well as the wider university community (Junco, 2012; Salaber, 2014), and predict increased student engagement (Chen, Lambert, & Guidry, 2010; Rashid & Asghar, 2016). There is, however, no guarantee of active student engagement as a result of using technology (Kirkwood, 2009), with Tamim, Bernard, Borokhovski, Abrami, and Schmid's (2011) second-order meta-analysis finding only a small to moderate impact on student achievement across 40 years. Rather, careful planning, sound pedagogy and appropriate tools are vital (Englund, Olofsson, & Price, 2017; Koehler & Mishra, 2005; Popenici, 2013), as "technology can amplify great teaching, but great technology cannot replace poor teaching" (OECD, 2015b, p. 4).

Due to the nature of its complexity, educational technology research has struggled to find a common definition and terminology with which to talk about student engagement, which has resulted in inconsistency across the field. For example, whilst 77% of articles reviewed by Henrie, Halverson, and Graham (2015) operationalised engagement from a behavioural perspective, most of the articles did not have a clearly defined statement of engagement, which is no longer considered acceptable in student engagement research (Appleton, Christenson, & Furlong, 2008; Christenson, Reschly, & Wylie, 2012). Linked to this, educational technology research has lacked theoretical guidance (Al-Sakkaf, Omar, & Ahmad, 2019; Hew, Lan, Tang, Jia, & Lo, 2019; Lundin, Bergviken Rensfeldt, Hillman, Lantz-Andersson, & Peterson, 2018). A review of 44 randomly selected articles published in 2014 in the journals Educational Technology Research & Development and Computers & Education, for example, revealed that more than half had no guiding conceptual or theoretical framework (Antonenko, 2015), and only 13 out of 62 studies in a systematic review of flipped learning in engineering education reported theoretical grounding (Karabulut-Ilgu, Jaramillo Cherrez, & Jahren, 2018). Therefore, calls have been made for a greater understanding of the role that educational technology plays in affecting student engagement, in order to strengthen teaching practice and lead to improved outcomes for students (Castañeda & Selwyn, 2018; Krause & Coates, 2008; Nelson Laird & Kuh, 2005).

A reflection upon prior research that has been undertaken in the field is a necessary first step to engage in meaningful discussion on how to foster student engagement in the digital age. In support of this aim, this article provides a synthesis of student engagement theory research, and systematically maps empirical higher education research between 2007 and 2016 on student engagement in educational technology. Synthesising the vast body of literature on student engagement (for previous literature and systematic reviews, see Additional file  1 ), this article develops “a tentative theory” in the hopes of “plot[ting] the conceptual landscape…[and chart] possible routes to explore it” (Antonenko, 2015 , pp. 57–67) for researchers, practitioners, learning designers, administrators and policy makers. It then discusses student engagement against the background of educational technology research, exploring prior literature and systematic reviews that have been undertaken. The systematic review search method is then outlined, followed by the presentation and discussion of findings.

Literature review

What is student engagement?

Student engagement has been linked to improved achievement, persistence and retention (Finn, 2006; Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008), with disengagement having a profound effect on student learning outcomes and cognitive development (Ma, Han, Yang, & Cheng, 2015), and being a predictor of student dropout in both secondary school and higher education (Finn & Zimmer, 2012). Student engagement is a multifaceted and complex construct (Appleton et al., 2008; Ben-Eliyahu, Moore, Dorph, & Schunn, 2018), which some have called a 'meta-construct' (e.g. Fredricks et al., 2004; Kahu, 2013) and likened to blind men describing an elephant (Baron & Corbin, 2012; Eccles, 2016). There is ongoing disagreement about whether there are three components (e.g. Eccles, 2016)—affective/emotional, cognitive and behavioural—or whether there are four, with the recent suggested additions of agentic engagement (Reeve, 2012; Reeve & Tseng, 2011) and social engagement (Fredricks, Filsecker, & Lawson, 2016). There has also been confusion as to whether the terms 'engagement' and 'motivation' can and should be used interchangeably (Reschly & Christenson, 2012), especially when used by policy makers and institutions (Eccles & Wang, 2012). However, the prevalent understanding across the literature is that motivation is an antecedent to engagement; it is the intent and unobservable force that energises behaviour (Lim, 2004; Reeve, 2012; Reschly & Christenson, 2012), whereas student engagement is energy and effort in action, an observable manifestation (Appleton et al., 2008; Eccles & Wang, 2012; Kuh, 2009; Skinner & Pitzer, 2012), evidenced through a range of indicators.

Whilst it is widely accepted that no one definition exists that will satisfy all stakeholders (Solomonides, 2013), and no one project can be expected to examine every sub-construct of student engagement (Kahu, 2013), it is important for each research project to begin with a clear definition of its own understanding (Boekaerts, 2016). Therefore, in this project, student engagement is defined as follows:

Student engagement is the energy and effort that students employ within their learning community, observable via any number of behavioural, cognitive or affective indicators across a continuum. It is shaped by a range of structural and internal influences, including the complex interplay of relationships, learning activities and the learning environment. The more students are engaged and empowered within their learning community, the more likely they are to channel that energy back into their learning, leading to a range of short- and long-term outcomes that can likewise further fuel engagement.

Dimensions and indicators of student engagement

There are three widely accepted dimensions of student engagement: affective, cognitive and behavioural. Within each dimension there are several indicators of engagement (see Additional file 2), as well as of disengagement (see Additional file 2), which is now seen as a separate and distinct construct from engagement. It should be stated, however, that whilst these have been drawn from a range of literature, this is not a finite list, and it is recognised that students might experience these indicators on a continuum at varying times (Coates, 2007; Payne, 2017), depending on their valence (positive or negative) and activation (high or low) (Pekrun & Linnenbrink-Garcia, 2012). There has also been disagreement in terms of which dimension the indicators align with. For example, Järvelä, Järvenoja, Malmberg, Isohätälä, and Sobocinski (2016) argue that 'interaction' extends beyond behavioural engagement, covering both cognitive and/or emotional dimensions, as it involves collaboration between students, and Lawson and Lawson (2013) believe that 'effort' and 'persistence' are cognitive rather than behavioural constructs, as they "represent cognitive dispositions toward activity rather than an activity unto itself" (p. 465), which is represented in the table through the indicator 'stay on task/focus' (see Additional file 2). Further consideration of these disagreements is beyond the scope of this paper, however, and represents an area for future research.

Student engagement within educational technology research

The potential of educational technology to improve student engagement has long been recognised (Norris & Coutas, 2014); however, it is not merely a case of technology plus students equals engagement. Without careful planning and sound pedagogy, technology can promote disengagement and impede rather than help learning (Howard, Ma, & Yang, 2016; Popenici, 2013). Whilst this is still a young area, most of the research undertaken to gain insight into it has focused on undergraduate students (e.g. Henrie et al., 2015; Webb, Clough, O'Reilly, Wilmott, & Witham, 2017), with Chen et al. (2010) finding a positive relationship between the use of technology and student engagement, particularly earlier in university study. Research has also been predominantly STEM and medicine focused (e.g. Li, van der Spek, Feijs, Wang, & Hu, 2017; Nikou & Economides, 2018), with at least five literature or systematic reviews published in the last 5 years focused on medicine, and nursing in particular (see Additional file 3). This indicates that further synthesis is needed of research in other disciplines, such as Arts & Humanities and Education, as well as further investigation into whether research continues to focus on undergraduate students.

The five most researched technologies in Henrie et al.'s (2015) review were online discussion boards, general websites, learning management systems (LMS), general campus software and videos, as opposed to Schindler, Burkholder, Morad, and Marsh's (2017) literature review, which concentrated on social networking sites (Facebook and Twitter), digital games, wikis, web-conferencing software and blogs. Schindler et al. found that most of these technologies had a positive impact on multiple indicators of student engagement across the three dimensions of engagement, with digital games, web-conferencing software and Facebook the most effective. However, it must be noted that they considered only seven indicators of student engagement, an analysis that could be extended by considering further indicators. Other reviews that have found at least a small positive impact on student engagement include those focused on audience response systems (Hunsu, Adesope, & Bayly, 2016; Kay & LeSage, 2009), mobile learning (Kaliisa & Picard, 2017), and social media (Cheston, Flickinger, & Chisolm, 2013). Specific indicators of engagement that increased as a result of technology include interest and enjoyment (Li et al., 2017), improved confidence (Smith & Lambert, 2014) and attitudes (Nikou & Economides, 2018), as well as enhanced relationships with peers and teachers (e.g. Alrasheedi, Capretz, & Raza, 2015; Atmacasoy & Aksu, 2018).

Literature and systematic reviews focused on student engagement and technology do not always include information on where studies have been conducted. Out of 27 identified reviews (see Additional file 3), only 14 report the countries included, and two of these were explicitly focused on a specific region or country, namely Africa and Turkey. Most of the research has been conducted in the USA, followed by the UK, Taiwan, Australia and China. Table 1 depicts the three countries from which most studies originated in the respective reviews, and highlights a clear lack of research conducted within mainland Europe, South America and Africa. Whilst this could be due to the choice of databases in which the literature was searched for, it nevertheless highlights a substantial gap in the literature, and to that end, it will be interesting to see whether this review is able to substantiate or contradict these trends.

Research into student engagement and educational technology has predominantly used a quantitative methodology (see Additional file 3), with 11 literature and systematic reviews reporting that surveys, particularly self-report Likert-scale instruments, are the most used source of measurement (e.g. Henrie et al., 2015). Reviews that have included research using a range of methodologies have found only a limited number of studies employing qualitative methods (e.g. Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012; Kay & LeSage, 2009; Lundin et al., 2018). This has led to calls for further qualitative research exploring student engagement and technology, as well as for more rigorous research designs (e.g. Li et al., 2017; Nikou & Economides, 2018), including sampling strategies and data collection, in experimental studies in particular (Cheston et al., 2013; Connolly et al., 2012). However, not all reviews included information on the methodologies used. Crook (2019), in his recent editorial in the British Journal of Educational Technology, stated that research methodology is a "neglected topic" (p. 487) within educational technology research, and stressed its importance for conducting studies that delve deeper into phenomena (e.g. longitudinal studies).

Therefore, this article presents an initial "evidence map" (Miake-Lye, Hempel, Shanman, & Shekelle, 2016, p. 19) of systematically identified literature on student engagement and educational technology within higher education, undertaken through a systematic review, in order to address the issues raised by prior research and to identify research gaps. These issues include the disparity between fields of study and study levels researched, the geographical distribution of studies, the methodologies used, and the theoretical fuzziness surrounding student engagement. This article, however, is intended to provide an initial overview of the systematic review method employed, as well as an overview of the overall corpus. Further synthesis of possible correlations between student engagement and disengagement indicators and the co-occurrence of technology tools will be undertaken within field-of-study-specific articles (e.g. Bedenlier, 2020b; Bedenlier, 2020a), allowing more meaningful guidance on applying the findings in practice.

The following research questions guide this enquiry:

1. How do the studies in the sample ground student engagement and align with theory?

2. Which indicators of cognitive, behavioural and affective engagement were identified in studies where educational technology was used? Which indicators of student disengagement?

3. What are the learning scenarios, modes of delivery and educational technology tools employed in the studies?

Overview of the study

With the intent to systematically map empirical research on student engagement and educational technology in higher education, we conducted a systematic review. A systematic review is an explicitly and systematically conducted literature review that answers a specific question through applying a replicable search strategy, with studies then included or excluded based on explicit criteria (Gough, Oliver, & Thomas, 2012). Studies included for review are then coded and synthesised into findings that shed light on gaps, contradictions or inconsistencies in the literature, as well as providing guidance on applying findings in practice. This contribution maps the research corpus of 243 studies that were identified through a systematic search and ensuing random parameter-based sampling.

Search strategy and selection procedure

The initial inclusion criteria for the systematic review were peer-reviewed articles in the English language, empirically reporting on students and student engagement in higher education, and making use of educational technology. The search was limited to records between 1995 and 2016, chosen due to the implementation of the first Virtual Learning Environments and Learning Management Systems within higher education (see Bond, 2018). Articles were limited to those published in peer-reviewed journals, due to the rigorous process under which they are published and their trustworthiness in academia (Nicholas et al., 2015), although concerns within the scientific community about the peer-review process are acknowledged (e.g. Smith, 2006).

Discussion arose on how to approach the "hard-to-detect" (O'Mara-Eves et al., 2014, p. 51) concept of student engagement with regard to sensitivity versus precision (Brunton, Stansfield, & Thomas, 2012), particularly in light of engagement being Henrie et al.'s (2015) most important search term. The decision was made that the concept 'student engagement' would be identified from titles and abstracts at a later stage, during the screening process. In this way, articles that are indeed concerned with student engagement but use different terms to describe the concept would still be included. Given the nature of student engagement as a meta-construct (e.g. Appleton et al., 2008; Christenson et al., 2012; Kahu, 2013), limiting the search to articles including the term engagement might have missed important research on other elements of student engagement. Hence, we opted for recall over precision: according to Gough et al. (2012, p. 13), "electronic searching is imprecise and captures many studies that employ the same terms without sharing the same focus", or can disregard studies that analyse the construct but use different terms to describe it.

With this in mind, the search strategy to identify relevant studies was developed iteratively with support from the University Research Librarian. As outlined in O'Mara-Eves et al. (2014) as a standard approach, we used reviewer knowledge—in this case supported not only by reviewer knowledge but by certified expertise—and previous literature (e.g. Henrie et al., 2015; Kahu, 2013) to elicit concepts with potential importance under the topics student engagement, higher education and educational technology. The final search string (see Fig. 1) encompasses clusters of different educational technologies that were searched for separately, in order to avoid an overly long search string. It was decided not to include any brand names, e.g. Facebook, Twitter or Moodle, because it was again reasoned that in scientific publications the broader term (e.g. social media) would be used. The final search string was slightly adapted, e.g. the format required for truncations or wildcards, according to the settings of each database being used (Footnote 1).

Figure 1. Final search terms used in the systematic review.

Four databases (ERIC, Web of Science, Scopus and PsycINFO) were searched in July 2017, and three researchers and a student assistant screened the abstracts and titles of the retrieved references between August and November 2017, using EPPI Reviewer 4.0. An initial 77,508 references were retrieved, and with the elimination of duplicate records, 53,768 references remained (see Fig. 2). A first cursory screening of records revealed that older research was more concerned with technologies that are now considered outdated (e.g. overhead projectors, floppy disks). Therefore, we opted to adjust the period to include research published between 2007 and 2016, labelled as a phase of research and practice entitled 'online learning in the digital age' (Bond, 2018). Whilst we initially opted for recall over precision, the decision was then made to search for specific facets of the student engagement construct (e.g. deep learning, interest and persistence) within EPPI-Reviewer, in order to further refine the corpus. These adaptations left 18,068 remaining records.

Figure 2. Systematic review PRISMA flow chart (slightly modified after Brunton et al., 2012, p. 86; Moher, Liberati, Tetzlaff, & Altman, 2009, p. 8).

Four researchers screened the first 150 titles and abstracts, in order to iteratively establish a joint understanding of the inclusion criteria. The remaining references were distributed equally amongst the screening team, which resulted in the inclusion of 4152 potentially relevant articles. Given the large number of articles to screen on full text, and the time constraints of project-based, funded work, it was decided that a sample of articles would be drawn from this corpus for further analysis. With the intention of drawing a sample that estimates the population parameters within a predetermined error range, we used methods of sample size estimation from the social sciences (Kupper & Hafner, 1989), implemented with the R package MBESS (Kelley, Lai, Lai, & Suggests, 2018). Accepting a 5% error range, a proportion of one half and an alpha of 5%, 349 articles were sampled, and this sample was then stratified by publishing year, as student engagement has become much more prevalent (Zepke, 2018) and educational technology has become more differentiated within the last decade (Bond, 2018). Two researchers screened the first 100 articles on full text, reaching 88% agreement on inclusion/exclusion. The researchers then discussed the discrepancies and came to an agreement on the remaining 12%. It was decided that further comparison screening was needed to increase the level of reliability. After screening the sample on full text, 232 articles remained for data extraction, which contained 243 studies.
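The sample size reported above can be approximated with the standard formula for estimating a proportion, combined with a finite population correction. The study itself used the R package MBESS; the Python sketch below is an illustrative equivalent rather than the authors' code, and the exact formula and rounding conventions are assumptions, which is why it lands close to, rather than exactly on, the 349 articles reported.

```python
import math
from statistics import NormalDist

def sample_size(population: int, error: float = 0.05,
                proportion: float = 0.5, alpha: float = 0.05) -> int:
    """Sample size to estimate a proportion within +/- `error`,
    with a finite population correction applied."""
    z = NormalDist().inv_cdf(1 - alpha / 2)                    # two-sided critical value
    n0 = z ** 2 * proportion * (1 - proportion) / error ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))         # finite population correction

# 4152 potentially relevant articles; 5% error, proportion 0.5, alpha 5%
print(sample_size(4152))  # 352 with this formula, close to the 349 reported
```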

Data extraction process

In order to extract the article data, an extensive coding system was developed, including codes to extract information on the set-up and execution of each study (e.g. methodology, study sample) as well as information on the learning scenario, mode of delivery and educational technology used. Learning scenarios included broader pedagogies, such as social collaborative learning and self-determined learning, but also specific pedagogies such as flipped learning, given the increasing number of studies and interest in these approaches (e.g. Lundin et al., 2018). Specific examples of student engagement and/or disengagement were coded under cognitive, affective or behavioural (dis)engagement. The facets of student (dis)engagement were identified based on the literature review undertaken (see Additional file 2), and applied in this detailed manner not only to capture the overarching dimensions of the concept but also their diverse sub-meanings. New indicators also emerged during the coding process which had not initially been identified from the literature review, including 'confidence' and 'assuming responsibility'. The 243 studies were coded with this extensive code set, and any disagreements that occurred between the coders were reconciled (Footnote 2).

As over 50 individual educational technology applications and tools were identified in the 243 studies, in line with results found in other large-scale systematic reviews (e.g. Lai & Bower, 2019), concerns were raised over how the research team could meaningfully analyse and report the results. The decision was therefore made to employ Bower's (2016) typology of learning technologies (see Additional file 4), in order to channel the tools into groups that share the same characteristics or "structure of information" (Bower, 2016, p. 773). Whilst it is acknowledged that some of the technology could be classified into more than one type within the typology, e.g. wikis can be used for individual composition, for collaborative tasks, or for knowledge organisation and sharing, "the type of learning that results from the use of the tool is dependent on the task and the way people engage with it rather than the technology itself"; therefore "the typology is presented as descriptions of what each type of tool enables and example use cases rather than prescriptions of any particular pedagogical value system" (Bower, 2016, p. 774). For further elaboration on each category, please see Bower (2015).

Study characteristics

Geographical characteristics.

The systematic mapping reveals that the 243 studies were set in 33 different countries, whilst seven studies investigated settings in an international context and three studies did not indicate their country setting. In 2% of the studies, the country was allocated based on the authors' country of origin, where the authors came from the same country. The top five countries account for 158 studies (see Fig. 3), with 35.4% (n = 86) of studies conducted in the United States (US), 10.7% (n = 26) in the United Kingdom (UK), 7.8% (n = 19) in Australia, 7.4% (n = 18) in Taiwan, and 3.7% (n = 9) in China. Across the corpus, studies from countries with English as an official language total 59.7% of the entire sample, followed by East Asian countries, which in total account for 18.8% of the sample. With the exception of the UK, European countries are largely absent from the sample; only 7.3% of the articles originate from this region, with countries such as France, Belgium, Italy and Portugal having no studies, and countries such as Germany and the Netherlands having one each. Thus, with eight articles, Spain is the most prolific European country outside of the UK. The geographical distribution of study settings also clearly shows an almost complete absence of studies undertaken within African contexts, with five studies from South Africa and one from Tunisia. Studies from South-East Asia, the Middle East and South America are likewise low in number in this review. Whilst the global picture evokes an imbalance, this might be partially due to our search and sampling strategy, which focused on English-language journals indexed in four primarily Western-focused databases.

Figure 3. Percentage deviation from the average relative frequencies of the different data collection formats per country (≥ 3 articles). Note. NS = not stated; AUS = Australia; CAN = Canada; CHN = China; HKG = Hong Kong; inter = international; IRI = Iran; JAP = Japan; MYS = Malaysia; SGP = Singapore; ZAF = South Africa; KOR = South Korea; ESP = Spain; SWE = Sweden; TWN = Taiwan; TUR = Turkey; GBR = United Kingdom; USA = United States of America.

Methodological characteristics

Within this literature corpus, 103 studies (42%) employed quantitative methods, 84 (35%) mixed methods, and 56 (23%) qualitative methods. Relating these numbers back to the contributing countries, different preferences for and frequencies of the methods used become apparent (see Fig. 3). As a general tendency, mixed methods and qualitative research occur more often in Western countries, whereas quantitative research is the preferred method in East Asian countries. For example, studies originating from Australia employ mixed methods research 28% more often than average, whereas Singapore is far below average in mixed methods research, at 34.5% less than the other countries in the sample. In Taiwan, on the other hand, mixed methods studies are conducted 23.5% below average and qualitative research 6.4% less often than average, whereas quantitative research occurs 29.8% more often than average.

Amongst the qualitative studies, qualitative content analysis (n = 30) was the most frequently used analysis approach, followed by thematic analysis (n = 21) and grounded theory (n = 12). In many cases, however, the exact analysis approach was not reported (n = 37), could not be allocated to a specific classification (n = 22), or no method of analysis was identifiable (n = 11). Within studies using quantitative methods, mean comparison was used in 100 studies, frequency data was collected and analysed in 83 studies, and regression models were used in 40 studies. Furthermore, looking at the correlations between the different analysis approaches, only one significant correlation can be identified, between mean comparison and frequency data (−.246). Beyond that, correlations are small; for example, in only 14% of the studies are both mean comparisons and regression models employed.
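The correlations between analysis approaches are presumably computed over binary 'approach used' indicators per study, in which case Pearson's r reduces to the phi coefficient. A minimal sketch with invented indicator vectors, not the study's actual data:

```python
import numpy as np

# Hypothetical per-study indicators (1 = analysis approach used in that study)
mean_comparison = np.array([1, 1, 0, 1, 0, 1, 1, 0])
frequency_data  = np.array([0, 0, 1, 0, 1, 1, 0, 1])

# Pearson correlation of two binary variables equals the phi coefficient
r = np.corrcoef(mean_comparison, frequency_data)[0, 1]
print(round(r, 3))  # negative when the two approaches tend not to co-occur
```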

Study population characteristics

Research in the corpus focused on universities as the prime institution type (n = 191, 79%), followed by 24 (10%) non-specified institution types and colleges (n = 21, 8.2%) (see Fig. 4). Five studies (2%) included institutions classified as 'other', and two studies (0.8%) included both college and university students. The most frequently studied student population was undergraduate students (60%, n = 146), as opposed to 33 studies (14%) focused on postgraduate students (see Fig. 6). A combination of undergraduate and postgraduate students was the subject of interest in 23 studies (9%), with 41 studies (17%) not specifying the level of study of research participants.

Figure 4. Relative frequencies of study fields by country (countries with ≥ 3 articles). Note. Country abbreviations are as per Figure 3. A&H = Arts & Humanities; BA&L = Business, Administration and Law; EDU = Education; EM&C = Engineering, Manufacturing & Construction; H&W = Health & Welfare; ICT = Information & Communication Technologies; ID = interdisciplinary; NS,M&S = Natural Science, Mathematics & Statistics; NS = Not specified; SoS = Social Sciences, Journalism & Information.

Based on the UNESCO (2015) ISCED classification, eight broad study fields are covered in the sample, with Arts & Humanities (42 studies), Education (42 studies), and Natural Sciences, Mathematics & Statistics (37 studies) the top three study fields, followed by Health & Welfare (30 studies), Social Sciences, Journalism & Information (22 studies), Business, Administration & Law (19 studies), Information & Communication Technologies (13 studies), Engineering, Manufacturing & Construction (11 studies), and another 26 studies of an interdisciplinary character. One study did not specify a field of study.

An expected value was calculated for how studies in each discipline should be distributed across countries. The actual deviation from this value showed that several Asian countries are home to more articles in the field of Arts & Humanities than expected: Japan with 3.3 more articles, China with 5.4 and Taiwan with 5.9. Furthermore, internationally located research shows 2.3 more interdisciplinary studies than expected, whereas studies in the Social Sciences occur more often than expected in the UK (5.7 more articles) and Australia (3.3 articles) but less often than expected across all other countries. Interestingly, the USA has 9.9 fewer studies in Arts & Humanities than expected but 5.6 more articles than expected in the Natural Sciences.
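The expectancy values described here can be computed as in a standard contingency-table analysis: the expected count for a country-field cell is its row total times its column total, divided by the grand total. A minimal sketch, using invented counts since the per-cell data are not reproduced in this article:

```python
import numpy as np

# Hypothetical studies per country (rows) and per field of study (columns)
observed = np.array([
    [12,  9, 14],
    [ 4,  6,  3],
    [ 8,  2,  5],
])

row_totals = observed.sum(axis=1, keepdims=True)   # studies per country
col_totals = observed.sum(axis=0, keepdims=True)   # studies per field
grand_total = observed.sum()

# Expected counts if fields were distributed identically across countries
expected = row_totals @ col_totals / grand_total
deviation = observed - expected   # positive = more studies than expected

print(np.round(deviation, 1))
```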

Question 1: How do the studies in the sample ground student engagement and align with theory?

Defining student engagement.

It is striking that almost all of the studies (n = 225, 93%) in this corpus lack a definition of student engagement, with only 18 (7%) articles attempting to define the concept. However, this is not too surprising, as the search strategy was set up on the assumption that researchers investigating student engagement (dimensions and indicators) would not necessarily label it as such. When developing their definitions, authors in these 18 studies referenced 22 different sources, with the work of Kuh and colleagues (e.g. Hu & Kuh, 2002; Kuh, 2001; Kuh et al., 2006), as well as Astin (1984), the only authors referred to more than once. The most popular definition of student engagement within these studies was that of active participation and involvement in learning and university life (e.g. Bolden & Nahachewsky, 2015; Fukuzawa & Boyd, 2016), which was also found by Joksimović et al. (2018) in their review of MOOC research. Interaction, especially between peers and with faculty, was the next most prevalent definition (e.g. Andrew, Ewens, & Maslin-Prothero, 2015; Bigatel & Williams, 2015). Time and effort was given as a definition in four studies (Gleason, 2012; Hatzipanagos & Code, 2016; Price, Richardson, & Jelfs, 2007; Sun & Rueda, 2012), with expending physical and psychological energy (Ivala & Gachago, 2012) another. This variance in definitions and sources reflects the ongoing complexity of the construct (Zepke, 2018), and serves to reinforce the need for a clearer understanding across the field (Schindler et al., 2017).

Theoretical underpinnings

Reflecting findings from other systematic and literature reviews on the topic (Abdool, Nirula, Bonato, Rajji, & Silver, 2017; Hunsu et al., 2016; Kaliisa & Picard, 2017; Lundin et al., 2018), 59% (n = 143) of studies did not employ a theoretical model in their research. Of the 41% (n = 100) that did, 18 studies drew on social constructivism, followed by the Community of Inquiry model (n = 8), Sociocultural Learning Theory (n = 5), and Community of Practice models (n = 4). These findings also reflect the state of the field in general (Al-Sakkaf et al., 2019; Bond, 2019b; Hennessy, Girvan, Mavrikis, Price, & Winters, 2018).

Another interesting finding is that whilst 144 studies (59%) provided research questions, 99 studies (41%) did not. Although it is recognised that not all studies have research questions (Bryman, 2007), or only develop them throughout the research process, as with grounded theory (Glaser & Strauss, 1967), a surprising number of quantitative studies (36%, n = 37) did not have research questions. This reflects the lack of theoretical guidance, as 30 of these 37 studies also did not draw on a theoretical or conceptual framework.

Question 2: Which indicators of cognitive, behavioural and affective engagement were identified in studies where educational technology was used? Which indicators of student disengagement?

Student engagement indicators.

Within the corpus, the behavioural engagement dimension was documented in some form in 209 studies (86%), whereas the affective dimension was reported in 163 studies (67%) and the cognitive dimension in only 136 studies (56%). However, the ten most often identified student engagement indicators across the studies overall (see Table 2) were evenly distributed over all three dimensions (see Table 3). The indicators participation/interaction/involvement, achievement and positive interactions with peers and teachers each appear in at least 100 studies, almost double the count of the next most frequent student engagement indicator.

Across the 243 studies in the corpus, 117 (48%) showed all three dimensions of affective, cognitive and behavioural student engagement (e.g. Szabo & Schwartz, 2011), including six studies that used established student engagement questionnaires, such as the NSSE (e.g. Delialioglu, 2012), or self-developed questionnaires addressing these three dimensions. Another 54 studies (22%) displayed at least two student engagement dimensions (e.g. Hatzipanagos & Code, 2016), including six questionnaire studies. Only one student engagement dimension was exhibited in 71 studies (29%) (e.g. Vural, 2013).

Student disengagement indicators

Indicators of student disengagement (see Table 4) were identified considerably less often across the corpus. This could be explained by the purpose of the studies being primarily to address or measure positive engagement, but it could also reflect a form of self-selection or publication bias, with studies reporting negative results published less frequently. The three disengagement indicators most often identified were frustration (n = 33, 14%) (e.g. Ikpeze, 2007), opposition/rejection (n = 20, 8%) (e.g. Smidt, Bunk, McGrory, Li, & Gatenby, 2014), and disappointment (e.g. Granberg, 2010) as well as other affective disengagement (n = 18, 7% each).

Technology tool typology and engagement/disengagement indicators

Across the 243 studies, over 50 individual educational technology tools were employed. The top five most frequently researched tools were LMS (n = 89), discussion forums (n = 80), videos (n = 44), recorded lectures (n = 25), and chat (n = 24). Following a slightly modified version of Bower's (2016) educational tools typology, 17 broad categories of tools were identified (see Additional file 4 for the classification, and section 3.2 for further information). The frequency with which tools from the respective groups were employed in studies varied considerably (see Additional file 4), with the top five categories being text-based tools (n = 138), followed by knowledge organisation & sharing tools (n = 104), multimodal production tools (n = 89), assessment tools (n = 65) and website creation tools (n = 29).

Figure 5 shows what percentage of each engagement dimension (e.g. affective engagement or cognitive disengagement) was fostered through each specific technology type. Given the results in 4.2.1 on student engagement, it was somewhat unsurprising to see text-based tools, knowledge organisation & sharing tools, and multimodal production tools having the highest proportions of affective, behavioural and cognitive engagement. For example, affective engagement was identified in 163 studies, with 63% of these studies using text-based tools (e.g. Bulu & Yildirim, 2008), and cognitive engagement was identified in 136 studies, with 47% of those using knowledge organisation & sharing tools (e.g. Shonfeld & Ronen, 2015). However, further analysis of studies employing discussion forums (a text-based tool) revealed that, whilst the top affective and behavioural engagement indicators were found in almost two-thirds of studies (see Additional file 5), there was a substantial gap between these and the next most prevalent engagement indicator, with the same pattern (and indicators) emerging for wikis. This represents an area for future research.

Figure 5. Engagement and disengagement by tool typology. Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning; A&H = Arts & Humanities; BA&L = Business, Administration and Law; EDU = Education; EM&C = Engineering, Manufacturing & Construction; H&W = Health & Welfare; ICT = Information & Communication Technologies; ID = interdisciplinary; NS,M&S = Natural Science, Mathematics & Statistics; NS = Not specified; SoS = Social Sciences, Journalism & Information.

Interestingly, studies using website creation tools reported more disengagement than engagement indicators across all three domains (see Fig. 5), with studies using assessment tools and social networking tools also reporting increased instances of disengagement across two domains (affective and cognitive, and behavioural and cognitive, respectively). Twenty-three of the studies (79%) using website creation tools used blogs, with students showing, for example, disinterest in the topics chosen (e.g. Sullivan & Longnecker, 2014), anxiety over their lack of blogging knowledge and skills (e.g. Mansouri & Piki, 2016), and continued avoidance of blogs in some cases, despite introductory training (e.g. Keiller & Inglis-Jassiem, 2015). In studies where assessment tools were used, students found timed assessments stressful, particularly when trying to complete complex mathematical solutions (e.g. Gupta, 2009), as well as quizzes given at the end of lectures, with some students preferring time to take up the content first (e.g. DePaolo & Wilkinson, 2014). Disengagement in studies where social networking tools were used indicated that some students found it difficult to express themselves in short posts (e.g. Cook & Bissonnette, 2016), that conversations lacked authenticity (e.g. Arnold & Paulus, 2010), and that some did not want to mix personal and academic spaces (e.g. Ivala & Gachago, 2012).

Question 3: What are the learning scenarios, modes of delivery and educational technology tools employed in the studies?

Learning scenarios

Social-collaborative learning (SCL) was the scenario most often employed, in 58.4% of the sample (n = 142), followed by self-directed learning (SDL) in 43.2% of studies (n = 105) and game-based learning (GBL) in 5.8% (n = 14) (see Fig. 6). Studies coded as SCL included those exploring social learning (Bandura, 1971) and social constructivist approaches (Vygotsky, 1978). Personal learning environments (PLE) were found in 2.9% of studies, 1.3% of studies used other scenarios (n = 3), and another 13.2% did not specify their learning scenario (n = 32). It is noteworthy that in 45% of the possible cases for employing SDL scenarios, SCL was also used. Other learning scenarios were likewise mostly used in combination with SCL and SDL. Given the rising number of higher education studies exploring flipped learning (Lundin et al., 2018), studies exploring this approach were also specifically coded (3%, n = 7).

Fig. 6. Co-occurrence of learning scenarios across the sample (n = 243). Note. SDL = self-directed learning; SCL = social collaborative learning; GBL = game-based learning; PLE = personal learning environments; other = other learning scenario

Modes of delivery

In 84% of studies (n = 204), a single mode of delivery was used, with blended learning the most researched (109 studies), followed by distance education (72 studies) and face-to-face instruction (55 studies). Of the remaining 39 studies, 12 did not indicate their mode of delivery, whilst the other 27 combined or compared modes of delivery, e.g., comparing face-to-face courses with blended learning, such as the study on using iPads in undergraduate nursing education by Davies (2014).

Educational technology tools investigated

Most studies in this corpus (55%) used technology asynchronously, with 12% of studies researching synchronous tools and 18% using both asynchronous and synchronous tools. Given this overall reliance on asynchronous technology, it is notable that in face-to-face contexts the proportion of studies using synchronous tools (31%) almost matches that using asynchronous tools (41%), whilst it is surprisingly low within distance education studies (7%).

Tool categories were often used in combination, with text-based tools the category most frequently combined with other technology types (see Fig. 7): they were used, for example, in 60% of all possible multimodal production tool cases, 69% of all possible synchronous collaboration tool cases, 72% of all possible knowledge organisation & sharing tool cases, a striking 89% of all possible learning software cases, and 100% of all possible MOOC cases. By contrast, text-based tools were never used in combination with games or data analysis tools. Studies using gaming tools, however, also employed assessment tools in 67% of possible cases. Assessment tools nevertheless constitute something of a special case where website creation tools are concerned, with only 7% of possible cases employing assessment tools.
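To make the "possible cases" arithmetic concrete, the following is a minimal sketch in Python of how such conditional co-occurrence percentages can be derived from a coded study-by-tool matrix. The data, column names and helper function are hypothetical illustrations rather than the review's actual coding sheet; "possible cases" for a tool is read here as the set of studies coded as using that tool at all.

```python
import pandas as pd

# Hypothetical boolean study-by-tool matrix: one row per coded study,
# one column per tool category (True = the study used that tool type).
studies = pd.DataFrame({
    "text_based":        [True, True, False, True, True],
    "multimodal_prod":   [True, False, True, False, False],
    "learning_software": [True, False, False, False, True],
})

def co_occurrence_pct(df: pd.DataFrame, tool_a: str, tool_b: str) -> float:
    """Percentage of 'possible cases' for tool_b (studies coded as using
    tool_b) in which tool_a was also used."""
    possible = df[df[tool_b]]              # all studies coded with tool_b
    if possible.empty:
        return float("nan")                # no possible cases to report on
    return 100 * possible[tool_a].mean()   # share of those also using tool_a

# E.g., the share of multimodal production tool studies that also used
# text-based tools (here: 1 of 2 hypothetical studies = 50%).
print(co_occurrence_pct(studies, "text_based", "multimodal_prod"))  # 50.0
```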

Fig. 7. Co-occurrence of tools across the sample (n = 243). Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

In order to gain further understanding of how educational technology was used, we examined how often a combination of two variables would be expected to occur in the sample and how often it actually occurred, with deviations described as either 'more than' or 'less than' the expected value. This provides further insight into potential gaps in the literature, which can inform future research. For example, an analysis of educational technology tool usage amongst study populations (see Fig. 8) reveals that 5.0 more studies than expected looked at knowledge organisation & sharing tools for graduate students, but 5.0 fewer studies than expected investigated assessment tools for this group. By contrast, 5.0 more studies than expected researched assessment tools for unspecified study levels, and 4.3 fewer studies than expected employed knowledge organisation & sharing tools for undergraduate students.
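As a concrete illustration of this observed-versus-expected comparison, the sketch below computes expected cell counts under independence (row total × column total / grand total, as in a standard chi-square contingency analysis) and reports the deviations. The counts and labels are hypothetical, not the review's data.

```python
import numpy as np

# Hypothetical contingency table of study counts:
# rows = tool categories; columns = undergraduate, graduate, not specified.
observed = np.array([
    [60, 30, 14],   # knowledge organisation & sharing tools
    [45, 10, 10],   # assessment tools
])

row_totals = observed.sum(axis=1, keepdims=True)   # per-tool totals
col_totals = observed.sum(axis=0, keepdims=True)   # per-level totals
grand_total = observed.sum()

# Expected count for each cell if tool use were independent of study level.
expected = row_totals * col_totals / grand_total

# Positive values = "more studies than expected"; negative = "fewer".
deviation = observed - expected
print(np.round(deviation, 1))
```

Summing the squared deviations divided by the expected counts would yield the familiar chi-square statistic; reporting the raw deviations, as done here, has the advantage of being directly interpretable as study counts.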

Fig. 8. Relative frequency of educational technology tools used according to study level. Note. Abbreviations are explained in Fig. 7

Educational technology tools were also used differently from the expected pattern within various fields of study (see Fig. 9), most obviously for the top five tools, but also for virtual worlds, found in 5.8 more studies in Health & Welfare than expected, and learning software, used in 6.4 more studies in Arts & Humanities than expected. In all other disciplines, learning software was used less often than expected. Text-based tools were used more often than expected in fields of study that are already text-intensive, including Arts & Humanities, Education, Business, Administration & Law, and the Social Sciences, but less often than expected in fields such as Engineering, Health & Welfare, and Natural Sciences, Mathematics & Statistics. Multimodal production tools were used more often than expected only in Health & Welfare, ICT and the Natural Sciences, and less often than expected across all other disciplines. Assessment tools deviated most clearly, with 11.9 more studies in Natural Sciences, Mathematics & Statistics than expected, but 5.2 fewer studies in each of Education and Arts & Humanities.

Fig. 9. Relative frequency of educational technology tools used according to field of study. Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

With regard to mode of delivery and the educational technology tools used, it is interesting to see that, of the top five tools, all except assessment tools were used in face-to-face instruction less often than expected (see Fig. 10), ranging from 1.6 fewer studies for website creation tools to 14.5 fewer for knowledge organisation & sharing tools. Assessment tools, however, were used in 3.3 more studies than expected in face-to-face settings, but (moderately) less often than expected in blended learning and distance education formats. Text-based tools, multimodal production tools and knowledge organisation & sharing tools were employed more often than expected in blended and distance learning, most obviously with 13.1 more studies on text-based tools and 8.2 more studies on knowledge organisation & sharing tools in distance education. Contrary to what one would perhaps expect, social networking tools were used in 4.2 fewer studies than expected for this mode of delivery.

Fig. 10. Relative frequency of educational technology tools used according to mode of delivery. Note. Tool abbreviations as per Fig. 7. BL = Blended learning; DE = Distance education; F2F = Face-to-face; NS = Not stated

The findings of this study confirm those of previous research, with the most prolific countries being the US, UK, Australia, Taiwan and China. This is rather representative of the field, with an analysis of instructional design and technology research from 2007 to 2017 listing the most productive countries as the US, Taiwan, UK, Australia and Turkey (Bodily, Leary, & West, 2019). Likewise, an analysis of 40 years of research in Computers & Education (CAE) found that the US, UK and Taiwan accounted for 49.9% of all publications (Bond, 2018). By contrast, a lack of African research was apparent in this review, which is also evident in educational technology research in top-tier peer-reviewed journals, with only 4% of articles published in the British Journal of Educational Technology (BJET) in the past decade (Bond, 2019b) and 2% of articles in the Australasian Journal of Educational Technology (AJET) (Bond, 2018) hailing from Africa. Similar results were also found in previous literature and systematic reviews (see Table 1), which again raises questions about literature search and inclusion strategies; these will be discussed further in the limitations section.

Whilst other reviews of educational technology and student engagement have found studies to be largely STEM-focused (Boyle et al., 2016; Li et al., 2017; Lundin et al., 2018; Nikou & Economides, 2018), this corpus features a more balanced scope of research, with the fields of Arts & Humanities (42 studies, 17.3%) and Education (42 studies, 17.3%) together constituting roughly one third of all studies in the corpus, and Natural Sciences, Mathematics & Statistics nevertheless ranking third with 38 studies (15.6%). Beyond these three fields, further research is needed within underrepresented fields of study, in order to gain more comprehensive insights into the usage of educational technology tools (Kay & LeSage, 2009; Nikou & Economides, 2018).

Results of the systematic map further confirm the focus that prior educational technology research has placed on undergraduate students as the target group and participants in technology-enhanced learning settings (e.g., Cheston et al., 2013; Henrie et al., 2015). With an overwhelming 146 studies researching undergraduate students, compared to 33 studies on graduate students and 23 studies investigating both study levels, this also indicates that further investigation into the graduate student experience is needed. Furthermore, the fact that 41 studies do not report the study level of their participants is interesting, albeit problematic, as implications cannot easily be drawn for one's own specific teaching context if the target group under investigation is not clearly identified. More precise reporting of participants' details, as well as specification of the study context (country, institution and study level, to name a few), is needed to transfer and apply study results to practice, and thus to understand why some interventions succeed and others do not.

In line with other studies (e.g., Henrie et al., 2015), this review has also demonstrated that student engagement remains an under-theorised concept that is often considered only fragmentarily in research. Whilst studies in this review have often focused on isolated aspects of student engagement, their results are nevertheless interesting and valuable. However, it is important to relate these individual facets to the larger framework of student engagement, by considering how they are connected and linked to each other. This is especially helpful when integrating research findings into practice, given that student engagement and disengagement are rarely one-dimensional; it is not enough to focus on one aspect of engagement without also looking at the aspects adjacent to it (Pekrun & Linnenbrink-Garcia, 2012). It is therefore vital that researchers develop and refine an understanding of student engagement, and make this explicit in their research (Appleton et al., 2008; Christenson et al., 2012).

Reflective of current conversations in the field of educational technology (Bond, 2019b; Castañeda & Selwyn, 2018; Hew et al., 2019), as well as of other reviews (Abdool et al., 2017; Hunsu et al., 2016; Kaliisa & Picard, 2017; Lundin et al., 2018), a substantial number of studies in this corpus did not have any theoretical underpinnings. Kaliisa and Picard (2017) argue that, without theory, research can result in disorganised accounts and issues with interpreting data, with research effectively "sit[ting] in a void if it's not theoretically connected" (Kara, 2017, p. 56). Therefore, framing research in educational technology on a stronger theoretical basis can assist with locating the "field's disciplinary alignment" (Crook, 2019, p. 486) and further drive conversations forward.

The application of methods in this corpus was interesting in two ways. First, it is noticeable that quantitative studies are prevalent across the 243 articles in the sample. The number of studies employing qualitative research methods was comparatively low (56 studies, as opposed to 84 mixed-methods studies and 103 quantitative studies). This is also reflected in the educational technology field at large, with a review of articles published in BJET and Educational Technology Research & Development (ETR&D) from 2002 to 2014 revealing that 40% of articles used quantitative methods, 26% qualitative and 13% mixed (Baydas, Kucuk, Yilmaz, Aydemir, & Goktas, 2015); likewise, a review of educational technology research from Turkey from 1990 to 2011 revealed that 53% of articles used quantitative methods, 22% qualitative and 10% mixed methods (Kucuk, Aydemir, Yildirim, Arpacik, & Goktas, 2013). Quantitative studies primarily show whether or not an intervention has worked when applied to, for example, a group of students in a certain setting, as in the study on the effect of mobile apps on student performance in engineering education by Jou, Lin, and Tsai (2016); however, not all student engagement indicators can actually be measured in this way. The lower numbers of affective and cognitive engagement found in the studies in the corpus reflect a wider call to the field to increase research on these two domains (Henrie et al., 2015; Joksimović et al., 2018; O'Flaherty & Phillips, 2015; Schindler et al., 2017). Whilst these two are arguably more difficult to measure than behavioural engagement, the use of more rigorous and accurate surveys could be one possibility, as they can "capture unobservable aspects" (Henrie et al., 2015, p. 45) such as student feelings and information about the cognitive strategies they employ (Finn & Zimmer, 2012). However, surveys are often lengthy and onerous, or subject to the limitations of self-selection.

Whereas low numbers of qualitative studies researching student engagement and educational technology have previously been identified in other student engagement and technology reviews (Connolly et al., 2012; Kay & LeSage, 2009; Lundin et al., 2018), it is studies like that by Lopera Medina (2014) in this sample which reveal how people perceive the educational experience and the actual how of the process. Therefore, more qualitative and ethnographic measures should also be employed, such as student observations with thick descriptions, which can help shed light on the complexity of teaching and learning environments (Fredricks et al., 2004; Heflin, Shewmaker, & Nguyen, 2017). Conducting observations can be costly, however, in both time and money, so this is suggested in combination with computerised learning analytics data, which can provide measurable, objective and timely insight into how certain manifestations of engagement change over time (Henrie et al., 2015; Ma et al., 2015).

Whereas other results of this review have confirmed previous findings in the field, the technology tools used in the studies, considered in relation to student engagement, deviate from them. Whilst Henrie et al. (2015) found that the most frequently researched tools were discussion forums, general websites, LMS, general campus software and videos, the studies here focused predominantly on LMS, discussion forums, videos, recorded lectures and chat. Furthermore, whilst Schindler et al. (2017) found that digital games, web-conferencing software and Facebook were the most effective tools at enhancing student engagement, this review found that it was rather text-based tools, knowledge organisation & sharing tools, and multimodal production tools.

Limitations

During the execution of this systematic review, we tried to adhere to the method as rigorously as possible. However, several challenges were encountered, some of which are addressed and discussed in another publication (Bedenlier, 2020b), resulting in limitations to this study. Four large, general educational research databases were searched, which are international in scope. However, by applying the criterion of articles published in English, research published on this topic in other languages was not included in this review. The same applies to research documented in, for example, grey literature, book chapters or monographs, or articles from journals that are not indexed in the four databases searched. Another limitation is that only research published within the period 2007–2016 was investigated. Whilst we are cognisant of this restriction, we also think that the technological advances and the implications to be drawn from this time-frame relate more meaningfully to the current situation than would have been the case for technologies used in the 1990s (see Bond, 2019b). The sampling strategy also most likely accounts for the low number of studies from certain countries, e.g. in South America and Africa.

Studies included in this review represent various academic fields, and they also vary in the rigour with which they were conducted. Harden and Gough (2012) stress that the appraisal of quality and relevance of studies "ensure[s] that only the most appropriate, trustworthy and relevant studies are used to develop the conclusions of the review" (p. 154). We therefore included peer review as a formal inclusion criterion from the beginning, reasoning that such studies met a baseline of quality applicable to published research in a specific field; otherwise, they would not have been accepted for publication by the respective community. Finally, whilst the studies were diligently read and coded, and disagreements discussed and reconciled, the human flaw of having overlooked or misinterpreted information provided in individual articles cannot be fully excluded.

Finally, the results presented here provide an initial window into the overall body of research identified during the search; further research is being undertaken to provide deeper insight into discipline-specific use of technology and the resulting student engagement, using subsets of this sample (Bedenlier, 2020a; Bond, M., Bedenlier, S., Buntins, K., Kerres, M., & Zawacki-Richter, O.: Facilitating student engagement through educational technology: A systematic review in the field of education, forthcoming).

Recommendations for future work and implications for practice

Whilst the evidence map presented in this article has confirmed previous research on the nexus of educational technology and student engagement, it has also elucidated a number of areas that further research is invited to address. Although these findings are similar to those of previous reviews, in order to understand student engagement more fully and comprehensively as a multi-faceted construct, it is not enough to focus only on indicators of engagement that can easily be measured; the more complex endeavour of uncovering and investigating those indicators that reside below the surface must also be undertaken. This includes the careful alignment of theory and methodological design, in order both to adequately analyse the phenomenon under investigation and to contribute to a soundly executed body of research within the field of educational technology. Further research is invited in particular into how educational technology affects cognitive and affective engagement, whilst considering how this fits within the broader sociocultural framework of engagement (Bond, 2019a). Further research is also invited into how educational technology affects student engagement within fields of study beyond Arts & Humanities, Education and Natural Sciences, Mathematics & Statistics, as well as within graduate-level courses. The use of more qualitative research methods is particularly encouraged.

The findings of this review suggest that research gaps exist for particular combinations of tools, study levels and modes of delivery. With respect to study level, the use of assessment tools with graduate students, as well as knowledge organisation & sharing tools with undergraduate students, are topics researched far less than expected. The use of text-based tools in Engineering, Health & Welfare and Natural Sciences, Mathematics & Statistics, as well as the use of multimodal production tools outside of these disciplines, are also areas for future research, as is the use of assessment tools in the fields of Education and Arts & Humanities in particular.

With 109 studies in this systematic review using a blended learning design, the argument that online distance education and traditional face-to-face education are becoming increasingly integrated with one another is confirmed. Whilst this indicates that many educators have made the move from face-to-face teaching to technology-enhanced learning, it also makes a case for further professional development, so that educators can apply these tools effectively within their own teaching contexts; this review indicates that further research is needed in particular into the use of social networking tools in online/distance education. Questions also need to be asked not only about why the number of published studies is low within certain countries and regions, but also about the underlying reasons for this. This entails questioning the conditions under which research is being conducted, potentially criticising the publication policies of major, Western-based journals, but also ultimately reflecting on one's own search strategy and research assumptions as a Western educator-researcher.

Based on the findings of this review, educators within higher education institutions are encouraged to use text-based tools, knowledge organisation and sharing tools, and multimodal production tools in particular and, whilst any technology can lead to disengagement if not employed effectively, to be mindful that website creation tools (blogs and ePortfolios), social networking tools and assessment tools were found to be more disengaging than engaging in this review. Educators are therefore encouraged to ensure that students receive sufficient and ongoing training for any new technology used, including those that might appear straightforward (e.g., blogs), and to recognise that students may require extra writing support. Discussion and blog topics should be interesting, allow student agency, and be authentic to students, including through the use of social media. Social networking tools that augment students' professional learning networks are particularly useful. Educators should also be aware, however, that some students do not want to mix their academic and personal lives, and so the decision to use certain social platforms could be made together with students.

Availability of data and materials

All data will be made publicly available, as part of the funding requirements, via https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn .

The detailed search strategy, including the modified search strings according to the individual databases, can be retrieved from https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn .

The full code set can be retrieved from the review protocol at https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn .

Abdool, P. S., Nirula, L., Bonato, S., Rajji, T. K., & Silver, I. L. (2017). Simulation in undergraduate psychiatry: Exploring the depth of learner engagement. Academic Psychiatry : the Journal of the American Association of Directors of Psychiatric Residency Training and the Association for Academic Psychiatry , 41 (2), 251–261. https://doi.org/10.1007/s40596-016-0633-9 .


Alioon, Y., & Delialioğlu, Ö. (2017). The effect of authentic m-learning activities on student engagement and motivation. British Journal of Educational Technology , 32 , 121. https://doi.org/10.1111/bjet.12559 .

Alrasheedi, M., Capretz, L. F., & Raza, A. (2015). A systematic review of the critical factors for success of mobile learning in higher education (university students’ perspective). Journal of Educational Computing Research , 52 (2), 257–276. https://doi.org/10.1177/0735633115571928 .

Al-Sakkaf, A., Omar, M., & Ahmad, M. (2019). A systematic literature review of student engagement in software visualization: A theoretical perspective. Computer Science Education , 29 (2–3), 283–309. https://doi.org/10.1080/08993408.2018.1564611 .

Andrew, L., Ewens, B., & Maslin-Prothero, S. (2015). Enhancing the online learning experience using virtual interactive classrooms. Australian Journal of Advanced Nursing , 32 (4), 22–31.


Antonenko, P. D. (2015). The instrumental value of conceptual frameworks in educational technology research. Educational Technology Research and Development , 63 (1), 53–71. https://doi.org/10.1007/s11423-014-9363-4 .

Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools , 45 (5), 369–386. https://doi.org/10.1002/pits.20303 .

Arnold, N., & Paulus, T. (2010). Using a social networking site for experiential learning: Appropriating, lurking, modeling and community building. Internet and Higher Education , 13 (4), 188–196. https://doi.org/10.1016/j.iheduc.2010.04.002 .

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Development , 25 (4), 297–308.

Astin, A. W. (1999). Student involvement: A developmental theory for higher education. Journal of College Student Development , 40 (5), 518–529. https://www.researchgate.net/publication/220017441 (Original work published July 1984).

Atmacasoy, A., & Aksu, M. (2018). Blended learning at pre-service teacher education in Turkey: A systematic review. Education and Information Technologies , 23 (6), 2399–2422. https://doi.org/10.1007/s10639-018-9723-5 .

Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist , 50 (1), 84–94. https://doi.org/10.1080/00461520.2015.1004069 .

Bandura, A. (1971). Social learning theory . New York: General Learning Press.

Barak, M. (2018). Are digital natives open to change? Examining flexible thinking and resistance to change. Computers & Education , 121 , 115–123. https://doi.org/10.1016/j.compedu.2018.01.016 .

Barak, M., & Levenberg, A. (2016). Flexible thinking in learning: An individual differences measure for learning in technology-enhanced environments. Computers & Education , 99 , 39–52. https://doi.org/10.1016/j.compedu.2016.04.003 .

Baron, P., & Corbin, L. (2012). Student engagement: Rhetoric and reality. Higher Education Research and Development , 31 (6), 759–772. https://doi.org/10.1080/07294360.2012.655711 .

Baydas, O., Kucuk, S., Yilmaz, R. M., Aydemir, M., & Goktas, Y. (2015). Educational technology research trends from 2002 to 2014. Scientometrics , 105 (1), 709–725. https://doi.org/10.1007/s11192-015-1693-4 .

Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., & Kerres, M. (2020a). Facilitating student engagement through educational technology in higher education: A systematic review in the field of arts & humanities. Australasian Journal of Educational Technology , 36 (4), 27–47. https://doi.org/10.14742/ajet.5477 .

Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., & Kerres, M. (2020b). Learning by Doing? Reflections on Conducting a Systematic Review in the Field of Educational Technology. In O. Zawacki-Richter, M. Kerres, S. Bedenlier, M. Bond, & K. Buntins (Eds.), Systematic Reviews in Educational Research (Vol. 45 , pp. 111–127). Wiesbaden: Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-658-27602-7_7 .

Ben-Eliyahu, A., Moore, D., Dorph, R., & Schunn, C. D. (2018). Investigating the multidimensionality of engagement: Affective, behavioral, and cognitive engagement across science activities and contexts. Contemporary Educational Psychology , 53 , 87–105. https://doi.org/10.1016/j.cedpsych.2018.01.002 .

Betihavas, V., Bridgman, H., Kornhaber, R., & Cross, M. (2016). The evidence for ‘flipping out’: A systematic review of the flipped classroom in nursing education. Nurse Education Today , 38 , 15–21. https://doi.org/10.1016/j.nedt.2015.12.010 .

Bigatel, P., & Williams, V. (2015). Measuring student engagement in an online program. Online Journal of Distance Learning Administration , 18 (2), 9.

Bodily, R., Leary, H., & West, R. E. (2019). Research trends in instructional design and technology journals. British Journal of Educational Technology , 50 (1), 64–79. https://doi.org/10.1111/bjet.12712 .

Boekaerts, M. (2016). Engagement as an inherent aspect of the learning process. Learning and Instruction , 43 , 76–83. https://doi.org/10.1016/j.learninstruc.2016.02.001 .

Bolden, B., & Nahachewsky, J. (2015). Podcast creation as transformative music engagement. Music Education Research , 17 (1), 17–33. https://doi.org/10.1080/14613808.2014.969219 .

Bond, M. (2018). Helping doctoral students crack the publication code: An evaluation and content analysis of the Australasian Journal of Educational Technology. Australasian Journal of Educational Technology , 34 (5), 168–183. https://doi.org/10.14742/ajet.4363 .

Bond, M., & Bedenlier, S. (2019a). Facilitating Student Engagement Through Educational Technology: Towards a Conceptual Framework. Journal of Interactive Media in Education , 2019 (1), 1-14. https://doi.org/10.5334/jime.528 .

Bond, M., Zawacki-Richter, O., & Nichols, M. (2019b). Revisiting five decades of educational technology research: A content and authorship analysis of the British Journal of Educational Technology. British Journal of Educational Technology , 50 (1), 12–63. https://doi.org/10.1111/bjet.12730 .

Bouta, H., Retalis, S., & Paraskeva, F. (2012). Utilising a collaborative macro-script to enhance student engagement: A mixed method study in a 3D virtual environment. Computers & Education , 58 (1), 501–517. https://doi.org/10.1016/j.compedu.2011.08.031 .

Bower, M. (2015). A typology of web 2.0 learning technologies . EDUCAUSE Digital Library Retrieved 20 June 2019, from http://www.educause.edu/library/resources/typology-web-20-learning-technologies .

Bower, M. (2016). Deriving a typology of web 2.0 learning technologies. British Journal of Educational Technology , 47 (4), 763–777. https://doi.org/10.1111/bjet.12344 .

Boyle, E. A., Connolly, T. M., Hainey, T., & Boyle, J. M. (2012). Engagement in digital entertainment games: A systematic review. Computers in Human Behavior , 28 (3), 771–780. https://doi.org/10.1016/j.chb.2011.11.020 .

Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., … Pereira, J. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education , 94 , 178–192. https://doi.org/10.1016/j.compedu.2015.11.003 .

Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education , 27 , 1–13. https://doi.org/10.1016/j.iheduc.2015.04.007 .

Brunton, G., Stansfield, C., & Thomas, J. (2012). Finding relevant studies. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews , (pp. 107–134). Los Angeles: Sage.

Bryman, A. (2007). The research question in social research: What is its role? International Journal of Social Research Methodology , 10 (1), 5–20. https://doi.org/10.1080/13645570600655282 .

Bulu, S. T., & Yildirim, Z. (2008). Communication behaviors and trust in collaborative online teams. Educational Technology & Society , 11 (1), 132–147.

Bundick, M., Quaglia, R., Corso, M., & Haywood, D. (2014). Promoting student engagement in the classroom. Teachers College Record , 116 (4) Retrieved from http://www.tcrecord.org/content.asp?contentid=17402 .

Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education , 15 (1), 211. https://doi.org/10.1186/s41239-018-0109-y .

Chen, P.-S. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: The impact of web-based learning technology on college student engagement. Computers & Education , 54 (4), 1222–1232. https://doi.org/10.1016/j.compedu.2009.11.008 .

Cheston, C. C., Flickinger, T. E., & Chisolm, M. S. (2013). Social media use in medical education: A systematic review. Academic Medicine : Journal of the Association of American Medical Colleges , 88 (6), 893–901. https://doi.org/10.1097/ACM.0b013e31828ffc23 .

Choi, M., Glassman, M., & Cristol, D. (2017). What it means to be a citizen in the internet age: Development of a reliable and valid digital citizenship scale. Computers & Education , 107 , 100–112. https://doi.org/10.1016/j.compedu.2017.01.002 .

Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.) (2012). Handbook of research on student engagement . Boston: Springer US.

Coates, H. (2007). A model of online and general campus-based student engagement. Assessment & Evaluation in Higher Education , 32 (2), 121–141. https://doi.org/10.1080/02602930600801878 .

Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education , 59 (2), 661–686. https://doi.org/10.1016/j.compedu.2012.03.004 .

Cook, M. P., & Bissonnette, J. D. (2016). Developing preservice teachers’ positionalities in 140 characters or less: Examining microblogging as dialogic space. Contemporary Issues in Technology and Teacher Education (CITE Journal) , 16 (2), 82–109.

Crompton, H., Burke, D., Gregory, K. H., & Gräbe, C. (2016). The use of mobile learning in science: A systematic review. Journal of Science Education and Technology , 25 (2), 149–160. https://doi.org/10.1007/s10956-015-9597-x .

Crook, C. (2019). The “British” voice of educational technology research: 50th birthday reflection. British Journal of Educational Technology , 50 (2), 485–489. https://doi.org/10.1111/bjet.12757 .

Davies, M. (2014). Using the apple iPad to facilitate student-led group work and seminar presentation. Nurse Education in Practice , 14 (4), 363–367. https://doi.org/10.1016/j.nepr.2014.01.006 .


Delialioglu, O. (2012). Student engagement in blended learning environments with lecture-based and problem-based instructional approaches. Educational Technology & Society , 15 (3), 310–322.

DePaolo, C. A., & Wilkinson, K. (2014). Recurrent online quizzes: Ubiquitous tools for promoting student presence, participation and performance. Interdisciplinary Journal of E-Learning and Learning Objects , 10 , 75–91 Retrieved from http://www.ijello.org/Volume10/IJELLOv10p075-091DePaolo0900.pdf .

Doherty, K., & Doherty, G. (2018). Engagement in HCI. ACM Computing Surveys , 51 (5), 1–39. https://doi.org/10.1145/3234149 .

Eccles, J. (2016). Engagement: Where to next? Learning and Instruction , 43 , 71–75. https://doi.org/10.1016/j.learninstruc.2016.02.003 .

Eccles, J., & Wang, M.-T. (2012). Part I commentary: So what is student engagement anyway? In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 133–145). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_6 .


Englund, C., Olofsson, A. D., & Price, L. (2017). Teaching with technology in higher education: Understanding conceptual change and development in practice. Higher Education Research and Development , 36 (1), 73–87. https://doi.org/10.1080/07294360.2016.1171300 .

Fabian, K., Topping, K. J., & Barron, I. G. (2016). Mobile technology and mathematics: Effects on students’ attitudes, engagement, and achievement. Journal of Computers in Education , 3 (1), 77–104. https://doi.org/10.1007/s40692-015-0048-8 .

Filsecker, M., & Kerres, M. (2014). Engagement as a volitional construct. Simulation & Gaming , 45 (4–5), 450–470. https://doi.org/10.1177/1046878114553569 .

Finn, J. (2006). The adult lives of at-risk students: The roles of attainment and engagement in high school (NCES 2006-328) . Washington, DC: U.S. Department of Education, National Center for Education Statistics Retrieved from website: https://nces.ed.gov/pubs2006/2006328.pdf .

Finn, J., & Zimmer, K. (2012). Student engagement: What is it? Why does it matter? In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 97–131). Boston: Springer US. https://doi.org/10.1007/978-1-4614-2018-7_5 .

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research , 74 (1), 59–109. https://doi.org/10.3102/00346543074001059 .

Fredricks, J. A., Filsecker, M., & Lawson, M. A. (2016). Student engagement, context, and adjustment: Addressing definitional, measurement, and methodological issues. Learning and Instruction , 43 , 1–4. https://doi.org/10.1016/j.learninstruc.2016.02.002 .

Fredricks, J. A., Wang, M.-T., Schall Linn, J., Hofkens, T. L., Sung, H., Parr, A., & Allerton, J. (2016). Using qualitative methods to develop a survey measure of math and science engagement. Learning and Instruction , 43 , 5–15. https://doi.org/10.1016/j.learninstruc.2016.01.009 .

Fukuzawa, S., & Boyd, C. (2016). Student engagement in a large classroom: Using technology to generate a hybridized problem-based learning experience in a large first year undergraduate class. Canadian Journal for the Scholarship of Teaching and Learning , 7 (1). https://doi.org/10.5206/cjsotl-rcacea.2016.1.7 .

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Chicago: Aldine.

Gleason, J. (2012). Using technology-assisted instruction and assessment to reduce the effect of class size on student outcomes in undergraduate mathematics courses. College Teaching , 60 (3), 87–94.

Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews . Los Angeles: Sage.

Granberg, C. (2010). Social software for reflective dialogue: Questions about reflection and dialogue in student Teachers’ blogs. Technology, Pedagogy and Education , 19 (3), 345–360. https://doi.org/10.1080/1475939X.2010.513766 .

Greenwood, L., & Kelly, C. (2019). A systematic literature review to explore how staff in schools describe how a sense of belonging is created for their pupils. Emotional and Behavioural Difficulties , 24 (1), 3–19. https://doi.org/10.1080/13632752.2018.1511113 .

Gupta, M. L. (2009). Using emerging technologies to promote student engagement and learning in agricultural mathematics. International Journal of Learning , 16 (10), 497–508. https://doi.org/10.18848/1447-9494/CGP/v16i10/46658 .

Harden, A., & Gough, D. (2012). Quality and relevance appraisal. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews , (pp. 153–178). London: Sage.

Hatzipanagos, S., & Code, J. (2016). Open badges in online learning environments: Peer feedback and formative assessment as an engagement intervention for promoting agency. Journal of Educational Multimedia and Hypermedia , 25 (2), 127–142.

Heflin, H., Shewmaker, J., & Nguyen, J. (2017). Impact of mobile technology on student attitudes, engagement, and learning. Computers & Education , 107 , 91–99. https://doi.org/10.1016/j.compedu.2017.01.006 .

Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education , 42 (8), 1567–1579. https://doi.org/10.1080/03075079.2015.1007946 .

Hennessy, S., Girvan, C., Mavrikis, M., Price, S., & Winters, N. (2018). Editorial. British Journal of Educational Technology , 49 (1), 3–5. https://doi.org/10.1111/bjet.12598 .

Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education , 90 , 36–53. https://doi.org/10.1016/j.compedu.2015.09.005 .

Hew, K. F., & Cheung, W. S. (2013). Use of web 2.0 technologies in K-12 and higher education: The search for evidence-based practice. Educational Research Review , 9 , 47–64. https://doi.org/10.1016/j.edurev.2012.08.001 .

Hew, K. F., Lan, M., Tang, Y., Jia, C., & Lo, C. K. (2019). Where is the “theory” within the field of educational technology research? British Journal of Educational Technology , 50 (3), 956–971. https://doi.org/10.1111/bjet.12770 .

Howard, S. K., Ma, J., & Yang, J. (2016). Student rules: Exploring patterns of students’ computer-efficacy and engagement with digital technologies in learning. Computers & Education , 101 , 29–42. https://doi.org/10.1016/j.compedu.2016.05.008 .

Hu, S., & Kuh, G. D. (2002). Being (dis)engaged in educationally purposeful activities: The influences of student and institutional characteristics. Research in Higher Education , 43 (5), 555–575. https://doi.org/10.1023/A:1020114231387 .

Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect. Computers & Education , 94 , 102–119. https://doi.org/10.1016/j.compedu.2015.11.013 .

Ikpeze, C. (2007). Small group collaboration in peer-led electronic discourse: An analysis of group dynamics and interactions involving Preservice and Inservice teachers. Journal of Technology and Teacher Education , 15 (3), 383–407.

Ivala, E., & Gachago, D. (2012). Social media for enhancing student engagement: The use of Facebook and blogs at a university of technology. South African Journal of Higher Education , 26 (1), 152–167.

Järvelä, S., Järvenoja, H., Malmberg, J., Isohätälä, J., & Sobocinski, M. (2016). How do types of interaction and phases of self-regulated learning set a stage for collaborative engagement? Learning and Instruction , 43 , 39–51. https://doi.org/10.1016/j.learninstruc.2016.01.005 .

Joksimović, S., Poquet, O., Kovanović, V., Dowell, N., Mills, C., Gašević, D., … Brooks, C. (2018). How do we model learning at scale? A systematic review of research on MOOCs. Review of Educational Research , 88 (1), 43–86. https://doi.org/10.3102/0034654317740335 .

Jou, M., Lin, Y.-T., & Tsai, H.-C. (2016). Mobile APP for motivation to learning: An engineering case. Interactive Learning Environments , 24 (8), 2048–2057. https://doi.org/10.1080/10494820.2015.1075136 .

Junco, R. (2012). The relationship between frequency of Facebook use, participation in Facebook activities, and student engagement. Computers & Education , 58 (1), 162–171. https://doi.org/10.1016/j.compedu.2011.08.004 .

Kahn, P. (2014). Theorising student engagement in higher education. British Educational Research Journal , 40 (6), 1005–1018. https://doi.org/10.1002/berj.3121 .

Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education , 38 (5), 758–773. https://doi.org/10.1080/03075079.2011.598505 .

Kahu, E. R., & Nelson, K. (2018). Student engagement in the educational interface: Understanding the mechanisms of student success. Higher Education Research and Development , 37 (1), 58–71. https://doi.org/10.1080/07294360.2017.1344197 .

Kaliisa, R., & Picard, M. (2017). A systematic review on mobile learning in higher education: The African perspective. The Turkish Online Journal of Educational Technology , 16 (1) Retrieved from https://files.eric.ed.gov/fulltext/EJ1124918.pdf .

Kara, H. (2017). Research and evaluation for busy students and practitioners: A time-saving guide , (2nd ed., ). Bristol: Policy Press.


Karabulut-Ilgu, A., Jaramillo Cherrez, N., & Jahren, C. T. (2018). A systematic review of research on the flipped learning method in engineering education: Flipped learning in engineering education. British Journal of Educational Technology , 49 (3), 398–411. https://doi.org/10.1111/bjet.12548 .

Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education , 53 (3), 819–827. https://doi.org/10.1016/j.compedu.2009.05.001 .

Keiller, L., & Inglis-Jassiem, G. (2015). A lesson in listening: Is the student voice heard in the rush to incorporate technology into health professions education? African Journal of Health Professions Education , 7 (1), 47–50. https://doi.org/10.7196/ajhpe.371 .

Kelley, K., & Lai, K. (2018). Package 'MBESS'. Retrieved from https://cran.r-project.org/web/packages/MBESS/MBESS.pdf

Kerres, M. (2013). Mediendidaktik. Konzeption und Entwicklung mediengestützter Lernangebote . München: Oldenbourg.

Kirkwood, A. (2009). E-learning: You don’t always get what you hope for. Technology, Pedagogy and Education , 18 (2), 107–121. https://doi.org/10.1080/14759390902992576 .

Koehler, M., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research , 32 (2), 131–152.

Krause, K.-L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education , 33 (5), 493–505. https://doi.org/10.1080/02602930701698892 .

Kucuk, S., Aydemir, M., Yildirim, G., Arpacik, O., & Goktas, Y. (2013). Educational technology research trends in Turkey from 1990 to 2011. Computers & Education , 68 , 42–50. https://doi.org/10.1016/j.compedu.2013.04.016 .

Kuh, G. D. (2001). The National Survey of student engagement: Conceptual framework and overview of psychometric properties . Bloomington: Indiana University Center for Postsecondary Research Retrieved from http://nsse.indiana.edu/2004_annual_report/pdf/2004_conceptual_framework.pdf .

Kuh, G. D. (2009). What student affairs professionals need to know about student engagement. Journal of College Student Development , 50 (6), 683–706. https://doi.org/10.1353/csd.0.0099 .

Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education, 79(5), 540–563. Retrieved from http://www.jstor.org/stable/25144692 .

Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2006). What matters to student success: A review of the literature. Washington, DC: National Postsecondary Education Cooperative.

Kupper, L. L., & Hafner, K. B. (1989). How appropriate are popular sample size formulas? The American Statistician , 43 (2), 101–105.

Lai, J. W. M., & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education , 133 , 27–42. https://doi.org/10.1016/j.compedu.2019.01.010 .

Lawson, M. A., & Lawson, H. A. (2013). New conceptual frameworks for student engagement research, policy, and practice. Review of Educational Research , 83 (3), 432–479. https://doi.org/10.3102/0034654313480891 .

Leach, L., & Zepke, N. (2011). Engaging students in learning: A review of a conceptual organiser. Higher Education Research and Development , 30 (2), 193–204. https://doi.org/10.1080/07294360.2010.509761 .

Li, J., van der Spek, E. D., Feijs, L., Wang, F., & Hu, J. (2017). Augmented reality games for learning: A literature review. In N. Streitz, & P. Markopoulos (Eds.), Lecture Notes in Computer Science. Distributed, Ambient and Pervasive Interactions , (vol. 10291, pp. 612–626). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-58697-7_46 .

Lim, C. (2004). Engaging learners in online learning environments. TechTrends , 48 (4), 16–23 Retrieved from https://link.springer.com/content/pdf/10.1007%2FBF02763440.pdf .

Lopera Medina, S. (2014). Motivation conditions in a foreign language reading comprehension course offering both a web-based modality and a face-to-face modality (Las condiciones de motivación en un curso de comprensión de lectura en lengua extranjera (LE) ofrecido tanto en la modalidad presencial como en la modalidad a distancia en la web). PROFILE: Issues in Teachers’ Professional Development , 16 (1), 89–104 Retrieved from https://search.proquest.com/docview/1697487398?accountid=12968 .

Lundin, M., Bergviken Rensfeldt, A., Hillman, T., Lantz-Andersson, A., & Peterson, L. (2018). Higher education dominance and siloed knowledge: A systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education , 15 (1), 1. https://doi.org/10.1186/s41239-018-0101-6 .

Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for engagement in an online learning environment based on learning analytics approach: The role of the instructor. The Internet and Higher Education , 24 , 26–34. https://doi.org/10.1016/j.iheduc.2014.09.005 .

Mahatmya, D., Lohman, B. J., Matjasko, J. L., & Farb, A. F. (2012). Engagement across developmental periods. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 45–63). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_3 .

Mansouri, A. S., & Piki, A. (2016). An exploration into the impact of blogs on students’ learning: Case studies in postgraduate business education. Innovations in Education and Teaching International , 53 (3), 260–273. https://doi.org/10.1080/14703297.2014.997777 .

Martin, A. J. (2012). Motivation and engagement: Conceptual, operational, and empirical clarity. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 303–311). Boston: Springer US. https://doi.org/10.1007/978-1-4614-2018-7_14 .

McCutcheon, K., Lohan, M., Traynor, M., & Martin, D. (2015). A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. Journal of Advanced Nursing , 71 (2), 255–270. https://doi.org/10.1111/jan.12509 .

Miake-Lye, I. M., Hempel, S., Shanman, R., & Shekelle, P. G. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Systematic Reviews , 5 , 28. https://doi.org/10.1186/s13643-016-0204-x .

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ (Clinical Research Ed.) , 339 , b2535. https://doi.org/10.1136/bmj.b2535 .

Nelson Laird, T. F., & Kuh, G. D. (2005). Student experiences with information technology and their relationship to other aspects of student engagement. Research in Higher Education , 46 (2), 211–233. https://doi.org/10.1007/s11162-004-1600-y .

Nguyen, L., Barton, S. M., & Nguyen, L. T. (2015). iPads in higher education-hype and hope. British Journal of Educational Technology , 46 (1), 190–203. https://doi.org/10.1111/bjet.12137 .

Nicholas, D., Watkinson, A., Jamali, H. R., Herman, E., Tenopir, C., Volentine, R., … Levine, K. (2015). Peer review: Still king in the digital age. Learned Publishing , 28 (1), 15–21. https://doi.org/10.1087/20150104 .

Nikou, S. A., & Economides, A. A. (2018). Mobile-based assessment: A literature review of publications in major referred journals from 2009 to 2018. Computers & Education , 125 , 101–119. https://doi.org/10.1016/j.compedu.2018.06.006 .

Norris, L., & Coutas, P. (2014). Cinderella’s coach or just another pumpkin? Information communication technologies and the continuing marginalisation of languages in Australian schools. Australian Review of Applied Linguistics , 37 (1), 43–61 Retrieved from http://www.jbe-platform.com/content/journals/10.1075/aral.37.1.03nor .

OECD (2015a). Schooling redesigned. Educational Research and Innovation . OECD Publishing Retrieved from http://www.oecd-ilibrary.org/education/schooling-redesigned_9789264245914-en .

OECD (2015b). Students, computers and learning . PISA: OECD Publishing Retrieved from http://www.oecd-ilibrary.org/education/students-computers-and-learning_9789264239555-en .

O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education , 25 , 85–95. https://doi.org/10.1016/j.iheduc.2015.02.002 .

O’Gorman, E., Salmon, N., & Murphy, C.-A. (2016). Schools as sanctuaries: A systematic review of contextual factors which contribute to student retention in alternative education. International Journal of Inclusive Education , 20 (5), 536–551. https://doi.org/10.1080/13603116.2015.1095251 .

Oliver, B., & Jorre de St Jorre, T. (2018). Graduate attributes for 2020 and beyond: Recommendations for Australian higher education providers. Higher Education Research and Development, 1–16. https://doi.org/10.1080/07294360.2018.1446415 .

O’Mara-Eves, A., Brunton, G., McDaid, D., Kavanagh, J., Oliver, S., & Thomas, J. (2014). Techniques for identifying cross-disciplinary and ‘hard-to-detect’ evidence for systematic review. Research Synthesis Methods , 5 (1), 50–59. https://doi.org/10.1002/jrsm.1094 .

Payne, L. (2017). Student engagement: Three models for its investigation. Journal of Further and Higher Education , 3 (2), 1–17. https://doi.org/10.1080/0309877X.2017.1391186 .

Pekrun, R., & Linnenbrink-Garcia, L. (2012). Academic emotions and student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 259–282). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_12 .

Popenici, S. (2013). Towards a new vision for university governance, pedagogies and student engagement. In E. Dunne, & D. Owen (Eds.), The student engagement handbook: Practice in higher education , (1st ed., pp. 23–42). Bingley: Emerald.

Price, L., Richardson, J. T., & Jelfs, A. (2007). Face-to-face versus online tutoring support in distance education. Studies in Higher Education , 32 (1), 1–20.

Quin, D. (2017). Longitudinal and contextual associations between teacher–student relationships and student engagement. Review of Educational Research , 87 (2), 345–387. https://doi.org/10.3102/0034654316669434 .

Rashid, T., & Asghar, H. M. (2016). Technology use, self-directed learning, student engagement and academic performance: Examining the interrelations. Computers in Human Behavior , 63 , 604–612. https://doi.org/10.1016/j.chb.2016.05.084 .

Redecker, C. (2017). European framework for the digital competence of educators . Luxembourg: Office of the European Union.

Redmond, P., Heffernan, A., Abawi, L., Brown, A., & Henderson, R. (2018). An online engagement framework for higher education. Online Learning , 22 (1). https://doi.org/10.24059/olj.v22i1.1175 .

Reeve, J. (2012). A self-determination theory perspective on student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 149–172). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_7 .

Reeve, J., & Tseng, C.-M. (2011). Agency as a fourth aspect of students’ engagement during learning activities. Contemporary Educational Psychology , 36 (4), 257–267. https://doi.org/10.1016/j.cedpsych.2011.05.002 .

Reschly, A. L., & Christenson, S. L. (2012). Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 3–19). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_1 .

Salaber, J. (2014). Facilitating student engagement and collaboration in a large postgraduate course using wiki-based activities. The International Journal of Management Education , 12 (2), 115–126. https://doi.org/10.1016/j.ijme.2014.03.006 .

Schindler, L. A., Burkholder, G. J., Morad, O. A., & Marsh, C. (2017). Computer-based technology and student engagement: A critical review of the literature. International Journal of Educational Technology in Higher Education , 14 (1), 253. https://doi.org/10.1186/s41239-017-0063-0 .

Selwyn, N. (2016). Digital downsides: Exploring university students’ negative engagements with digital technology. Teaching in Higher Education , 21 (8), 1006–1021. https://doi.org/10.1080/13562517.2016.1213229 .

Shonfeld, M., & Ronen, I. (2015). Online learning for students from diverse backgrounds: Learning disability students, excellent students and average students. IAFOR Journal of Education , 3 (2), 13–29.

Skinner, E., & Pitzer, J. R. (2012). Developmental dynamics of student engagement, coping, and everyday resilience. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 21–44). Boston: Springer US.

Smidt, E., Bunk, J., McGrory, B., Li, R., & Gatenby, T. (2014). Student attitudes about distance education: Focusing on context and effective practices. IAFOR Journal of Education , 2 (1), 40–64.

Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine , 99 , 178–182.

Smith, T., & Lambert, R. (2014). A systematic review investigating the use of twitter and Facebook in university-based healthcare education. Health Education , 114 (5), 347–366. https://doi.org/10.1108/HE-07-2013-0030 .

Solomonides, I. (2013). A relational and multidimensional model of student engagement. In E. Dunne, & D. Owen (Eds.), The student engagement handbook: Practice in higher education , (1st ed., pp. 43–58). Bingley: Emerald.

Sosa Neira, E. A., Salinas, J., & de Benito, B. (2017). Emerging technologies (ETs) in education: A systematic review of the literature published between 2006 and 2016. International Journal of Emerging Technologies in Learning (IJET) , 12 (05), 128. https://doi.org/10.3991/ijet.v12i05.6939 .

Sullivan, M., & Longnecker, N. (2014). Class blogs as a teaching tool to promote writing and student interaction. Australasian Journal of Educational Technology , 30 (4), 390–401. https://doi.org/10.14742/ajet.322 .

Sun, J. C.-Y., & Rueda, R. (2012). Situational interest, computer self-efficacy and self-regulation: Their impact on student engagement in distance education. British Journal of Educational Technology , 43 (2), 191–204. https://doi.org/10.1111/j.1467-8535.2010.01157.x .

Szabo, Z., & Schwartz, J. (2011). Learning methods for teacher education: The use of online discussions to improve critical thinking. Technology, Pedagogy and Education , 20 (1), 79–94. https://doi.org/10.1080/1475939x.2010.534866 .

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research , 81 (1), 4–28. https://doi.org/10.3102/0034654310393361 .

Trowler, V. (2010). Student engagement literature review . York: The Higher Education Academy Retrieved from website: https://www.heacademy.ac.uk/system/files/studentengagementliteraturereview_1.pdf .

Van Rooij, E., Brouwer, J., Fokkens-Bruinsma, M., Jansen, E., Donche, V., & Noyens, D. (2017). A systematic review of factors related to first-year students’ success in Dutch and Flemish higher education. Pedagogische Studien , 94 (5), 360–405 Retrieved from https://repository.uantwerpen.be/docman/irua/cebc4c/149722.pdf .

Vural, O. F. (2013). The impact of a question-embedded video-based learning tool on E-learning. Educational Sciences: Theory and Practice , 13 (2), 1315–1323.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes . Cambridge: Harvard University Press.

Webb, L., Clough, J., O’Reilly, D., Wilmott, D., & Witham, G. (2017). The utility and impact of information communication technology (ICT) for pre-registration nurse education: A narrative synthesis systematic review. Nurse Education Today , 48 , 160–171. https://doi.org/10.1016/j.nedt.2016.10.007 .

Wekullo, C. S. (2019). International undergraduate student engagement: Implications for higher education administrators. Journal of International Students , 9 (1), 320–337. https://doi.org/10.32674/jis.v9i1.257 .

Wimpenny, K., & Savin-Baden, M. (2013). Alienation, agency and authenticity: A synthesis of the literature on student engagement. Teaching in Higher Education , 18 (3), 311–326. https://doi.org/10.1080/13562517.2012.725223 .

Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist , 52 (1), 17–37. https://doi.org/10.1080/00461520.2016.1207538 .

Zepke, N. (2014). Student engagement research in higher education: Questioning an academic orthodoxy. Teaching in Higher Education , 19 (6), 697–708. https://doi.org/10.1080/13562517.2014.901956 .

Zepke, N. (2018). Student engagement in neo-liberal times: What is missing? Higher Education Research and Development , 37 (2), 433–446. https://doi.org/10.1080/07294360.2017.1370440 .

Zepke, N., & Leach, L. (2010). Improving student engagement: Ten proposals for action. Active Learning in Higher Education , 11 (3), 167–177. https://doi.org/10.1177/1469787410379680 .

Zhang, A., & Aasheim, C. (2011). Academic success factors: An IT student perspective. Journal of Information Technology Education: Research , 10 , 309–331. https://doi.org/10.28945/1518 .


Acknowledgements

The authors thank the two student assistants who helped during the article retrieval and screening stage.

This research resulted from the ActiveLearn project, funded by the Bundesministerium für Bildung und Forschung (BMBF-German Ministry of Education and Research) [grant number 16DHL1007].

Author information

Authors and Affiliations

Faculty of Education and Social Sciences (COER), Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany

Melissa Bond, Svenja Bedenlier & Olaf Zawacki-Richter

Learning Lab, Universität Duisburg-Essen, Essen, Germany

Katja Buntins & Michael Kerres


Contributions

All authors contributed to the design and conceptualisation of the systematic review. MB, KB and SB conducted the systematic review search and data extraction. MB undertook the literature review on student engagement and educational technology and co-wrote the method, results, discussion and conclusion sections. KB designed and executed the sampling strategy, produced all of the graphs and tables, and assisted with the formulation of the article. SB co-wrote the method, results, discussion and conclusion sections, and proofread the introduction and literature review sections. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Melissa Bond.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Literature reviews (LR) and systematic reviews (SR) on student engagement

Additional file 2.

Indicators of engagement and disengagement

Additional file 3.

Literature reviews (LR) and systematic reviews (SR) on student engagement and technology in higher education (HE)

Additional file 4.

Educational technology tool typology based on Bower (2016) and educational technology tools used

Additional file 5.

Text-based tool examples by engagement domain

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Bond, M., Buntins, K., Bedenlier, S. et al. Mapping research in student engagement and educational technology in higher education: A systematic evidence map. Int J Educ Technol High Educ 17, 2 (2020). https://doi.org/10.1186/s41239-019-0176-8


Received: 01 May 2019

Accepted: 17 December 2019

Published: 22 January 2020

DOI: https://doi.org/10.1186/s41239-019-0176-8


  • Educational technology
  • Higher education
  • Systematic review
  • Evidence map
  • Student engagement


Powerful Learning: Studies Show Deep Understanding Derives from Collaborative Methods

Cooperative learning and inquiry-based teaching yield big dividends in the classroom. And now we have the research to prove it.

Today's students will enter a job market that values skills and abilities far different from the traditional workplace talents that so ably served their parents and grandparents. They must be able to collect, synthesize, and analyze information, then conduct targeted research and work with others to apply that newfound knowledge. In essence, students must learn how to learn, while responding to endlessly changing technologies and social, economic, and global conditions.

But what types of teaching and learning will develop these skills? And, just as important, do studies exist that support their use?

A growing body of research demonstrates that students learn more deeply if they have engaged in activities that require applying classroom-gathered knowledge to real-world problems. As the old adage goes, "Tell me and I forget, show me and I remember, involve me and I understand."

Research shows that such inquiry-based teaching is not so much about seeking the right answer but about developing inquiring minds, and it can yield significant benefits. For example, in the 1995 School Restructuring Study, conducted at the Center on Organization and Restructuring of Schools by Fred Newmann and colleagues at the University of Wisconsin, 2,128 students in twenty-three schools were found to have significantly higher achievement on challenging tasks when they were taught with inquiry-based teaching, showing that involvement leads to understanding. These practices were found to have a more significant impact on student performance than any other variable, including student background and prior achievement.

Similarly, studies also show the widespread benefits of cooperative learning, in which small teams of students use a variety of activities to more deeply understand a subject. Each member is responsible not only for learning what is taught but also for helping his or her teammates learn, so that the group becomes a supportive learning environment.

What follows is a summary of the key research findings for both inquiry-based and cooperative learning. First, let's look at three inquiry-based approaches: project learning (also called project-based learning), problem-based learning, and design-based instruction.

Project-Based Pathways

Project learning involves completing complex tasks that result in a realistic product or presentation to an audience. "A Review of Research on Project-Based Learning," prepared by researcher John Thomas for the Autodesk Foundation, identified five key components of effective project learning:

  • Centrality to the curriculum
  • Driving questions that lead students to encounter central concepts
  • Investigations that involve inquiry and knowledge building
  • Processes that are student driven, rather than teacher driven
  • Authentic problems that people care about in the real world

Research on project learning found that student gains in factual learning are equivalent or superior to those of students in more traditional forms of classroom instruction. The goals of project learning, however, aim to take learning one step further by enabling students to transfer their learning to new kinds of situations, illustrated in three studies:

  • In a 1998 study by H.G. Shepherd, fourth and fifth graders completed a nine-week project to define and find solutions related to housing shortages in several countries. In comparison to the control group, the project-learning students scored significantly higher on a critical-thinking test and demonstrated increased confidence in their learning.
  • A more ambitious, longitudinal comparative study by Jo Boaler and colleagues in England in 1997 and 1998 followed students over three years in two schools similar in student achievement and income levels. The traditional school featured teacher-directed whole-class instruction organized around texts, workbooks, and frequent tests in tracked classrooms. Instruction in the other school used open-ended projects in heterogeneous classrooms. The study found that although students had comparable learning gains on basic mathematics procedures, significantly more project-learning students passed the National Exam in year three than those in the traditional school. Although students in the traditional school "thought that mathematical success rested on being able to remember and use rules," according to the study, the project-learning students developed more flexible and useful mathematical knowledge.
  • A third study, in 2000, on the impact of multimedia projects on student learning, showed similar gains. Students in the Challenge 2000 Multimedia Project , in California's Silicon Valley, developed a brochure informing school officials about problems homeless students face. The students in the multimedia program earned higher scores than a comparison group on content mastery, sensitivity to audience, and coherent design. They performed equally well on standardized test scores of basic skills.

Other short-term, comparative studies demonstrated benefits from project learning, such as increases in the ability to define problems, reason with clear arguments, and plan projects. Additional research has documented improvements in motivation, attitude toward learning, and work habits. Students who struggle in traditional instructional settings have often excelled when working on a project, which better matches their learning style or preference for collaboration.

Students as Problem Solvers

Problem-based learning approaches are a close cousin of project learning, in which students use complex problems and cases to actively build their knowledge. Much of the research for this approach comes from medical education. Medical students are given a patient profile, history, and symptoms; groups of students generate a diagnosis, conduct research, and perform diagnostic tests to identify causes of the pain or illness. Meta-analyses of multiple studies have found that medical students in problem-based curricula score higher on clinical problem solving and performance.

Use of problem-based cases in teacher education has helped student teachers apply theory and practical knowledge to school contexts and classroom dilemmas; these cases, for example, have enabled teachers to take alternative perspectives to better appreciate cultural diversity.

Studies of problem-based learning suggest that it is comparable, though not always superior, to more traditional instruction in teaching facts and information. However, this approach has been found to be better in supporting flexible problem solving, reasoning skills, and generating accurate hypotheses and coherent explanations.

Learning Through Design

Design-based instruction is based on the premise that children learn deeply when they create products that require understanding and application of knowledge. Design activity involves stages of revisions as students create, assess, and redesign their products. The work often requires collaboration and specific roles for individual students, enabling them to become experts in a particular area.

research about student learning

Design-based approaches can be found across many disciplines, including science, technology, art, engineering, and architecture. Design competitions for students include the FIRST robotics competitions and Thinkquest , for which student teams design and build Web sites on topics including art, astronomy, computer programming, foster care, and mental health.

Thinkquest teams are mentored by a teacher who gives general guidance throughout the design process, leaving the specific creative and technical work to the students. Teams offer and receive feedback during a peer review of the initial submissions and use this information to revise their work. To date, more than 30,000 students have created more than 7,000 Web sites through this competition.

Few studies have used a control group to evaluate the impact of the learning-by-design model, but in a 2000 study by researchers C.E. Hmelo, D.L. Holton, and J.L. Kolodner, sixth-grade students designed a set of artificial lungs and built a partially working model of the respiratory system. The learning-by-design students viewed the respiratory system more systemically and understood more about the structures and functions of the system than the control group.

Hmelo and colleagues argued that design challenges need to be carefully planned, and they emphasized the importance of dynamic feedback. They also determined that teachers working on design projects must pay particular attention to finding a balance between students' work on design activities and reflection on what they are learning; that balance allows teachers to guide students' progress, especially in recognizing irrelevant aspects of their research that may take them on unproductive tangents, and in remaining focused on the whole project rather than simply on its completion.

Shifting Ideas, Shifting Roles

A significant challenge to implementing inquiry approaches is the capacity and skill of teachers to undertake this more complex form of teaching. Teachers may think of project learning or problem-based teaching as unstructured and may fail to provide students with proper support and assessment as projects unfold.

When students have no prior experience with inquiry learning, they can have difficulty generating meaningful driving questions and logical arguments and may lack background knowledge to make sense of the inquiry. Students can neglect to use informational resources unless explicitly prompted. They can find it hard to work together, manage their time, and sustain motivation in the face of setbacks or confusion.

One of the principal challenges for teachers, then, is to learn how to juggle a host of new responsibilities -- from carving out the time needed for extended inquiry to developing new classroom-management techniques. They must also be able to illuminate key concepts, balance direct instruction with inquiry teaching, facilitate learning among groups, and develop assessments to guide the learning process. That's a tall order for even the most experienced teacher.

To address these problems, Alice D. Gertzman and Janet L. Kolodner, of the Georgia Institute of Technology, introduced the concept of a design diary in 1996 to support eighth-grade science students in creating a solution for coastal erosion on a specific island off the coast of Georgia. Students had access to stream tables, as well as resources on videotape and the Internet.

In a first study conducted by Gertzman and Kolodner, learning outcomes were disappointing but instructive: The researchers noted that the teacher missed many opportunities to advance learning because she could not listen to all small-group discussions and decided not to have whole-group discussions. They also noted that the students needed more specific prompts for justifying design decisions.

In a second study, the same researchers designed a broader system of tools that greatly improved the learning outcomes. These tools included more structured diary prompts asking for design explanations and the use of whole-class discussions at strategic moments. They also required students to publicly defend their designs earlier in the process. Requiring students to track and defend their thinking focused them on learning and connecting concepts in their design work.

Talented Teams

Inquiry-based learning often involves students working in pairs or groups. Cooperative small-group learning -- that is, students working together in a group small enough that everyone can participate on a collective task -- has been the subject of hundreds of studies. All the research arrives at the same conclusion: There are significant benefits for students who work together on learning activities.

In one comparison of four categories of problems presented to individuals and cooperative teams, Zhining Qin, David Johnson, and Roger Johnson found that teams outperformed individuals on all problem types and across all ages. Results varied by how well defined the problems were (a single right answer versus open-ended solutions, such as writing a story) and how much they relied on language. Several experimental studies have shown that groups outperform individuals on learning tasks and that individuals who work in groups do better on later individual assessments.

Cooperative group work benefits students in social and behavioral areas as well, including improvement in student self-concept, social interaction, time on task, and positive feelings toward peers. Researchers say these social and self-concept measures were related to academic outcomes and that low-income students, urban students, and minority students benefited even more from cooperative group work, a finding repeated over several decades.

But effective cooperative learning can be difficult to implement. Researchers identify at least three major challenges: developing group structures to help individuals work together, creating tasks that support useful cooperative work, and introducing discussion strategies that support rich learning.

Productive Collaboration

A great deal of work has been done to specify the kinds of tasks, accountability, and roles that help students collaborate well. In a summary of forty years of research on cooperative learning, Roger and David Johnson, at the University of Minnesota, identified five important elements of cooperation across multiple classroom models:

  • Positive interdependence
  • Individual accountability
  • Structures that promote face-to-face interaction
  • Social skills
  • Group processing

Cooperative-learning approaches range from simply asking students to help one another complete individually assigned problem sets to having students collectively define projects and generate a product that reflects the work of the entire group. Many approaches fall between these two extremes.


In successful group learning, teachers pay careful attention to the work process and interaction among students. As Johns Hopkins University's Robert Slavin argues, "It is not enough to simply tell students to work together. They must have a reason to take one another's achievement seriously." Slavin developed a model that focuses on external motivators, such as rewards and individual accountability established by the teacher. He found that group tasks with individual accountability produce stronger learning outcomes.

Stanford University's Elizabeth Cohen reviewed research on productive small groups, focusing on internal group interaction around tasks. She and her colleagues developed Complex Instruction , one of the best-known approaches, which uses carefully designed activities requiring diverse talents and interdependence among group members. Teachers pay attention to unequal participation, a frequent result of status differences among peers, and are given strategies to bolster the status of infrequent contributors. Roles are assigned to encourage equal participation, such as recorder, reporter, materials manager, resource manager, communication facilitator, and harmonizer.

Studies identified social processes that explain how group work supports individual learning, such as resolving differing perspectives through argument, explaining one's thinking, observing the strategies of others, and listening to explanations.

Evidence shows that inquiry-based, collaborative approaches benefit students in learning important twenty-first-century skills, such as the ability to work in teams, solve complex problems, and apply knowledge from one lesson to others. The research suggests that inquiry-based lessons and meaningful group work can be challenging to implement. They require changes in curriculum, instruction, and assessment practices -- changes that are often new for teachers and students.

Teachers need time and a community to organize sustained project work. Inquiry-based instruction can help teachers deepen their repertoire for connecting with their peers and students in new and meaningful ways. That's powerful teaching and learning -- for students and teachers alike.

The Takeaway: Research Findings

A growing body of research has shown the following:

  • Students learn more deeply when they can apply classroom-gathered knowledge to real-world problems, and when they take part in projects that require sustained engagement and collaboration.
  • Active-learning practices have a more significant impact on student performance than any other variable, including student background and prior achievement.
  • Students are most successful when they are taught how to learn as well as what to learn.

Adapted from Powerful Learning: What We Know About Teaching for Understanding, a new book reviewing research on innovative classroom practices, by Linda Darling-Hammond, Brigid Barron, P. David Pearson, Alan H. Schoenfeld, Elizabeth K. Stage, Timothy D. Zimmerman, Gina N. Cervetti, and Jennifer L. Tilson, published in 2008 by Jossey-Bass. Published with support from The George Lucas Educational Foundation. Available at amazon.com .

The Collaborative Classroom: Social and Emotional Learning

Traditional academic approaches — those that employ narrow tasks to emphasize rote memorization or the application of simple procedures — won’t develop learners who are critical thinkers or effective writers and speakers. Rather, students need to take part in complex, meaningful projects that require sustained engagement and collaboration.

Listen to education expert Linda Darling-Hammond’s insights on cooperative teaching in the Edutopia video The Collaborative Classroom: An Interview with Linda Darling-Hammond. Darling-Hammond, a professor of education at Stanford University and former director of the National Commission on Teaching and America’s Future, was chosen in 2006 by Education Week as one of the nation’s ten most influential people affecting education policy over the last decade.

She and article coauthor Brigid Barron are two of the coauthors of Powerful Learning: What We Know About Teaching for Understanding, a review of research on the most effective K-12 teaching practices. In the book, copublished by Jossey-Bass and The George Lucas Educational Foundation, the authors explore the ways in which project learning, cooperative learning, and performance-based assessment generate meaningful student understanding in the classroom. Available for purchase at amazon.com. Download an expanded version of this article adapted from the book (PDF, 7.6 MB).


Enhancing Student Learning: Seven Principles for Good Practice

The Seven Principles Resource Center, Winona State University

The Seven Principles for Good Practice in Undergraduate Education grew out of a review of 50 years of research on the way teachers teach and students learn (Chickering and Gamson, 1987, p. 1) and a conference that brought together a distinguished group of researchers and commentators on higher education.  The primary goal of the Principles’ authors was to identify practices, policies, and institutional conditions that would result in a powerful and enduring undergraduate education (Sorcinelli, 1991, p. 13).

The following principles are anchored in extensive research about teaching, learning, and the college experience.

1. Good Practice Encourages Student–Instructor Contact

Frequent student–instructor contact in and out of classes is an important factor in student motivation and involvement. Instructor concern helps students get through rough times and keep on working. Knowing a few instructors well enhances students’ intellectual commitment and encourages them to think about their own values and future plans.

Implementation Ideas:

  • Share past experiences, values, and attitudes.
  • Design an activity that brings students to your office during the first weeks of class.
  • Try to get to know your students by name by the end of the first three weeks of the term.
  • Attend, support, and sponsor events led by student groups.
  • Treat students as human beings with full real lives; ask how they are doing.
  • Hold “out of class” review sessions.
  • Use email regularly to encourage and inform.
  • Hold regular “hours” in the Michigan Union or residence halls where students can stop by for informal visits.
  • Take students to professional meetings or other events in your field.

2.  Good Practice Encourages Cooperation Among Students

Learning is enhanced when it is more like a team effort than a solo race. Good learning, like good work, is collaborative and social, not competitive and isolated. Working with others often increases involvement in learning. Sharing one’s own ideas and responding to others’ reactions improves thinking and deepens understanding.

  • Ask students to share information about each other’s backgrounds and academic interests.
  • Encourage students to prepare together for classes or exams.
  • Create study groups within your course.
  • Ask students to give constructive feedback on each other’s work and to explain difficult ideas to each other.
  • Use small group discussions, collaborative projects in and out of class, group presentations, and case study analysis.
  • Ask students to discuss key concepts with other students whose backgrounds and viewpoints are different from their own.
  • Encourage students to work together.

3.  Good Practice Encourages Active Learning

Learning is not a spectator sport. Students do not learn much just sitting in classes listening to instructors, memorizing assignments, and spitting out answers. They must talk about what they are learning, write about it, relate it to past experiences, and apply it to their daily lives. They must make what they learn part of themselves.

  • Ask students to present their work to the class.
  • Give students concrete, real life situations to analyze.
  • Ask students to summarize similarities and differences among research findings, artistic works or laboratory results.
  • Model asking questions, listening behaviors, and feedback.
  • Encourage use of professional journals.
  • Use technology to encourage active learning.
  • Encourage use of internships, study abroad, service learning and clinical opportunities.
  • Use class time to work on projects.

4.  Good Practice Gives Prompt Feedback

Knowing what you know and don’t know focuses learning. Students need appropriate feedback on performance to benefit from courses. In getting started, students need help in assessing existing knowledge and competence. In classes, students need frequent opportunities to perform and receive suggestions for improvement. At various points during college, and at the end, students need chances to reflect on what they have learned, what they still need to know, and how to assess themselves.

  • Return examinations promptly, preferably within a week, if not sooner.
  • Schedule brief meetings with the students to discuss their progress.
  • Prepare problems or exercises that give students immediate feedback on how well they are doing. (e.g., Angelo & Cross, 1993)
  • Give frequent quizzes and homework assignments to help students monitor their progress.
  • Give students written comments on the strengths and weaknesses of their tests/papers.
  • Give students focused feedback on their work early in the term.
  • Consider giving a mid-term assessment or progress report.
  • Be clear in relating performance level/expectations to grade.
  • Communicate regularly with students via email about various aspects of the class.

5.  Good Practice Emphasizes Time on Task

Time plus energy equals learning. There is no substitute for time on task. Learning to use one’s time well is critical for students and professionals alike. Students need help in learning effective time management. Allocating realistic amounts of time means effective learning for students and effective teaching for instructors.

  • Communicate to students the amount of time they should spend preparing for class.
  • Expect students to complete their assignments promptly.
  • Underscore the importance of regular work, steady application, self-pacing, scheduling.
  • Divide class into timed segments so as to keep on task.
  • Meet with students who fall behind to discuss their study habits, schedules.
  • Don’t hesitate to refer students to learning skills professionals on campus.
  • Use technology to make resources easily available to students.
  • Consider using mastery learning, contract learning, and computer assisted instruction as appropriate.

6.  Good Practice Communicates High Expectations

Expect more and you will get it. High expectations are important for everyone—for the poorly prepared, for those unwilling to exert themselves, and for the bright and well motivated. Expecting students to perform well becomes a self-fulfilling prophecy when instructors hold high expectations for themselves and make extra efforts.

  • Make your expectations clear at the beginning of the course both in writing and orally. Tell them you expect them to work hard.
  • Periodically discuss how well the class is doing during the course of the semester.
  • Encourage students to write; require drafts of work.  Give students opportunities to revise their work.
  • Set up study guidelines.
  • Publish students’ work on a course website. This often motivates students to higher levels of performance.
  • Be energized and enthusiastic in your interaction with students.

7.  Good Practice Respects Diverse Talents and Ways of Learning

There are many roads to learning. People bring different talents and styles of learning to college. Students rich in hands-on experiences may not do so well with theory. Students need the opportunity to show their talents and learn in ways that work for them. They can then be pushed to learn in new ways that do not come so easily.

  • Use a range of teaching activities to address a broad spectrum of students.
  • Provide extra material or exercises for students who lack essential background knowledge or skills.
  • Identify students’ learning styles, backgrounds at the beginning of the semester.
  • Use different activities in class – videos, discussions, lecture, groups, guest speakers, pairwork.
  • Use different assignment methods – written, oral, projects, etc. – so as to engage as many ways of learning as possible (e.g., visual, auditory).
  • Give students a real-world problem to solve that has multiple solutions. Provide examples and questions to guide them.

Contributors:   The Teaching Excellence Center at Brigham Young University; Northern Essex Community College; Dennis Congos, University of Central Florida; Edward Nuhfer, University of Colorado at Denver and Delores Knipp, United States Air Force Academy; and James W. King, University of Nebraska-Lincoln.

Sources Cited:

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3–7.

Sorcinelli, M. D. (1991). Research findings on the seven principles. In A. W. Chickering & Z. F. Gamson (Eds.), Applying the seven principles for good practice in undergraduate education (pp. 13–25). New Directions for Teaching and Learning, No. 47. San Francisco: Jossey-Bass.

Adapted with permission from The Seven Principles Resource Center, Winona State University, Winona, Minnesota.


  • Open access
  • Published: 07 April 2023

Using machine learning to predict student retention from socio-demographic characteristics and app-based engagement metrics

  • Sandra C. Matz 1 ,
  • Christina S. Bukow 2 ,
  • Heinrich Peters 1 ,
  • Christine Deacons 3 ,
  • Alice Dinu 5 &
  • Clemens Stachl 4  

Scientific Reports, volume 13, Article number: 5705 (2023)


An Author Correction to this article was published on 21 June 2023


Student attrition poses a major challenge to academic institutions, funding bodies and students. With the rise of Big Data and predictive analytics, a growing body of work in higher education research has demonstrated the feasibility of predicting student dropout from readily available macro-level (e.g., socio-demographics or early performance metrics) and micro-level data (e.g., logins to learning management systems). Yet, the existing work has largely overlooked a critical meso-level element of student success known to drive retention: students’ experience at university and their social embeddedness within their cohort. In partnership with a mobile application that facilitates communication between students and universities, we collected both (1) institutional macro-level data and (2) behavioral micro and meso-level engagement data (e.g., the quantity and quality of interactions with university services and events as well as with other students) to predict dropout after the first semester. Analyzing the records of 50,095 students from four US universities and community colleges, we demonstrate that the combined macro and meso-level data can predict dropout with high levels of predictive performance (average AUC across linear and non-linear models = 78%; max AUC = 88%). Behavioral engagement variables representing students’ experience at university (e.g., network centrality, app engagement, event ratings) were found to add incremental predictive power beyond institutional variables (e.g., GPA or ethnicity). Finally, we highlight the generalizability of our results by showing that models trained on one university can predict retention at another university with reasonably high levels of predictive performance.


Introduction

In the US, only about 60% of full-time students graduate from their program 1, 2, with the majority of those who discontinue their studies dropping out during their first year 3. These high attrition rates pose major challenges to students, universities, and funding bodies alike 4, 5.

Dropping out of university without a degree negatively impacts students’ finances and mental health. Over 65% of US undergraduate students receive student loans to help pay for college, causing them to incur heavy debts over the course of their studies 6 . According to the U.S. Department of Education, students who take out a loan but never graduate are three times more likely to default on loan repayment than students who graduate 7 . This is hardly surprising, given that students who drop out of university without a degree, earn 66% less than university graduates with a bachelor's degree and are far more likely to be unemployed 2 . In addition to financial losses, the feeling of failure often adversely impacts students’ well-being and mental health 8 .

At the same time, student attrition negatively impacts universities and federal funding bodies. For universities, student attrition results in a loss of approximately $16.5 billion per year in tuition revenue 9, 10. Similarly, student attrition wastes valuable resources provided by states and federal governments. For example, the US Department of Education Integrated Postsecondary Education Data System (IPEDS) shows that between 2003 and 2008, state and federal governments together provided more than $9 billion in grants and subsidies to students who did not return to the institution where they were enrolled for a second year 11.

Given the high costs of attrition, the ability to predict at-risk students – and to provide them with additional support – is critical 12 , 13 . As most dropouts occur during the first year 14 , such predictions are most valuable if they can identify at-risk students as early as possible 13 , 15 , 16 . The earlier one can identify students who might struggle, the better the chances that interventions aimed at protecting them from gradually falling behind – and eventually discontinuing their studies – will be effective 17 , 18 .

Indicators of student retention

Previous research has identified various predictors of student retention, including previous academic performance, demographic and socio-economic factors, and the social embeddedness of a student in their home institution 19 , 20 , 21 , 22 , 23 .

Prior academic performance (e.g., high school GPA, SAT and ACT scores, or college GPA) has been identified as one of the most consistent predictors of student retention: Students who are more successful academically are less likely to drop out 17, 21, 24, 25, 26, 27, 28, 29. Similarly, research has highlighted the role of demographic and socio-economic variables, including age, gender, and ethnicity 12, 19, 25, 27, 30, as well as socio-economic status 31, in predicting a student’s likelihood of persisting. For example, women are more likely to continue their studies than men 12, 30, 32, 33, while White and Asian students are more likely to persist than students from other ethnic groups 19, 27, 30. Moreover, a student’s socio-economic status and immediate financial situation have been shown to impact retention. Students are more likely to discontinue their studies if they are first-generation students 34, 35, 36 or experience high levels of financial distress (e.g., due to student loans or working nearly full time to cover college expenses) 37, 38. In contrast, students who receive financial support that does not have to be repaid post-graduation are more likely to complete their degree 39, 40.

While most of the outlined predictors of student retention are relatively stable intrapersonal characteristics that are often difficult or costly to change, research also points to a more malleable pillar of retention: the students’ experience at university – in particular, the extent to which they are successfully integrated and socialized into the institution 16, 22, 41, 42. As Bean (2005) notes, “few would deny that the social lives of students in college and their exchanges with others inside and outside the institution are important in retention decisions” (p. 227) 41. The extent to which a student is socially integrated and embedded in their institution has been studied in a number of ways, relating retention to the development of friendships with fellow students 43, the student’s position in social networks 16, 29, the experience of social connectedness 44, and a sense of belonging 42, 45, 46. Taken together, these studies suggest that interactions with peers as well as faculty and staff – for example, through participation in campus activities, membership of organizations, and the pursuit of extracurricular activities – help students better integrate into university life 44, 47. In contrast, a lack of social integration resulting from commuting (i.e., not living on campus with other students) has been shown to negatively impact a student’s chances of completing their degree 48, 49, 50, 51. In short, the more strongly a student is embedded in and feels integrated into the university community – particularly in their first year – the less likely the student is to drop out 42, 52.
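Network position measures of the kind referenced above can be computed with standard graph tooling. The sketch below is illustrative only: the edge list is hypothetical toy data, and degree and betweenness centrality are simply two common ways of quantifying embeddedness, not necessarily the measures used in the cited studies.

```python
# Illustrative sketch: quantifying a student's embeddedness in a friendship
# network. The edge list below is hypothetical toy data.
import networkx as nx

friendships = [("s1", "s2"), ("s1", "s3"), ("s2", "s3"), ("s4", "s1")]

G = nx.Graph()
G.add_edges_from(friendships)

# Degree centrality: the share of the cohort a student is directly tied to.
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a student lies on shortest paths between
# others, a rough proxy for bridging otherwise separate friend groups.
betweenness = nx.betweenness_centrality(G)

for student in sorted(G.nodes):
    print(student, round(degree[student], 2), round(betweenness[student], 2))
```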

Predicting retention using machine learning

A large proportion of research on student attrition has focused on understanding and explaining drivers of student retention. However, alongside the rise of computational methods and predictive modeling in the social sciences 53 , 54 , 55 , educational researchers and practitioners have started exploring the feasibility and value of data-driven approaches in supporting institutional decision making and educational effectiveness (for excellent overviews of the growing field see 56 , 57 ). In line with this broader trend, a growing body of work has shown the potential of predicting student dropout with the help of machine learning. In contrast to traditional inferential approaches, machine learning approaches are predominantly concerned with predictive performance (i.e., the ability to accurately forecast behavior that has not yet occurred) 54 . In the context of student retention this means: How accurately can we predict whether a student is going to complete or discontinue their studies (in the future) by analyzing their demographic and socio-economic characteristics, their past and current academic performance, as well as their current embeddedness in the university system and culture?

Echoing the National Academy of Education’s statement (2017) that “in the educational context, big data typically take the form of administrative data and learning process data, with each offering their own promise for educational research” (p. 4) 58, the vast majority of existing studies have focused on the prediction of student retention from demographic and socio-economic characteristics as well as students’ academic history and current performance 13, 59, 60, 61, 62, 63, 64, 65, 66. In a recent study, Aulck and colleagues trained a model on the administrative data of over 66,000 first-year students enrolled in a public US university (e.g., race, gender, high school GPA, entrance exam scores, and early college performance/transcript data) to predict whether they would re-enroll in the second year and eventually graduate 59. Specifically, they used a range of linear and non-linear machine learning models (e.g., regularized logistic regression, k-nearest neighbors, random forest, support vector machine, and gradient boosted trees) to predict retention out-of-sample using standard cross-validation procedures. Their model was able to predict dropout with an accuracy of 88% and graduation with an accuracy of 81% (where 50% is chance).
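As a rough illustration of this kind of pipeline (not a reproduction of Aulck and colleagues’ models, features, or data), the following sketch cross-validates a few linear and non-linear classifiers on synthetic stand-in data and scores them with AUC:

```python
# A minimal sketch of cross-validated dropout prediction. The data are
# synthetic stand-ins generated by scikit-learn, not real student records.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced classes mimic the fact that most students do not drop out.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8],
                           random_state=42)

models = {
    "logistic (L2)": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "gradient boosting": GradientBoostingClassifier(random_state=42),
}

# 10-fold cross-validation, scored with AUC so class imbalance does not
# distort the result the way raw accuracy would.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.2f} +/- {scores.std():.2f}")
```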

While the existing body of work provides robust evidence for the potential of predictive models for identifying at-risk students, it is based on similar sets of macro-level data (e.g., institutional data, academic performance) or micro-level data (e.g., click-stream data). Almost entirely absent from this research is data on students’ daily experience and engagement with both other students and the university itself (the meso-level). Although a small number of studies have tried to capture part of this experience, for example by inferring social networks from smart card transactions made by students at the same time and place 16, or by analyzing engagement metrics from an open online course 67, none of the existing work has offered a more holistic and comprehensive view of students’ daily experience. One potential explanation for this gap is that information about students’ social interactions with classmates or their day-to-day engagement with university services and events is difficult to track. While universities often have access to demographic or socio-economic variables through their Student Information Systems (SISs) and can easily track academic performance, most universities do not have an easy way of capturing students’ deeper engagement with the system.

Research overview

In this research, we partner with an educational software company – READY Education – that offers a virtual one-stop interaction platform in the form of a smartphone application to facilitate communication between students, faculty, and staff. Students receive relevant information and announcements, can manage their university activities, and interact with fellow students in various ways. For example, the app offers a social media experience similar to Facebook, including private messaging, groups, public walls, and friendships. In addition, it captures students’ engagement with the university by asking them to check into events (e.g., orientation, campus events, and student services) using QR code functionality and prompting them to rate their experience afterwards (see Methods for more details on the features we extracted from this data). As a result, the READY Education app allows us to observe a comprehensive set of information about students that includes both (i) institutional data (i.e., demographic and socio-economic characteristics as well as academic performance) and (ii) their idiosyncratic experience at university, captured by their daily interactions with other students and with university services/events. Combining the two data sources captures a student’s profile more holistically and makes it possible to consider potential interactions between the variable sets. For example, being tightly embedded in a social support network of friends might be more important for retention among first-generation students, who might not receive the same level of academic support or learn about implicit academic norms and rules from their parents.

Building on this unique dataset, we use machine learning models to predict student retention (i.e., dropout) from both institutional and behavioral engagement data. Given the desire to identify at-risk students as early as possible, we only use information gathered in the students’ first semester to predict whether the student dropped out at any point in time during their program. To thoroughly validate and scrutinize our analytical approach, generate insights for potential interventions, and probe the generalizability of our predictive models across different universities, we investigate the following three research questions:

1. How accurately can we predict a student’s likelihood of discontinuing their studies using information from the first term of their studies (i.e., institutional data, behavioral engagement data, and a combination of both)?

2. Which features are the most predictive of student retention?

3. How well do the predictive models generalize across universities (i.e., how accurately can we predict the retention of students at one university using a model trained on data from another university, and vice versa)? See the sketch after this list.
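For the third question, the evaluation pattern is to fit a model on one institution’s students and score it on another’s. The sketch below illustrates only that pattern; the data are synthetic stand-ins, and the feature set and model are arbitrary choices rather than the ones used in this study.

```python
# Hedged sketch of cross-university generalization: train on "University 1",
# evaluate on "University 2". All data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for two universities sharing the same feature set.
X_uni1, y_uni1 = rng.normal(size=(1000, 15)), rng.integers(0, 2, 1000)
X_uni2, y_uni2 = rng.normal(size=(800, 15)), rng.integers(0, 2, 800)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_uni1, y_uni1)  # fit only on University 1's students

# Out-of-university AUC: how well the model transfers to University 2.
# (With random placeholder labels this will hover around chance, 0.5.)
probs = model.predict_proba(X_uni2)[:, 1]
print("cross-university AUC:", round(roc_auc_score(y_uni2, probs), 2))
```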

Participants

We analyze de-identified data from four institutions with a total of 50,095 students (min = 476, max = 45,062). All students provided informed consent to the use of the anonymized data by READY Education and research partners. All experimental protocols were approved by the Columbia University Ethics Board, and all methods were carried out in accordance with the Board’s guidelines and regulations. The data stem from two sources: (a) institutional data and (b) behavioral engagement data. The institutional data collected by the universities contain socio-demographics (e.g., gender, ethnicity), general study information (e.g., term of admission, study program), financial information (e.g., Pell eligibility), and students’ academic achievement scores (e.g., GPA, ACT), as well as the retention status. The latter indicates whether students continued or dropped out and serves as the outcome variable. As different universities collect different information about their students, the scope of the institutional data varied between universities. Table 1 provides a descriptive overview of the most important socio-demographic characteristics for each of the four universities. In addition, it provides a descriptive overview of app usage, including the average number of logs per student, the total number of sessions and logs, and the percentage of students in a cohort using the app (i.e., coverage). The broad coverage of students using the app, ranging between 70 and 98%, results in a largely representative sample of the student populations at the respective universities.

Notably, Universities 1–3 are traditional university campuses, while University 4 is a combination of 16 different community colleges. Given the considerable heterogeneity across these campuses, the predictive accuracies for University 4 are expected a priori to be lower than those observed for Universities 1–3 (and partly speak to the generalizability of findings already). The decision to include University 4 as a single entity was based on the fact that separating out the 16 colleges would have resulted in an over-representation of community colleges that all share similar characteristics, thereby artificially inflating the observed cross-university accuracies. Given these limitations (and the fact that the university itself collapsed the college campuses for many of its internal reports), we decided to analyze it as a single unit, acknowledging that this approach brings its own limitations.

The behavioral engagement data were generated through the app (see Table 1 for the specific data collection windows at each university). Behavioral engagement data were available in the form of time-stamped event-logs (i.e., each row in the raw data represented a registered event such as a tab clicked, a comment posted, or a message sent). Each log could be assigned to a particular student via an anonymized, unique identifier. Across all four universities, the engagement data contained 7,477,630 sessions (mean = 1,869,408, SD = 3,329,852) and 17,032,633 logs (mean = 4,258,158, SD = 6,963,613). For a complete overview of all behavioral engagement metrics, including descriptions, see Table S1 in the Supplementary Materials.
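In practice, linking the two sources comes down to joining institutional records to per-student engagement aggregates on the anonymized identifier. A minimal sketch follows, with column names that are our own illustrative assumptions rather than the project’s actual schema:

```python
# Illustrative only: joining institutional records to per-student engagement
# summaries via the anonymized identifier. Field names are assumptions.
import pandas as pd

institutional = pd.DataFrame({
    "student_id": ["a", "b"],
    "gender": ["f", "m"],
    "first_gen": [True, False],
    "retained": [1, 0],  # outcome label
})

engagement = pd.DataFrame({
    "student_id": ["a", "b"],
    "n_sessions": [120, 8],
    "n_messages_sent": [45, 0],
})

# A left join keeps every student with an institutional record; students who
# never used the app (excluded in this study) would show missing values here.
data = institutional.merge(engagement, on="student_id", how="left")
print(data)
```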

Pre-processing and feature extraction

As a first step, we cleaned both the institutional and the app data. For the institutional data, we excluded students who did not use the app and therefore could not be assigned a unique identifier. In addition, we excluded students without a term of admission to guarantee that we only observe students' first semesters. Lastly, we removed duplicate entries resulting from dual enrollment in different programs. For the app usage data, we visually inspected the variables in our data set for outliers that might stem from technical issues. We pre-processed data that reflected clicking through the app, named "clicked_[…]" and "viewed_[…]" (see Table S1 in the Supplementary Materials). A small number of observations showed unrealistically high numbers of clicks on the same tab within a very short period, likely reflecting a student repeatedly clicking on a tab because of a long loading time or other technical issues. To avoid oversampling these behaviors, we removed all clicks of the same type that were made by the same person less than one minute apart.
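To make the rule concrete, the following is a minimal sketch of this de-duplication step in R (the language used for all analyses in this paper). The event-log table logs and its columns student_id, event_type, and timestamp are hypothetical stand-ins for the raw schema; the actual pipeline on OSF may differ.

```r
library(dplyr)

# Within each student-by-event-type group, keep a click only if at least
# one minute has passed since the previous click of that type.
logs_clean <- logs %>%
  arrange(student_id, event_type, timestamp) %>%   # timestamp assumed POSIXct
  group_by(student_id, event_type) %>%
  filter(is.na(lag(timestamp)) |
           difftime(timestamp, lag(timestamp), units = "mins") >= 1) %>%
  ungroup()
```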

We extracted up to 462 features for each university across two broad categories: (i) institutional features and (ii) engagement features, using evidence from previous research as a reference point (see Table S2 in the Supplementary Materials for a comprehensive overview of all features and their availability for each of the universities). Institutional features contain students' demographic, socio-economic, and academic information. The engagement features represent the students' behavior during their first term of studies and can be further divided into app engagement and community engagement. The app engagement features capture behavior related to app usage, such as whether students used the app before the start of the semester, how often they clicked on notifications or the community tabs, or whether their app use increased over the course of the semester. The community engagement features reflect social behavior and interaction with others, e.g., the number of messages sent, posts and comments made, or events visited, as well as a student's position in the network as inferred from friendships and direct messages. Importantly, many of the features in our dataset are inter-correlated. For example, living in college accommodation could signal higher levels of socio-economic status, but also make it more likely that students attend campus events and connect with other students living on campus. While intercorrelations among predictors are a challenge for standard inferential statistical techniques such as regression analyses, the methods we apply in this paper can account for a large number of correlated predictors.

Institutional features were directly derived from the data recorded by the institutions. As noted above, not all features were available for all universities, resulting in slightly different feature sets across universities. The engagement features were extracted from the app usage data. As we focused on an early prediction of drop-out, we restricted the data to event logs that were recorded in the respective students' first term. Notably, the data capture students' engagement as a time-stamped series of events, offering fine-grained insights into their daily experience. For reasons of simplicity and interpretability (see research question 2), we collapse the data into a single entry for each student. Specifically, we describe a student's overall experience during the first semester by calculating distribution measures for each student, such as the arithmetic mean, standard deviation, kurtosis, skewness, and sum. For example, we calculate how many daily messages a particular student sent or received during their first semester, or how many campus events they attended in total. We also account for changes in a student's behavior over time by calculating more complex features such as entropy (e.g., the extent to which a person has frequent contact with few people or the same degree of contact with many people) and the development of specific behaviors over time, measured by the slope of regression analyses, as well as features representing the regularity of behavior (e.g., the deviation of the time between sending messages). Overall, the feature set was aimed at describing a student's overall engagement with campus resources and other students during the first semester, as well as changes in engagement over time. Finally, we extracted some of the features separately for weekdays and weekends to account for differences and similarities in students' activities during the week and on the weekend. For example, little social interaction on weekdays might predict retention differently than little social interaction on the weekend.
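As an illustration of this aggregation step, the sketch below derives the named distribution measures for one hypothetical behavior (daily message counts). The table daily_messages and its columns are assumptions for illustration, not the paper's actual schema.

```r
library(dplyr)
library(e1071)  # provides skewness() and kurtosis()

# Shannon entropy of a count vector (here over days; the paper also
# computes entropy over, e.g., contact partners).
shannon_entropy <- function(x) {
  p <- x[x > 0] / sum(x)
  -sum(p * log(p))
}

# daily_messages: one row per student and day
# (columns: student_id, day, n_messages).
features <- daily_messages %>%
  group_by(student_id) %>%
  summarise(
    msg_mean    = mean(n_messages),
    msg_sd      = sd(n_messages),
    msg_skew    = skewness(n_messages),
    msg_kurt    = kurtosis(n_messages),
    msg_sum     = sum(n_messages),
    msg_entropy = shannon_entropy(n_messages),
    msg_slope   = coef(lm(n_messages ~ day))[2]  # linear trend over the term
  )
```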

We further cleaned the data by discarding participants for whom the retention status was missing and participants for whom 95% or more of the values were zero or missing. Furthermore, features were removed if they showed little or no variance across participants, which makes them essentially meaningless in a prediction task. Specifically, we excluded numerical features that showed the same value for more than 90% of observations and categorical features that showed the same value for all observations.
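A minimal sketch of these variance filters, assuming hypothetical data frames X_num and X_cat holding the numeric and categorical features:

```r
# Share of observations taking the single most frequent value.
share_modal <- function(x) max(table(x)) / length(x)

# Drop numeric features whose modal value covers more than 90% of cases,
# and categorical features that are constant.
X_num <- X_num[, sapply(X_num, share_modal) <= 0.9, drop = FALSE]
X_cat <- X_cat[, sapply(X_cat, function(x) length(unique(x)) > 1), drop = FALSE]
```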

In addition to these general pre-processing procedures, we integrated additional pre-processing steps into the resampling prior to training the models to avoid an overestimation of model performance 68 . To prevent problems with categorical features that occur when there are fewer levels in the test than in the training data, we first removed categories that did not occur in the training data. Second, we removed constant categorical features containing a single value only (and therefore no variation). Third, we imputed missing values using the following procedures: categorical features were imputed with the mode. Following commonly used approaches to dealing with missing data, the imputation of numeric features varied between the learners. For the elastic net, we imputed these features with the median. For the random forest, we used twice the maximum to give missing values a distinct meaning that the model could leverage. Lastly, we used the "Synthetic Minority Oversampling Technique" (SMOTE) to create artificial examples for the minority class in the training data 69 . The only exception was University 4, which followed a different procedure due to its large sample size and the computational cost of implementing SMOTE. Instead of oversampling minority cases, we downsampled majority cases such that the positive and negative classes were balanced. This was done to address the class imbalance caused by most students continuing their studies rather than dropping out 12 .
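The learner-specific imputation rules can be summarized in a short sketch (function and variable names are ours, not from the published pipeline):

```r
impute_mode <- function(x) {            # categorical features, both learners
  x[is.na(x)] <- names(which.max(table(x)))
  x
}
impute_median <- function(x) {          # numeric features, elastic net
  x[is.na(x)] <- median(x, na.rm = TRUE)
  x
}
impute_twice_max <- function(x) {       # numeric features, random forest:
  x[is.na(x)] <- 2 * max(x, na.rm = TRUE)  # missingness gets a distinct value
  x
}
```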

Predictive modeling approach

We predicted the retention status (1 = dropped out, 0 = continued) in a binary prediction task, with three sets of features: (1) institutional features, (2) engagement features, and (3) a combined set of all features. To ensure the robustness of our predictions and to identify the model best suited for the current prediction context 54 , we compared a linear classifier (elastic net; implemented in glmnet 4.1-4) 70 , 71 and a nonlinear classifier (random forest; implemented in randomForest 4.7-1) 72 , 73 . Both models are particularly well suited for our prediction context and are common choices in computational social science: simple linear or logistic regression models are not suitable for datasets with many inter-correlated predictors (in our case, a total of 462 predictors, many of which are highly correlated) due to a high risk of overfitting, whereas both the elastic net and the random forest algorithm can effectively utilize large feature sets while reducing that risk. We evaluate the performance of our six models for each school (2 algorithms × 3 feature sets) using out-of-sample benchmark experiments that estimate predictive performance and compare it against a common non-informative baseline model. The baseline represents a null model that does not include any features but instead always predicts the majority class, which in our samples means "continued" 74 . Below, we provide more details about the specific algorithms (i.e., elastic net and random forest), the cross-validation procedure, and the performance metrics we used for model evaluation.

Elastic net model

The elastic net is a regularized regression approach that combines advantages of ridge regression 75 with those of the LASSO 76 and is motivated by the need to handle large feature sets. The elastic net shrinks the beta coefficients of features that add little predictive value (e.g., intercorrelated features or features with little variance). Additionally, the elastic net can effectively remove variables from the model by reducing their beta coefficients to exactly zero 70 . Unlike classical regression models, the elastic net does not simply minimize the sum of squared errors, but adds two penalty terms (L1, L2) that incentivize the model to shrink the estimated beta values of features that do not add information to the model. By combining the L1 penalty (the sum of the absolute values of the coefficients) and the L2 penalty (the sum of the squared values of the coefficients), the elastic net addresses the limitations of alternative linear models such as LASSO regression (not capable of handling multi-collinearity) and ridge regression (may not produce sufficiently sparse solutions) 70 .

Formally, following Hastie and Qian (2016), the elastic net model for binary classification problems can be written as follows 77 . Suppose the response variable takes values in G = {0, 1}, and let y_i denote I(g_i = 1); the model formula is

$$\Pr(G = 1 \mid x) = \frac{e^{\beta_0 + x^\top \beta}}{1 + e^{\beta_0 + x^\top \beta}}.$$

After applying the log-odds transformation, the model formula can be written as

$$\log \frac{\Pr(G = 1 \mid x)}{\Pr(G = 0 \mid x)} = \beta_0 + x^\top \beta.$$

The objective function for logistic regression is the penalized negative binomial log-likelihood

$$\min_{(\beta_0, \beta)} \; -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \left(\beta_0 + x_i^\top \beta\right) - \log\left(1 + e^{\beta_0 + x_i^\top \beta}\right) \right] + \lambda \left[ \frac{1 - \alpha}{2} \lVert \beta \rVert_2^2 + \alpha \lVert \beta \rVert_1 \right],$$

where λ is the regularization parameter that controls the overall strength of the regularization, and α is the mixing parameter that controls the balance between the L1 and L2 penalties, with values of α closer to one yielding sparser models (lasso regression: α = 1; ridge regression: α = 0). β represents the coefficients of the regression model, ‖β‖₁ is the L1 norm of the coefficients (the sum of the absolute values of the coefficients), and ‖β‖₂² is the squared L2 norm of the coefficients (the sum of the squared values of the coefficients).

The regularized regression approach is especially relevant for our model because many of the app-based engagement features are highly correlated (e.g., the number of clicks is related to the number of activities registered in the app). In addition, we favored the elastic net algorithm over more complex alternatives, because the regularized beta coefficients can be interpreted as feature importance, allowing insights into which predictors are most informative of college dropout 78 , 79 .
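For illustration, a minimal sketch of fitting such a model with the glmnet package used in this paper. The numeric feature matrix X, the 0/1 dropout vector y, and the held-out matrix X_test are hypothetical, and alpha = 0.5 is an arbitrary mixing value shown for concreteness; the paper tunes it (see "Hyperparameter tuning" below).

```r
library(glmnet)

# Internal cross-validation selects lambda along glmnet's default path.
cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 0.5,
                   type.measure = "auc", nfolds = 5)

coef(cvfit, s = "lambda.1se")   # regularized coefficients; zeros drop features
p_hat <- predict(cvfit, newx = X_test, s = "lambda.min", type = "response")
```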

Random forest model

Random forest models are a widely used ensemble learning method that grows many bagged and decorrelated decision trees to come up with a "collective" prediction of the outcome (i.e., the outcome that is chosen by most trees in a classification problem) 72 . Individual decision trees recursively split the feature space (learning rules to distinguish classes) with the goal of separating the different classes of the criterion (drop out vs. remain, in our case). For a detailed description of how individual decision trees operate and translate to a random forest, see Pargent, Schoedel & Stachl 80 .

Unlike the elastic net, random forest models can account for nonlinear associations between features and the criterion and automatically include multi-dimensional interactions between features. Each decision tree in a random forest considers a random subset of bootstrapped cases and features, thereby increasing the variance of predictions across trees and the robustness of the overall prediction. For the split in each node of each tree, a random subset of features (the mtry hyperparameter, which we optimize in our models) is drawn from the total set. For each split, all combinations of split variables and split points are compared, and the model chooses the split that best separates the classes 72 .

The random forest algorithm can be formally described as follows (verbatim from Hastie et al., 2016, p. 588):

1. For b = 1 to B:

   (a) Draw a bootstrap sample of size N from the training data.

   (b) Grow a decision tree on the bootstrapped data by recursively repeating the following steps for each terminal node of the tree, until the minimum node size is reached:

      i. Select m variables at random from the p variables.

      ii. Pick the best variable/split-point among the m according to the loss function (in our case, Gini-impurity decrease).

      iii. Split the node into two daughter nodes.

2. Output the ensemble of trees.

New predictions can then be made by generating a prediction for each tree and aggregating the results using majority vote.

The aggregation of predictions across trees improves prediction performance compared to individual decision trees: the ensemble benefits from the variance across trees and greatly reduces it when arriving at a single prediction 72 , 81 .
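For illustration, a minimal sketch of fitting such a model with the randomForest package used in this paper. X, y, and X_test are hypothetical, and the shown mtry and nodesize values are placeholders for the tuned ones.

```r
library(randomForest)

rf <- randomForest(x = X, y = factor(y), ntree = 500,
                   mtry = floor(sqrt(ncol(X))), nodesize = 1,
                   importance = TRUE)

p_hat <- predict(rf, newdata = X_test, type = "prob")[, "1"]  # dropout probability
importance(rf, type = 1)  # permutation importance (mean decrease in accuracy)
```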

(Nested) Cross-validation: Out-of-sample model evaluation

We evaluate the performance of our predictive models using an out-of-sample validation approach. The idea behind out-of-sample validation is to increase the likelihood that a model will accurately predict student dropout on new data (e.g., new students) by using different data when training and evaluating the model. A commonly used, efficient technique for out-of-sample validation is to repeatedly fit (training) and evaluate (testing) models on non-overlapping parts of the same dataset and to combine the individual estimates across multiple iterations. This procedure – known as cross-validation – can also be used for model optimization (e.g., hyperparameter tuning, pre-processing, variable selection) by repeatedly evaluating different settings for optimal predictive performance. When both approaches are combined, the evaluation and optimization steps need to be performed in a nested fashion to ensure a strict separation of training and test data and hence a realistic estimate of out-of-sample performance. The general idea is to emulate all modeling steps in each fold of the resampling as if it were a single in-sample model. Here, we use nested cross-validation to estimate the predictive performance of our models, to optimize model hyperparameters, and to pre-process the data. We illustrate the procedure in Fig. 1.

Figure 1. Schematic cross-validation procedure for out-of-sample predictions. The figure shows a tenfold cross-validation in the outer loop, which is used to estimate the overall performance of the model by comparing the predicted outcomes for each student in the previously unseen test set with their actual outcomes. Within each of the 10 outer loops, a fivefold cross-validation in the inner loop is used to fine-tune model hyperparameters by evaluating different model settings.

The cross-validation procedure works as follows: say we have a dataset with 1,000 students. In a first step, the dataset is split into ten different subsamples, each containing data from 100 students. In the first round, nine of these subsamples are used for training (i.e., fitting the model to estimate parameters; green boxes). That means the data from the first 900 students are used to train the model to relate the different features to the retention outcome. Once training is completed, the model's performance can be evaluated on the data of the remaining 100 students (i.e., the test dataset; blue boxes). For each student, the actual outcome is compared to the predicted outcome (retained or discontinued; grey and black figures). This comparison allows for the calculation of various performance metrics (see "Performance metrics" section below for more details). In contrast to the application of traditional inferential statistics, the evaluation process in predictive models separates the data used to train a model from the data used to evaluate it. Hence, any overfitting that occurs at the training stage (e.g., through researcher degrees of freedom or the model learning relationships that are unique to the training data) hurts the predictive performance at the testing stage. To further increase the robustness of findings and leverage the entire dataset, this process is repeated for all 10 subsamples, such that each subsample is used nine times for training and once for testing. Finally, the estimates obtained from those ten iterations are aggregated to arrive at a cross-validated estimate of model performance. This tenfold cross-validation procedure is referred to as the "outer loop".

In addition to the outer loop, our models also contain an "inner loop". The inner loop consists of an additional cross-validation procedure that is used to identify ideal hyperparameter settings (see "Hyperparameter tuning" section below). That is, in each of the ten iterations of the outer loop, the training sample is further divided into training and test sets to identify the best parameter constellation before model evaluation in the outer loop. We used fivefold cross-validation in the inner loop. All analysis scripts for the pre-processing and modeling steps are available on OSF ( https://osf.io/bhaqp/?view_only=629696d6b2854aa9834d5745425cdbbc ).
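The logic of the nested procedure can be sketched in a few lines of R. In this simplified illustration, cv.glmnet's internal fivefold cross-validation stands in for the inner tuning loop (the actual pipeline also tunes alpha and the SMOTE settings there), and X and y are hypothetical.

```r
library(glmnet)

# AUC via the Mann-Whitney rank statistic.
auc <- function(p, y) {
  r <- rank(p)
  (sum(r[y == 1]) - sum(y == 1) * (sum(y == 1) + 1) / 2) /
    (sum(y == 1) * sum(y == 0))
}

set.seed(1)
outer_folds <- sample(rep(1:10, length.out = nrow(X)))  # tenfold outer loop

outer_auc <- sapply(1:10, function(k) {
  train <- outer_folds != k
  # Inner loop: fivefold CV on the training part selects lambda.
  inner <- cv.glmnet(X[train, ], y[train], family = "binomial",
                     alpha = 0.5, type.measure = "auc", nfolds = 5)
  # Evaluate on the held-out outer fold only.
  p <- predict(inner, newx = X[!train, ], s = "lambda.min", type = "response")
  auc(as.numeric(p), y[!train])
})
mean(outer_auc)  # cross-validated estimate of out-of-sample performance
```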

Performance metrics

We evaluate model performance based on four different metrics. Our main metric for model performance is the AUC (area under the receiver operating characteristic curve). The AUC is commonly used to assess the performance of a model over a 50% chance baseline and can range between 0 and 1. It captures the area under the ROC curve, which plots the true positive rate (TPR, or recall; i.e., the percentage of correctly classified dropouts among all students who actually dropped out) against the false positive rate (FPR; i.e., the percentage of students erroneously classified as dropouts among all students who actually continued). When the AUC is 0.5, the model's predictive performance is equal to chance (a coin flip). The closer it is to 1, the better the model distinguishes between students who continued and those who dropped out.

In addition, we report the F1 score, which ranges between 0 and 1 82 . The F1 score is based on the model’s positive predictive value (or precision, i.e., the percentage of correctly classified dropouts among all students predicted to have dropped out) as well as the model's TPR. A high F1 score hence indicates that there are both few false positives and few false negatives.

Given the specific context, we also report the TPR and the true negative rate (TNR; i.e., the percentage of students predicted to continue among all students who actually continued). Depending on their objective, universities might place a stronger emphasis on optimizing the TPR, to make sure that no student who is at risk of dropping out gets overlooked, or on optimizing the TNR, to save resources and ensure that students do not get overly burdened. Notably, in most cases universities are likely to strive for a balance between the two, which is reflected in our main AUC measure. All reported performance metrics represent the mean predictive performance across the 10 cross-validation folds of the outer loop 54 .
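For concreteness, the four metrics can be computed from a vector of predicted dropout probabilities p and actual outcomes y (1 = dropped out) as sketched below; the 0.5 classification threshold is an assumption for illustration.

```r
pred <- as.integer(p >= 0.5)                 # threshold the probabilities
tp <- sum(pred == 1 & y == 1); fn <- sum(pred == 0 & y == 1)
fp <- sum(pred == 1 & y == 0); tn <- sum(pred == 0 & y == 0)

tpr <- tp / (tp + fn)                        # recall / sensitivity
tnr <- tn / (tn + fp)                        # specificity
precision <- tp / (tp + fp)                  # positive predictive value
f1 <- 2 * precision * tpr / (precision + tpr)

r <- rank(p)                                 # AUC via the Mann-Whitney statistic
auc <- (sum(r[y == 1]) - sum(y == 1) * (sum(y == 1) + 1) / 2) /
  (sum(y == 1) * sum(y == 0))
```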

Hyperparameter tuning

We used a randomized search with 50 iterations and fivefold cross-validation for hyperparameter tuning in the inner loop of our cross-validation. The randomized search algorithm fits models with hyperparameter configurations randomly selected from a previously defined hyperparameter space and then picks the configuration that shows the best generalized performance averaged over the five cross-validation folds. The best hyperparameter configuration is then used for training in the outer resampling loop to evaluate model performance.

For the elastic net classifier, we tuned the regularization parameter lambda, the decision rule used to choose lambda, and the L1-ratio parameter. The search space for lambda encompassed the 100 glmnet default values 71 . The space of decision rules for lambda included lambda.min, which chooses the value of lambda that results in the minimum mean cross-validation error, and lambda.1se, which chooses the value of lambda that results in the most regularized model whose cross-validation error remains within one standard error of the minimum. The search space for the L1-ratio parameter covered the range from 0 (ridge) to 1 (lasso). For the random forest classifier, we tuned the number of features considered for each split within a decision tree (mtry) and the minimum node size (i.e., how many cases are required to be left in the resulting end nodes of the tree). The search space for mtry was set to a range of 1 to p, where p represents the dimensionality of the feature space. The search space for the minimum node size was set to a range of 1 to 5. Additionally, for both models, we tuned the oversampling rate and the number of neighbors used by the SMOTE algorithm to generate new samples. The oversampling rate was set to a range of 2 to 15 and the number of nearest neighbors to a range of 1 to 10.
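A minimal sketch of such a randomized search for the random forest's two hyperparameters, reusing the auc() helper from the nested cross-validation sketch above (X and y remain hypothetical; the paper additionally tunes the SMOTE settings, which we omit here).

```r
library(randomForest)

set.seed(1)
# 50 random (mtry, nodesize) configurations from the stated search spaces.
configs <- data.frame(mtry     = sample(1:ncol(X), 50, replace = TRUE),
                      nodesize = sample(1:5, 50, replace = TRUE))
folds <- sample(rep(1:5, length.out = nrow(X)))   # fivefold inner CV

cv_auc <- sapply(seq_len(nrow(configs)), function(i) {
  mean(sapply(1:5, function(k) {
    train <- folds != k
    rf <- randomForest(x = X[train, ], y = factor(y[train]),
                       mtry = configs$mtry[i], nodesize = configs$nodesize[i])
    p <- predict(rf, newdata = X[!train, ], type = "prob")[, "1"]
    auc(p, y[!train])
  }))
})
configs[which.max(cv_auc), ]  # best configuration, refit in the outer loop
```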

RQ1: How accurately can we predict a student's likelihood of discontinuing their studies using information from the first term of their studies?

Figure 2 displays AUC scores (Y-axis) across the different universities (rows), separated by feature set (colors) and predictive algorithm (X-axis labels). The figure shows the distribution of AUC accuracies across the 10 cross-validation folds, alongside their means and standard deviations. Independent t-tests using Holm corrections for multiple comparisons indicate statistically significant differences in predictive performance across the different models and feature sets within each university. Table 2 provides the predictive performance across all four metrics.

Figure 2. AUC performance across the four universities for different feature sets and models. Inst. = Institutional data. Engag. = Engagement data. (EN) = Elastic Net. (RF) = Random Forest.

Overall, our models showed high predictive accuracy across universities, models, feature sets, and performance metrics, significantly outperforming the baseline in all instances. The main performance metric, AUC, reached an average of 73% (where 50% is chance), with a maximum of 88% for the random forest model and the full feature set at University 1. Both institutional features and engagement features contributed significantly to predictive performance, highlighting the fact that a student's likelihood of dropping out is a function of both their more stable socio-demographic characteristics and their experience of campus life. In most cases, the joint model (i.e., the combination of institutional and engagement features) performed better than either of the individual models alone. Finally, the random forest models produced higher predictive performance than the elastic net in most cases (average AUC: elastic net = 70%, random forest = 75%), suggesting that the features are likely to interact with one another in predicting student retention and might not always be linearly related to the outcome.

RQ2: Which features are the most predictive of student retention?

To provide insights into the underlying relationships between student retention and socio-demographic as well as behavioral features, we examined two indicators of feature importance that each offer unique insights. First, we calculated the zero-order correlations between the features and the outcome for each of the four universities. We chose zero-order correlations over elastic net coefficients because they represent the relationships unaltered by the model's regularization procedure (i.e., the relationship between a feature and the outcome is shown independently of the importance of the other features in the model). To improve the robustness of our findings, we only included variables that passed the threshold for data inclusion in our models and had less than 50% of their data imputed. The top third of Table 3 displays the 10 most important features (i.e., those with the highest absolute correlation with retention). The sign in brackets indicates the direction of the effect, with (+) indicating a protective factor and (−) indicating a risk factor. Features that showed up in the top 10 for more than one university are highlighted in bold.

Second, we calculated feature importance scores for the elastic net and random forest models. For the elastic net model, feature importance is reported as the model coefficient after shrinkage according to each feature's incremental predictive power. Compared to the zero-order correlations, the elastic net coefficients hence identify the features with the strongest unique contributions. For the random forest models, feature importance is reported as permutation importance, a model-agnostic metric that estimates the importance of a feature by observing the drop in predictive performance when the actual association between the feature and the outcome is broken by randomly shuffling observations 72 , 83 . A feature is considered important if shuffling its values increases the model error (and therefore decreases the model's predictive performance). In contrast to the coefficients from the elastic net model, the permutation feature importance scores are undirected and do not provide insights into the specific nature of the relationship between the feature and the outcome. However, they account for the fact that some features might not be predictive by themselves but could still prove valuable to overall model performance because they moderate the impact of other features. For example, minority or first-generation students might benefit more from being embedded in a strong social network than majority students, who do not face the same barriers and are likely to have a stronger external support network. The bottom of Table 3 displays the 10 most important features in the elastic net and random forest models (i.e., those with the highest permutation variable importance).

Supporting the findings reported under RQ1, the zero-order correlations confirm that both institutional and behavioral engagement features play an important role in predicting student retention. Aligned with prior work, students' performance (measured by GPA or ACT) repeatedly appeared among the most important predictors across universities and models. In addition, many of the engagement features (e.g., services attended, chat messages, network centrality) relate to social activities or network characteristics, supporting the notion that a student's social connections and support play a critical role in student retention. Moreover, the extent to which students are positively engaged with their institution (e.g., by attending events and rating them highly) appears to play a critical role in preventing dropout.

RQ3: How well do the predictive models generalize across universities?

To test the generalizability of our models across universities, we used the predictive model trained on one university (e.g., University 1) to predict retention at the remaining three universities (e.g., Universities 2–4). Figure 3A,B displays the AUCs across all possible pairs, indicating which university was used for training (X-axis) and which was used for testing (Y-axis; see Figure S1 in the SI for graphs illustrating the findings for F1, TNR, and TPR).

Figure 3. Performance (average AUC) of cross-university predictions.

Overall, we observed reasonably high levels of predictive performance when applying a model trained on one university to the data of another. The average AUC was 63% (for both the elastic net and the random forest), with the highest predictive performance reaching 74% (trained on University 1, predicting University 2), just one percentage point short of the performance observed for the university's own model (trained on University 2, predicting University 2). Contrary to the findings in RQ1, the random forest models did not perform better than the elastic net when making predictions for other universities. This suggests that the benefits afforded by the random forest models stem from complex interaction patterns that are somewhat unique to each university and might not generalize well to new contexts. The main outlier in generalizability was University 4: none of the other universities' models reached accuracies much better than chance when predicting University 4, and University 4's own model produced relatively low accuracies when predicting student retention at Universities 1–2. This is likely a result of the fact that University 4 was qualitatively different from the other universities in several ways, including that it was a community college consisting of 16 different campuses that were merged for the purpose of this analysis (see Methods for more details).

We show that student retention can be predicted from institutional data, behavioral engagement data, and their combination. Using data from over 50,000 students across four universities, our predictive models achieve out-of-sample accuracies of up to 88% (where 50% is chance). Notably, while both institutional data and behavioral engagement data significantly predict retention, the combination of the two performs best in most instances. This finding is further supported by our feature importance analyses, which suggest that both institutional and behavioral engagement features are among the most important predictors of student retention. Specifically, academic performance as measured by GPA, behavioral metrics associated with campus engagement (e.g., event attendance or ratings), and a student's position in the network (e.g., closeness or centrality) were shown to consistently act as protective factors. Finally, we highlight the generalizability of our models across universities. Models trained on one university were able to predict student retention at another with reasonably high predictive performance. As one might expect, generalizability across universities depends heavily on the extent to which the universities are similar on important structural dimensions, with prediction accuracies dropping radically in cases where similarity is low (see the low cross-generalization for University 4).

Contributions to the scientific literature

Our findings contribute to the existing literature in several ways. First, they respond to recent calls for more predictive research in psychology 54 , 55 as well as for the use of Big Data analytics in education research 56 , 57 . Not only do our models consider socio-demographic characteristics that are collected by universities, but they also capture students' daily experience and university engagement by tracking behaviors via the READY Education app. Our findings suggest that these more psychological predictors of student retention can improve the performance of predictive models above and beyond socio-demographic variables. This is consistent with previous findings suggesting that the inclusion of engagement metrics improves the performance of predictive models 16 , 84 , 85 . Overall, our models showed superior accuracies to models from former studies that were trained only on demographics and transcript records 15 , 25 or less comprehensive behavioral features 16 , and provided results comparable to those reported in studies that additionally included a wide range of socio-economic variables 12 . Given that the READY Education app captures only a fraction of students' actual experience, the high predictive accuracies make an even stronger case for the importance of student engagement in college retention.

Second, our findings provide insights into the features that are most important in predicting whether a student is going to drop out or not. By doing so, they complement our predictive approach with layers of understanding that are conducive not only to validating our models but also to generating insights into potential protective and risk factors. Most importantly, our findings highlight the relevance of the behavioral engagement metrics for predicting student retention. Most features identified as being important in the prediction were related to app and community engagement. In line with previous research, features indicative of early and deep social integration, such as interactions with peers and faculty or the development of friendships and social networks, were found to be highly predictive 16 , 41 . For example, it is reasonable to assume that a short time between app registration and the first visit to a campus event (one of the features identified as important) has a positive impact on retention, because campus events offer ideal opportunities for students to socialize 86 . Early participation in a campus event implies early integration and networking with others, protecting students from perceived stress 87 and providing better social and emotional support 88 . In contrast, a student who never attends an event or does so very late in the semester may be less connected to campus life and the student community, which in turn increases the likelihood of dropping out. This interpretation is strengthened by the fact that a high proportion of positive event ratings was identified as an important predictor of a student continuing their studies. Students who enjoy an event are likely to feel more comfortable, be embedded in university life, make more connections, and build stronger connections. This might result in a virtuous cycle in which students continue attending events and over time create a strong social connection to their peers. As in most previous work, a high GPA was consistently related to a higher likelihood of continuing one's studies 21 , 24 . Although its importance varied across universities, ethnicity was also found to play a major role in retention, with consistent inequalities replicating in our predictive models 12 , 19 , 47 . For example, Black students were on average more likely to drop out, suggesting that universities should dedicate additional resources to supporting this group. Importantly, all qualitative interpretations are post hoc. While many of the findings are intuitive and align with previous research on the topic, future studies should validate our results and investigate the causality underlying the effects in experimental or longitudinal within-person designs 54 , 78 .

Finally, our findings are the first to explore the extent to which the relationships between certain socio-demographic and behavioral characteristics and student retention might be idiosyncratic and unique to a specific university. By comparing the models across four different universities, we were able to show that many of the insights gained from one university can be leveraged to predict student retention at another. However, our findings also point to important boundary conditions: the more dissimilar universities are in their organizational structures and student experience, the more idiosyncratic the patterns linking socio-demographic and behavioral features to student retention will be, and the harder it becomes to simply transfer general insights to a specific university campus.

Practical contributions

Our findings also have important practical implications. In the US, student attrition results in an annual revenue loss of approximately $16.5 billion 9 , 10 and over $9 billion wasted in federal and state grants and subsidies that are awarded to students who do not finish their degree 11 . Hence, it is critical to predict potential dropouts as early and as accurately as possible, to be able to offer dedicated support and allocate resources where they are needed the most. Our models rely exclusively on data collected in the first semester at university and are therefore an ideal "early warning" system for universities that want to predict whether their students will likely continue their studies or drop out at some point. Depending on a university's resources and goals, the predictive models can be optimized for different performance measures. Indeed, a university might decide to focus on the true positive rate to capture as many dropouts as possible. While this would mean erroneously classifying "healthy" students as potential dropouts, universities might decide that the burden of providing "unnecessary" support to these students is worth the reduced risk of missing a dropout. Importantly, our models go beyond mere socio-demographic variables and allow for a more nuanced, personal model that considers not just "who someone is" but also what their experience on campus looks like. As such, our models make it possible to acknowledge individuality rather than relying on over-generalized assessments of entire socio-demographic segments.

Importantly, however, it is critical to subject these models to continuous quality assurance. While predictive models could allow universities to flag at-risk students early, they could also perpetuate biases that become calcified in the predictive models themselves. For example, students who are traditionally less likely to discontinue their studies might have to display a much higher level of dysfunctional engagement behavior before their file gets flagged as "at-risk". Similarly, a person from a traditionally underrepresented group might receive an unnecessarily high volume of additional check-ins even though they are generally flourishing in their day-to-day experience. Given that being labeled "at-risk" can carry stigma, which could disproportionately burden historically marginalized groups, it will be critical to monitor both the performance of the model over time and the perception of its helpfulness among administrators, faculty, and students.

Limitations and future research

Our study has several limitations and highlights avenues for future research. First, our sample consisted of four US universities. Thus, our results are not necessarily generalizable to countries with more collectivistic cultures and other education systems, such as those in Asia, where the reasons for dropping out might differ 89 , 90 , or in Europe, where most students work part-time jobs and live off campus. Future research should investigate the extent to which our models generalize to other cultural contexts and identify the features of student retention that are universally valid across contexts.

Second, our predictive models relied on app usage data. Therefore, our predictive approach could only be applied to students who decided to use the app. This selection, in and of itself, is likely to introduce a sampling bias, as students who decide to use the app might be more likely to be retained in the first place, restricting the variance in observations and excluding students for whom app usage data were not available. However, as our findings suggest, the institutional data alone provide predictive power independent of the app features, making them a viable alternative for students who do not use the app.

Third, our predictive models rely on cross-sectional predictions. That is, we observe a student's behavior over the course of an entire semester and, based on the patterns observed in other students, predict whether that student is likely to drop out or not. Future research could try to improve both the predictive performance of the model and its usefulness for applied contexts by modeling within-person trends dynamically. Given enough data, a model could learn a person's baseline behavior and flag deviations from that baseline as potentially problematic. For instance, more social contact with other students is considered a protective factor in our cross-sectional model; however, there are substantial individual differences in how much social contact individuals seek out and enjoy 91 . Hence, sending 10 chat messages a week might be a lot for one person but very little for another. Future research should therefore investigate whether the behavioral engagement features allow for a more dynamic within-person model that takes base rates into account and provides a dynamic, momentary assessment of a student's likelihood of dropping out.

Fourth, although the engagement data were captured as a longitudinal time series of time-stamped events, we collapsed the data into a single set of cross-sectional features for each student. Although some of these features capture variation in behaviors over time (e.g., entropy and linear trends), future research should try to implement more advanced machine learning models that account for the time series data directly. For example, long short-term memory models (LSTMs) 92 – a type of recurrent neural network – are capable of learning patterns in longitudinal, sequential data like ours.

Fifth, even though the current research provides initial insights into the workings of the models by highlighting the importance of certain features, the conclusions that can be drawn from these analyses are limited as the importance metrics are calculated for the overall population. Future research could aim to calculate the importance of certain features at the individual level to test whether their importance varies across certain socio-demographic features. Estimating the importance of a person’s position in the social network on an individual level, for example, would make it possible to see whether the importance is correlated with institutional data such as minority or first-generation status.

Finally, our results lay the foundation for developing interventions that foster retention by shaping students' experience at university 93 . Interventions that have been shown to have a positive effect on retention include orientation programs and academic advising 94 , student support services like mentoring and coaching, as well as need-based grants 95 . However, to date, the first-year experience programs meant to strengthen the social integration of first-year students do not seem to have yielded positive results 96 , 97 . Our findings could support the development of interventions aimed at improving and maintaining student integration on campus. On a high level, the insights into the most important features provide an empirical path for developing relevant interventions that target the most important levers of student retention. For example, the fact that the time between registration and the first event attendance has such a big impact on student retention means that universities should do everything they can to get students to attend events as early as possible. Similarly, they could develop interventions that lead to more cohesive networks among cohorts and make sure that all students connect to their community. On a deeper, more sophisticated level, new approaches to model explainability could allow universities to tailor their interventions to each student 98 , 99 . For example, explainable AI makes it possible to derive decision rules for each student, indicating which features were critical in predicting the student's outcome. While student A might be predicted to drop out because they are disconnected from the network, student B might be predicted to drop out because they do not access the right information on the app. Given this information, universities would be able to personalize their offerings to the specific needs of each student. While student A might be encouraged to spend more time socializing with other students, student B might be reminded to check out important course information. Hence, predictive models could not only be used to identify students at risk but also provide an automated path to offering personalized guidance and support.

For every student who discontinues their studies, an educational dream shatters. And every shattered dream has negative long-term consequences for both the student and the university they attended. In this study, we introduce an approach to accurately predicting student retention after the first term. Our results show that student retention can be predicted with relatively high levels of predictive performance when considering institutional data, behavioral engagement data, or a combination of the two. By combining socio-demographic characteristics with passively observed behavioral traces reflecting students' daily activities, our models offer a holistic picture of students' university experience and its relation to retention. Overall, such predictive models have great potential both for the early identification of at-risk students and for enabling timely, evidence-based interventions.

Data availability

Raw data are not publicly available due to their proprietary nature and the risks associated with de-anonymization, but they are available from the corresponding author on reasonable request. The pre-processed data and all analysis code are available on OSF ( https://osf.io/bhaqp/ ) to facilitate the reproducibility of our work. Data were analyzed using R, version 4.0.0 (R Core Team, 2020; see subsections for the specific packages and versions used). The study relies on secondary data and the analyses were not preregistered.

Change history

21 June 2023

A Correction to this paper has been published: https://doi.org/10.1038/s41598-023-36579-2

Ginder, S. A., Kelly-Reid, J. E. & Mann, F. B. Graduation Rates for Selected Cohorts, 2009–14; Outcome Measures for Cohort Year 2009–10; Student Financial Aid, Academic Year 2016–17; and Admissions in Postsecondary Institutions, Fall 2017. First Look (Provisional Data). NCES 2018–151. National Center for Education Statistics (2018).

Snyder, T. D., de Brey, C. & Dillow, S. A. Digest of Education Statistics 2017 NCES 2018-070. Natl. Cent. Educ. Stat. (2019).

NSC Research Center. Persistence & Retention – 2019. NSC Research Center https://nscresearchcenter.org/snapshotreport35-first-year-persistence-and-retention/ (2019).

Bound, J., Lovenheim, M. F. & Turner, S. Why have college completion rates declined? An analysis of changing student preparation and collegiate resources. Am. Econ. J. Appl. Econ. 2 , 129–157 (2010).


Bowen, W. G., Chingos, M. M. & McPherson, M. S. Crossing the finish line. in Crossing the Finish Line (Princeton University Press, 2009).

McFarland, J. et al. The Condition of Education 2019. NCES 2019-144. Natl. Cent. Educ. Stat. (2019).

U.S. Department of Education. Fact sheet: Focusing higher education on student success [Fact sheet] (2015).

Freudenberg, N. & Ruglis, J. Peer reviewed: Reframing school dropout as a public health issue. Prev. Chronic Dis. 4 , 4 (2007).


Raisman, N. The cost of college attrition at four-year colleges & universities-an analysis of 1669 US institutions. Policy Perspect. (2013).

Wellman, J., Johnson, N. & Steele, P. Measuring (and Managing) the Invisible Costs of Postsecondary Attrition. Policy brief. Delta Cost Proj. Am. Instit. Res. (2012).

Schneider, M. Finishing the first lap: The cost of first year student attrition in America’s four year colleges and universities (American Institutes for Research, 2010).

Delen, D. A comparative analysis of machine learning techniques for student retention management. Decis. Support Syst. 49 , 498–506 (2010).


Yu, R., Lee, H. & Kizilcec, R. F. Should College Dropout Prediction Models Include Protected Attributes? in Proceedings of the Eighth ACM Conference on Learning@ Scale 91–100 (2021).

Tinto, V. Reconstructing the first year of college. Plan. High. Educ. 25 , 1–6 (1996).

Ortiz-Lozano, J. M., Rua-Vieites, A., Bilbao-Calabuig, P. & Casadesús-Fa, M. University student retention: Best time and data to identify undergraduate students at risk of dropout. Innov. Educ. Teach. Int. 57 , 74–85 (2020).

Ram, S., Wang, Y., Currim, F. & Currim, S. Using big data for predicting freshmen retention. in 2015 international conference on information systems: Exploring the information frontier, ICIS 2015 (Association for Information Systems, 2015).

Levitz, R. S., Noel, L. & Richter, B. J. Strategic moves for retention success. N. Dir. High. Educ. 1999 , 31–49 (1999).

Veenstra, C. P. A strategy for improving freshman college retention. J. Qual. Particip. 31 , 19–23 (2009).

Astin, A. W. How "good" is your institution's retention rate? Res. High. Educ. 38, 647–658 (1997).

Coleman, J. S. Social capital in the creation of human capital. Am. J. Sociol. 94 , S95–S120 (1988).

Reason, R. D. Student variables that predict retention: Recent research and new developments. J. Stud. Aff. Res. Pract. 40 , 704–723 (2003).

Tinto, V. Dropout from higher education: A theoretical synthesis of recent research. Rev Educ Res 45 , 89–125 (1975).

Tinto, V. Completing college: Rethinking institutional action (University of Chicago Press, 2012).


Astin, A. Retaining and Satisfying Students. Educ. Rec. 68 , 36–42 (1987).

Aulck, L., Velagapudi, N., Blumenstock, J. & West, J. Predicting student dropout in higher education. arXiv preprint arXiv:1606.06364 (2016).

Bogard, M., Helbig, T., Huff, G. & James, C. A comparison of empirical models for predicting student retention (Western Kentucky University, 2011).

Murtaugh, P. A., Burns, L. D. & Schuster, J. Predicting the retention of university students. Res. High. Educ. 40 , 355–371 (1999).

Porter, K. B. Current trends in student retention: A literature review. Teach. Learn. Nurs. 3 , 3–5 (2008).

Thomas, S. L. Ties that bind: A social network approach to understanding student integration and persistence. J. High. Educ. 71 , 591–615 (2000).

Peltier, G. L., Laden, R. & Matranga, M. Student persistence in college: A review of research. J. Coll. Stud. Ret. 1 , 357–375 (2000).

Nandeshwar, A., Menzies, T. & Nelson, A. Learning patterns of university student retention. Expert Syst. Appl. 38 , 14984–14996 (2011).

Boero, G., Laureti, T. & Naylor, R. An econometric analysis of student withdrawal and progression in post-reform Italian universities. (2005).

Tinto, V. Leaving college: Rethinking the causes and cures of student attrition (ERIC, 1987).

Choy, S. Students whose parents did not go to college: Postsecondary access, persistence, and attainment. Findings from the condition of education, 2001. (2001).

Ishitani, T. T. Studying attrition and degree completion behavior among first-generation college students in the United States. J. High. Educ. 77 , 861–885 (2006).

Thayer, P. B. Retention of students from first generation and low income backgrounds. (2000).

Britt, S. L., Ammerman, D. A., Barrett, S. F. & Jones, S. Student loans, financial stress, and college student retention. J. Stud. Financ. Aid 47 , 3 (2017).

McKinney, L. & Burridge, A. B. Helping or hindering? The effects of loans on community college student persistence. Res. High Educ. 56 , 299–324 (2015).

Hochstein, S. K. & Butler, R. R. The effects of the composition of a financial aids package on student retention. J. Stud. Financ. Aid 13 , 21–26 (1983).

Singell, L. D. Jr. Come and stay a while: Does financial aid effect retention conditioned on enrollment at a large public university?. Econ. Educ. Rev. 23 , 459–471 (2004).

Bean, J. P. Nine themes of college student. Coll. Stud. Retent. Formula Stud. Success 215 , 243 (2005).

Tinto, V. Through the eyes of students. J. Coll. Stud. Ret. 19 , 254–269 (2017).

Cabrera, A. F., Nora, A. & Castaneda, M. B. College persistence: Structural equations modeling test of an integrated model of student retention. J. High. Educ. 64 , 123–139 (1993).

Roberts, J. & Styron, R. Student satisfaction and persistence: Factors vital to student retention. Res. High. Educ. J. 6 , 1 (2010).

Gopalan, M. & Brady, S. T. College students’ sense of belonging: A national perspective. Educ. Res. 49 , 134–137 (2020).

Hoffman, M., Richmond, J., Morrow, J. & Salomone, K. Investigating "sense of belonging" in first-year college students. J. Coll. Stud. Ret. 4, 227–256 (2002).

Terenzini, P. T. & Pascarella, E. T. Toward the validation of Tinto’s model of college student attrition: A review of recent studies. Res. High Educ. 12 , 271–282 (1980).

Astin, A. W. The impact of dormitory living on students. Educational record (1973).

Astin, A. W. Student involvement: A developmental theory for higher education. J. Coll. Stud. Pers. 25 , 297–308 (1984).

Terenzini, P. T. & Pascarella, E. T. Studying college students in the 21st century: Meeting new challenges. Rev. High Ed. 21 , 151–165 (1998).

Thompson, J., Samiratedu, V. & Rafter, J. The effects of on-campus residence on first-time college students. NASPA J. 31 , 41–47 (1993).

Tinto, V. Research and practice of student retention: What next?. J. Coll. Stud. Ret. 8 , 1–19 (2006).

Lazer, D. et al. Computational social science. Science 323, 721–723 (2009).

Yarkoni, T. & Westfall, J. Choosing prediction over explanation in psychology: Lessons from machine learning. Perspect. Psychol. Sci. 12 , 1100–1122 (2017).

Peters, H., Marrero, Z. & Gosling, S. D. The Big Data toolkit for psychologists: Data sources and methodologies. in The psychology of technology: Social science research in the age of Big Data. 87–124 (American Psychological Association, 2022). doi: https://doi.org/10.1037/0000290-004 .

Fischer, C. et al. Mining big data in education: Affordances and challenges. Rev. Res. Educ. 44 , 130–160 (2020).

Hilbert, S. et al. Machine learning for the educational sciences. Rev. Educ. 9 , e3310 (2021).

National Academy of Education. Big data in education: Balancing the benefits of educational research and student privacy . (2017).

Aulck, L., Nambi, D., Velagapudi, N., Blumenstock, J. & West, J. Mining university registrar records to predict first-year undergraduate attrition. Int. Educ. Data Min. Soc. (2019).

Beaulac, C. & Rosenthal, J. S. Predicting university students’ academic success and major using random forests. Res. High Educ. 60 , 1048–1064 (2019).

Berens, J., Schneider, K., Görtz, S., Oster, S. & Burghoff, J. Early detection of students at risk–predicting student dropouts using administrative student data and machine learning methods. Available at SSRN 3275433 (2018).

Dawson, S., Jovanovic, J., Gašević, D. & Pardo, A. From prediction to impact: Evaluation of a learning analytics retention program. in Proceedings of the seventh international learning analytics & knowledge conference 474–478 (2017).

Dekker, G. W., Pechenizkiy, M. & Vleeshouwers, J. M. Predicting students drop Out: A case study. Int. Work. Group Educ. Data Min. (2009).

del Bonifro, F., Gabbrielli, M., Lisanti, G. & Zingaro, S. P. Student dropout prediction. in International Conference on Artificial Intelligence in Education 129–140 (Springer, 2020).

Hutt, S., Gardner, M., Duckworth, A. L. & D’Mello, S. K. Evaluating fairness and generalizability in models predicting on-time graduation from college applications. Int. Educ. Data Min. Soc. (2019).

Jayaprakash, S. M., Moody, E. W., Lauría, E. J. M., Regan, J. R. & Baron, J. D. Early alert of academically at-risk students: An open source analytics initiative. J. Learn. Anal. 1 , 6–47 (2014).

Balakrishnan, G. & Coetzee, D. Predicting student retention in massive open online courses using hidden markov models. Elect. Eng. Comput. Sci. Univ. Calif. Berkeley 53 , 57–58 (2013).

Hastie, T., Tibshirani, R. & Friedman, J. The elements of statistical learning (Springer series in statistics, New York, NY, USA, 2001).


Chawla, N. V., Bowyer, K. W., Hall, L. O. & Kegelmeyer, W. P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 16 , 321–357 (2002).


Zou, H. & Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Seri. B Stat. Methodol. 67 , 301–320 (2005).


Friedman, J., Hastie, T. & Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 33 , 1 (2010).

Breiman, L. Random forests. Mach. Learn. 45 , 5–32 (2001).

Liaw, A. & Wiener, M. Classification and regression by randomForest. R News 2 , 18–22 (2002).

Pargent, F., Schoedel, R. & Stachl, C. An introduction to machine learning for psychologists in R. Psyarxiv (2022).

Hoerl, A. E. & Kennard, R. W. Ridge Regression. in Encyclopedia of Statistical Sciences vol. 8 129–136 (John Wiley & Sons, Inc., 2004).

Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58 , 267–288 (1996).

MathSciNet   MATH   Google Scholar  

Hastie, T. & Qian, J. Glmnet vignette. vol. 9 1–42 https://hastie.su.domains/Papers/Glmnet_Vignette.pdf (2016).

Orrù, G., Monaro, M., Conversano, C., Gemignani, A. & Sartori, G. Machine learning in psychometrics and psychological research. Front. Psychol. 10 , 2970 (2020).

Pargent, F. & Albert-von der Gönna, J. Predictive modeling with psychological panel data. Z Psychol (2019).

Pargent, F., Schoedel, R. & Stachl, C. Best practices in supervised machine learning: A tutorial for psychologists. Doi: https://doi.org/10.31234/osf.io/89snd (2023).

Friedman, J., Hastie, T. & Tibshirani, R. The elements of statistical learning Vol. 1 (Springer series in statistics, 2001).

MATH   Google Scholar  

Rijsbergen, V. & Joost, C. K. Information Retrieval Butterworths London. Google Scholar Google Scholar Digital Library Digital Library (1979).

Molnar, C. Interpretable machine learning . (Lulu. com, 2020).

Aguiar, E., Ambrose, G. A., Chawla, N. v, Goodrich, V. & Brockman, J. Engagement vs Performance: Using Electronic Portfolios to Predict First Semester Engineering Student Persistence . Journal of Learning Analytics vol. 1 (2014).

Chai, K. E. K. & Gibson, D. Predicting the risk of attrition for undergraduate students with time based modelling. Int. Assoc. Dev. Inf. Soc. (2015).

Saenz, T., Marcoulides, G. A., Junn, E. & Young, R. The relationship between college experience and academic performance among minority students. Int. J. Educ. Manag (1999).

Pidgeon, A. M., Coast, G., Coast, G. & Coast, G. Psychosocial moderators of perceived stress, anxiety and depression in university students: An international study. Open J. Soc. Sci. 2 , 23 (2014).

Wilcox, P., Winn, S. & Fyvie-Gauld, M. ‘It was nothing to do with the university, it was just the people’: The role of social support in the first-year experience of higher education. Stud. High. Educ. 30 , 707–722 (2005).

Guiffrida, D. A. Toward a cultural advancement of Tinto’s theory. Rev. High Ed. 29 , 451–472 (2006).

Triandis, H. C., McCusker, C. & Hui, C. H. Multimethod probes of individualism and collectivism. J. Pers. Soc. Psychol. 59 , 1006 (1990).

Watson, D. & Clark, L. A. Extraversion and its positive emotional core. in Handbook of personality psychology 767–793 (Elsevier, 1997).

Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R. & Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 28 , 2222–2232 (2017).

Article   MathSciNet   PubMed   Google Scholar  

Arnold, K. E. & Pistilli, M. D. Course signals at Purdue: Using learning analytics to increase student success. in Proceedings of the 2nd international conference on learning analytics and knowledge 267–270 (2012).

Braxton, J. M. & McClendon, S. A. The fostering of social integration and retention through institutional practice. J. Coll. Stud. Ret. 3 , 57–71 (2001).

Sneyers, E. & de Witte, K. Interventions in higher education and their effect on student success: A meta-analysis. Educ. Rev. (Birm) 70 , 208–228 (2018).

Jamelske, E. Measuring the impact of a university first-year experience program on student GPA and retention. High Educ. (Dordr) 57 , 373–391 (2009).

Purdie, J. R. & Rosser, V. J. Examining the academic performance and retention of first-year students in living-learning communities and first-year experience courses. Coll. Stud. Aff. J. 29 , 95 (2011).

Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2 , 56–67 (2020).

Ramon, Y., Farrokhnia, R. A., Matz, S. C. & Martens, D. Explainable AI for psychological profiling from behavioral data: An application to big five personality predictions from financial transaction records. Information 12 , 518 (2021).

Download references

Author information

Alice Dinu is an Independent Researcher.

Authors and Affiliations

Columbia University, New York, USA

Sandra C. Matz & Heinrich Peters

Ludwig Maximilian University of Munich, Munich, Germany

Christina S. Bukow

Ready Education, Montreal, Canada

Christine Deacons

University of St. Gallen, St. Gallen, Switzerland

Clemens Stachl

Montreal, Canada

Alice Dinu

Contributions

S.C.M., C.B., A.D., H.P., and C.S. designed the research. C.D. and A.D. provided the data. S.C.M., C.B., and H.P. analyzed the data. S.C.M. and C.B. wrote the manuscript. All authors reviewed the manuscript. Earlier versions of this research were part of C.B.’s master’s thesis, which was supervised by S.C.M. and C.S.

Corresponding author

Correspondence to Sandra C. Matz .

Ethics declarations

Competing interests

C.D. is a former employee of Ready Education. None of the other authors has conflicts of interest related to this submission.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this Article was revised: Alice Dinu was omitted from the author list in the original version of this Article. The Author Contributions section now reads: “S.C.M., C.B., A.D., H.P., and C.S. designed the research. C.D. and A.D. provided the data. S.C.M., C.B., and H.P. analyzed the data. S.C.M. and C.B. wrote the manuscript. All authors reviewed the manuscript. Earlier versions of this research were part of C.B.’s master’s thesis, which was supervised by S.C.M. and C.S.” Additionally, the Article contained an error in the Data Availability section, and the legend of Figure 2 was incomplete.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Matz, S.C., Bukow, C.S., Peters, H. et al. Using machine learning to predict student retention from socio-demographic characteristics and app-based engagement metrics. Sci Rep 13, 5705 (2023). https://doi.org/10.1038/s41598-023-32484-w


Received: 09 August 2022

Accepted: 28 March 2023

Published: 07 April 2023

DOI: https://doi.org/10.1038/s41598-023-32484-w



Using AI Tools in Your Research

  • About ChatGPT & Generative AI LLMs
  • Academic Integrity & AI
  • Machine Learning for Research

Librarians & Faculty: How can I use Artificial Intelligence/Machine Learning Tools?

Further resources for instruction librarians and faculty on using ChatGPT and Generative AI tools when teaching students to research.

  • How to use AI to do practical stuff: A new guide / Ethan Mollick. An overview of ways to get AI to do practical things. It describes plainly how large language models like ChatGPT are not search engines, are not assistants like Alexa, and cannot provide informed opinions. But they do have uses.
  • Dept of Ed: Artificial Intelligence and the Future of Teaching and Learning 2023, May: Includes foundation principles on ethical and equitable AI policies, perspectives on AI, how AI enables and expands learning, how to use AI in teaching, formative assessment, and research and development.
  • OpenAI: Teaching with AI From OpenAI: We’re sharing a few stories of how educators are using ChatGPT to accelerate student learning and some prompts to help educators get started with the tool. New FAQ contains additional resources from leading education organizations on how to teach with and about AI, examples of new AI-powered education tools, and answers to frequently asked questions from educators about things like how ChatGPT works, its limitations, the efficacy of AI detectors, and bias.
  • Understanding AI Issues / Sentient Syllabus Project Includes clear, non-technical description of how large language models work: architecture, training, perplexity, and uptake; and implications for academia, economy, and society. 10 pages.
  • "What is AI?" Learning Tools from DAILy Workshop - MIT The Daily-AI workshop, designed by MIT educators and experienced facilitators, features hands-on and computer-based activities on AI concepts, ethical issues in AI, creative expression using AI, and how AI relates to your future.
  • "Don't Ban ChatGPT in Schools. Teach With It." New York Times, January 12, 2023 Editorial by Kevin Roose
  • "ChatGPT: Implications for academic librarians," ACRL TechConnect Christopher Cox and Elias Tzoc, 2023
  • Teaching & Learning with ChatGPT: Opportunity or Quagmire? (Part I), MIT Teaching & Learning Lab Considers broader questions and strategies for engaging with AI technologies across classrooms and learning spaces. Part 2 focuses on student learning, and Part 3 focuses on academic integrity and AI.
  • "Using ChatGPT to Engage in Library Instruction," LILi Lifelong Information Literacy presentation, by Ray Pun, Adler School of Education, February 22, 2023 (56 minutes)
  • Creating an Academic Library Workshop Series on AI Literacy: How can academic librarians foster critical AI literacy in their communities?, Choice LibTech blog post, 2/22/2023 Features a chapter from Sandy Hervieux and Amanda Wheatley’s edited collection, The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries (ACRL, 2022). Written by the editors themselves and licensed under CC BY-NC-SA 2.0, this chapter, “Separating Artificial Intelligence from Science Fiction: Creating an Academic Library Workshop Series on AI Literacy,” outlines how the McGill University Library created a workshop series on AI literacy, ethics and bias, and its use in research.
  • IFLA: Generative AI for library and information professionals. A draft of a useful non-technical guide for library and information professionals.
  • Last Updated: Apr 4, 2024 2:53 PM
  • URL: https://libguides.northwestern.edu/ai-tools-research

Research on Students’ Learning Experience After Embedding Data Thinking into Curriculum

Take the Course “Staff Career Development” as an Example

  • Conference paper
  • First Online: 16 June 2022

  • Juan Tang

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13322)

Included in the following conference series:

  • International Conference on Human-Computer Interaction


The rapid development of big data presents both opportunities and challenges for higher education. How to cultivate students' data-analysis ability, meet the needs of the varied scenarios of different enterprises, and train well-rounded professionals is a main direction of teaching reform. The author embeds data thinking into the course module "Employee Career Development", guides students through data collection, analysis, organization, and visualization, and uses the Technology Acceptance Model (TAM) to assess students' learning outcomes and experience. So far, students have actively supported the teaching reform, and their sense of interaction and of achievement has grown during the learning process. This teaching-reform practice has positive significance and reference value for the teaching of other courses and for the optimization of the curriculum system.
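The abstract does not spell out how the TAM survey was scored. As a rough illustration only: TAM studies typically average multi-item Likert responses into construct scores such as perceived usefulness and perceived ease of use. A minimal Python sketch with invented items and responses (none of these numbers come from the paper):

```python
# Hypothetical TAM-style scoring sketch; item counts, scale, and data
# are invented for illustration, not taken from the paper.
import statistics

# 5-point Likert responses (1 = strongly disagree, 5 = strongly agree),
# one inner list of item scores per student.
perceived_usefulness = [
    [4, 5, 4],  # student 1
    [3, 4, 4],  # student 2
    [5, 5, 4],  # student 3
]
perceived_ease_of_use = [
    [4, 4, 5],
    [3, 3, 4],
    [4, 5, 5],
]

def construct_scores(responses):
    """Average each student's item responses into one construct score."""
    return [statistics.mean(items) for items in responses]

pu = construct_scores(perceived_usefulness)
peou = construct_scores(perceived_ease_of_use)
print("Perceived usefulness per student:", pu)
print("Perceived ease of use per student:", peou)
print("Cohort means:", statistics.mean(pu), statistics.mean(peou))
```

In a real TAM analysis the constructs would also be checked for internal consistency (for example with Cronbach's alpha) before comparing groups.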



Author information

Authors and Affiliations

Guangzhou City University of Technology, Guangzhou, 510800, Guangdong, China


Corresponding author

Correspondence to Juan Tang .

Editor information

Editors and Affiliations

Southern University of Science and Technology – SUSTech, Shenzhen, China

Marcelo M. Soares

World Usability Day and Bubble Mountain Consulting, Newton Center, MA, USA

Elizabeth Rosenzweig

Aaron Marcus and Associates, Berkeley, CA, USA

Aaron Marcus

Rights and permissions


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tang, J. (2022). Research on Students’ Learning Experience After Embedding Data Thinking into Curriculum. In: Soares, M.M., Rosenzweig, E., Marcus, A. (eds) Design, User Experience, and Usability: Design for Emotion, Well-being and Health, Learning, and Culture. HCII 2022. Lecture Notes in Computer Science, vol 13322. Springer, Cham. https://doi.org/10.1007/978-3-031-05900-1_21


DOI: https://doi.org/10.1007/978-3-031-05900-1_21

Published: 16 June 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-05899-8

Online ISBN: 978-3-031-05900-1

eBook Packages: Computer Science, Computer Science (R0)


AI improves accuracy of skin cancer diagnoses in Stanford Medicine-led study

Artificial intelligence algorithms powered by deep learning improve skin cancer diagnostic accuracy for doctors, nurse practitioners and medical students in a study led by the Stanford Center for Digital Health.

April 11, 2024 - By Krista Conger


Artificial intelligence helped clinicians diagnose skin cancer more accurately, a Stanford Medicine-led study found. Chanelle Malambo/peopleimages.com   -  stock.adobe.com

A new study led by researchers at Stanford Medicine finds that artificial intelligence algorithms based on deep learning can help health care practitioners diagnose skin cancers more accurately. Even dermatologists benefit from AI guidance, although their improvement is smaller than that seen for non-dermatologists.

“This is a clear demonstration of how AI can be used in collaboration with a physician to improve patient care,” said professor of dermatology and of epidemiology Eleni Linos, MD. Linos leads the Stanford Center for Digital Health, which was launched to tackle some of the most pressing research questions at the intersection of technology and health by promoting collaboration between engineering, computer science, medicine and the humanities.

Linos, associate dean of research and the Ben Davenport and Lucy Zhang Professor in Medicine, is the senior author of the study, which was published on April 9 in npj Digital Medicine. Postdoctoral scholar Jiyeong Kim, PhD, and visiting researcher Isabelle Krakowski, MD, are the lead authors of the research.

“Previous studies have focused on how AI performs when compared with physicians,” Kim said. “Our study compared physicians working without AI assistance with physicians using AI when diagnosing skin cancers.”

AI algorithms are increasingly used in clinical settings, including dermatology. They are created by feeding a computer hundreds of thousands or even millions of images of skin conditions labeled with information such as diagnosis and patient outcome. Through a process called deep learning, the computer eventually learns to recognize telltale patterns in the images that correlate with specific skin diseases including cancers. Once trained, an algorithm written by the computer can be used to suggest possible diagnoses based on an image of a patient’s skin that it has not been exposed to.
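As a rough sketch of the train-then-infer workflow described above (and only a sketch: the tiny network and random stand-in "images" below are illustrative assumptions, not the study's actual models or data), a deep-learning classifier is fit to labeled images and then asked to score an image it has never seen:

```python
# Minimal, hypothetical image-classification sketch in PyTorch.
# Real diagnostic systems train far larger networks on hundreds of
# thousands of labeled clinical photographs.
import torch
import torch.nn as nn

# Stand-in dataset: 64 random RGB "images" (3x32x32) with binary labels
# (0 = benign, 1 = malignant).
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # two output classes: benign / malignant
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: the network gradually adjusts its weights to fit the labels.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Inference: once trained, the model suggests a diagnosis for a new image.
new_image = torch.randn(1, 3, 32, 32)
probs = model(new_image).softmax(dim=1)
print("P(benign), P(malignant):", probs.detach().squeeze().tolist())
```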


Eleni Linos

These diagnostic algorithms aren’t used alone, however. They are overseen by clinicians who also assess the patient, come to their own conclusions about a patient’s diagnosis and choose whether to accept the algorithm’s suggestion.

An accuracy boost

Kim and Linos’ team reviewed 12 studies detailing more than 67,000 evaluations of potential skin cancers by a variety of practitioners with and without AI assistance. They found that, overall, health care practitioners working without AI assistance correctly diagnosed about 75% of people who had skin cancer, a statistical measure known as sensitivity. Conversely, they correctly identified about 81.5% of people who had cancer-like skin conditions but did not have cancer, a companion measure known as specificity.

Health care practitioners who used AI to guide their diagnoses did better: their diagnoses were about 81.1% sensitive and 86.1% specific. The improvement may seem small, but the differences are critical for people who are told they don’t have cancer when they do, or who are told they have cancer when they don’t.
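In confusion-matrix terms, sensitivity is the true-positive rate and specificity is the true-negative rate. A short Python sketch with invented counts chosen only to mirror the unassisted rates reported above (the study's raw counts are not given here):

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts are hypothetical, chosen to echo roughly 75% / 81.5%.
def sensitivity(tp: int, fn: int) -> float:
    """Share of people WITH cancer who are correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Share of people WITHOUT cancer who are correctly cleared."""
    return tn / (tn + fp)

tp, fn = 750, 250  # with cancer: true positives, false negatives
tn, fp = 815, 185  # without cancer: true negatives, false positives

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 75.0%
print(f"specificity = {specificity(tn, fp):.1%}")  # 81.5%
```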

When the researchers split the health care practitioners by specialty or level of training, they saw that medical students, nurse practitioners and primary care doctors benefited the most from AI guidance — improving on average about 13 points in sensitivity and 11 points in specificity. Dermatologists and dermatology residents performed better overall, but the sensitivity and specificity of their diagnoses also improved with AI.

“I was surprised to see everyone’s accuracy improve with AI assistance, regardless of their level of training,” Linos said. “This makes me very optimistic about the use of AI in clinical care. Soon our patients will not just be accepting, but expecting, that we use AI assistance to provide them with the best possible care.”


Jiyeong Kim

Researchers at the Stanford Center for Digital Health, including Kim, are interested in learning more about the promise of and barriers to integrating AI-based tools into health care. In particular, they are planning to investigate how the perceptions and attitudes of physicians and patients to AI will influence its implementation.

“We want to better understand how humans interact with and use AI to make clinical decisions,” Kim said. 

Previous studies have indicated that a clinician’s degree of confidence in their own clinical decision, the degree of confidence of the AI, and whether the clinician and the AI agree on the diagnosis all influence whether the clinician incorporates the algorithm’s advice when making clinical decisions for a patient.

Medical specialties like dermatology and radiology, which rely heavily on images — visual inspection, pictures, X-rays, MRIs and CT scans, among others — for diagnoses are low-hanging fruit for computers that can pick out levels of detail beyond what a human eye (or brain) can reasonably process. But even other more symptom-based specialties, or prediction modeling, are likely to benefit from AI intervention, Linos and Kim feel. And it’s not just patients who stand to benefit.

“If this technology can simultaneously improve a doctor’s diagnostic accuracy and save them time, it’s really a win-win. In addition to helping patients, it could help reduce physician burnout and improve the human interpersonal relationships between doctors and their patients,” Linos said. “I have no doubt that AI assistance will eventually be used in all medical specialties. The key question is how we make sure it is used in a way that helps all patients regardless of their background and simultaneously supports physician well-being.”

Researchers from the Karolinska Institute, the Karolinska University Hospital and the University of Nicosia contributed to the research.

The study was funded by the National Institutes of Health (grants K24AR075060 and R01AR082109), Radiumhemmet Research, the Swedish Cancer Society and the Swedish Research Council.


About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu .




COMMENTS

  1. Lessons in learning

    "When I first switched to teaching using active learning, some students resisted that change. This research confirms that faculty should persist and encourage active learning. Active engagement in every classroom, led by our incredible science faculty, should be the hallmark of residential undergraduate education at Harvard."

  2. PLAT 20 (1) 2021: Enhancing Student Learning in Research and

    Future research should examine how helping student teachers improve their self-reflections can translate into improved lessons and student learning. Future research should also consider interactions with other moderators more systematically such as the timing of feedback and learners' prior expertise (cf. Nückles et al., 2020; Roelle et al ...

  3. Full article: Is research-based learning effective? Evidence from a pre

    The effectiveness of research-based learning. Conducting one's own research project involves various cognitive, behavioural, and affective experiences (Lopatto, 2009, 29), which in turn lead to a wide range of benefits associated with RBL. RBL is associated with long-term societal benefits because it can foster scientific careers: Students participating in RBL reported a greater ...

  4. Empowering Student Learning in Higher Education: Pathways to

    The last two articles attempt to extend the breadth and depth of research on student learning in higher education by suggesting somewhat neglected methodological issues or revealing and correcting the distorted understanding about student learning. Student assessment is an integrated part of university teaching. However, compared with the ...

  5. Fostering student engagement with motivating teaching: an observation

    Introduction. Research shows that student engagement constitutes a crucial precondition for optimal and deep-level learning (Barkoukis et al. 2014; Skinner 2016; Skinner, Zimmer-Gembeck, and Connell 1998). In addition, student engagement is associated with students' motivation to learn (Aelterman et al. 2012), and their persistence to complete school ...

  6. Student-centered learning: context needed

    Six scholars provide their perspectives in response to Lee and Hannafin's (Educational Technology Research and Development 64: 707-734, 2016) article describing the Own It, Learn It, Share It design framework. The framework combines constructivist, constructionist, and self-determination theories to address student-centered learning.

  7. Full article: Fostering student engagement through a real-world

    Literature. Student engagement is the level of effort, interest and attention that students invest in the learning process (Klem & Connell, 2004; Marks, 2000). However, meaningful engagement is deeper than simple participation and involvement (Speight et al., 2018). In general, student engagement has three dimensions: behavioural, cognitive, and emotional (Klem ...

  8. Enhancing research and scholarly experiences based on students

    The impact of research experiences on students' learning could be categorised broadly as negative or no impact, potential/perceived impact, or positive impact (Fig 1). Approximately one third of responses (18 from 53 comments) stated that research had no impact on their learning, mostly because they had no research experience or occasionally ...

  9. Learning to learn: Research and development in student learning

    This paper is concerned with systematic attempts to help students to learn more effectively. Current approaches to learning-to-learn, chiefly in Britain and involving groups rather than individuals, are reviewed against the background of recent research findings on student learning. Four issues are identified and discussed: contrasting conceptions of learning-to-learn; responses to the ...

  10. Mapping research in student engagement and educational technology in

    What is student engagement. Student engagement has been linked to improved achievement, persistence and retention (Finn, 2006; Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008), with disengagement having a profound effect on student learning outcomes and cognitive development (Ma, Han, Yang, & Cheng, 2015), and being a predictor of student dropout in both secondary school and higher education (Finn ...

  11. (PDF) Student learning and academic understanding: A research

    The book as a whole follows the progression of one strand of student learning research from the late 1960s until the present day, paralleling the more general development in the field. Abstract ...

  12. PDF The Lasting Effects of Learning Communities

    learning community as first-year students. All students enrolled in a learning community at the study site in either the 2015-16 or 2016-17 academic years were invited, via email, to participate. Each participant held a class rank of junior or senior at the time of their interview. Of the 118 students eligible to participate in

  13. Undergraduate research experiences: Impacts and opportunities

    Undergraduate research experiences often engender enthusiasm in the students involved, but how useful are they in terms of enhancing student learning? Linn et al. review studies that focus on the effectiveness of undergraduate research programs. Undergraduate research experiences in a class were distinguished from those involving individualized ...

  14. Powerful Learning: Studies Show Deep Understanding Derives from

    Research on project learning found that student gains in factual learning are equivalent or superior to those of students in more traditional forms of classroom instruction. The goals of project learning, however, aim to take learning one step further by enabling students to transfer their learning to new kinds of situations, illustrated in ...

  15. New research shows effectiveness of student-centered learning in

    While, nationally, students of color and low-income students continue to achieve at far lower levels than their more advantaged peers, some schools are breaking that trend. New research from the Stanford Center for Opportunity Policy in Education (SCOPE) is documenting these successes at four such schools in Northern California —schools in which traditionally underserved

  16. Learning, Leading, and Letting Go of Control:

    PBL is a specific approach to learning closely connected to the ideas of student-directed teaching and learning (Barrows, 1986).PBL may occur more or less directed by teachers and as such related both to teaching activities in classes and to independent (project) work in teams. As a teaching activity, it is often seen as case-based, where the teacher has chosen a number of cases for the ...

  17. Six characteristics that promote student learning (opinion)

    Experiential approaches are more effective over all and promote such skills as problem identification, critical thinking, evaluating evidence and alternative ideas, and tolerance for ambiguity. Involve other people. Student learning and development may be a solitary activity, like reading, studying, viewing art or witnessing an event.

  18. PDF Learning: Theory and Research

    Learning: Theory and Research Learning theory and research have long been the province of education and psychology, but what is now known about how ... methods which require students to monitor their own learning. For instance, the use of ungraded tests and study Graduate Student Instructor Teaching & Resource Center, Graduate Division, UC ...

  19. Clarifying the Impact of Educational Research on Students' Learning

    As educational researchers, a natural question to ask is how our research can have not only a greater impact on student learning but also a broader impact. We start by sharing findings from a study that sparked our thinking—an article by Lindqvist and Vestman (2011) illustrating the effects of both cognitive and noncognitive abilities on ...

  20. How the support that students receive during online learning ...

    In recent years educational institutions are increasingly using online learning and because of this trend it is necessary to investigate its impact on student academic performance. Although this topic has been addressed in different educational fields before, there is an objective justification for our approach. Thus, the reasoning behind this particular research is the fact that recent ...

  21. Class Size: What Research Says and What it Means for State Policy

    Research on Class Size. There is a large body of research on the relationship between class size and student learning. A 1979 systematic review of the literature identified 80 studies. There are ...

  22. Enhancing Student Learning: Seven Principles for Good Practice

    The following principles are anchored in extensive research about teaching, learning, and the college experience. 1. Good Practice Encourages Student-Instructor Contact. Frequent student-instructor contact in and out of classes is an important factor in student motivation and involvement. Instructor concern helps students get through ...

  23. PDF ORIGINAL RESEARCH ARTICLE Enhancing the online learning experience of

    instructors should consider the influence of culture on student learning preferences when designing and delivering online classes (Lim 2004). While social constructivism emphasises student-centred learning and peer interactions (Amineh & Asl, 2015; Anderson, 2008), our findings highlight the need for instructors to assume a more

  24. Using machine learning to predict student retention from socio ...

    Student attrition poses a major challenge to academic institutions, funding bodies and students. With the rise of Big Data and predictive analytics, a growing body of work in higher education ...

  25. Full article: Student perspectives on learning research methods in the

    Student insights greatly enrich studies of undergraduate research methods pedagogy (Rand 2016; Hosein and Rao 2017; Turner et al. 2018) and while there is a literature on student learning at advanced levels this is limited in terms of showing 'what student learning looks like' (Earley 2014, 248).

  26. Research Guides: Using AI Tools in Your Research: Add'tl Reading for


  27. Research on Students' Learning Experience After Embedding ...

    Based on the above analysis of learning situation, teachers adhere to the teaching philosophy of achievement oriented, project driven, heuristic teaching and "student-centered development", adopt flipped classroom in the teaching process, guide students to carry out cross group autonomous and cooperative learning, embed data thinking into ...

  28. Emerick Elementary students discuss research at Deeper Learning

    The second annual Deeper Learning Showcase at Emerick Elementary School on April 12 gave students a chance to teach fellow students while honing their communicative and critical thinking skills.

  29. Using Scenario-Based Assessment in the Development of Students' Digital

    Peter Davidson teaches Business Communication and Technical Writing at Zayed University in Dubai, having previously taught in New Zealand, Japan, the UK and Turkey. He has co-edited a number of books on assessment including: Language Assessment in the Middle East and North Africa: Theory, Practice and Future Trends (2017, TESOL Arabia) and The Cambridge Guide to Second Language Assessment.

  30. AI improves accuracy of skin cancer diagnoses in Stanford Medicine-led
