The 10 Most Significant Education Studies of 2021

From reframing our notion of “good” schools to mining the magic of expert teachers, here’s a curated list of must-read research from 2021.

It was a year of unprecedented hardship for teachers and school leaders. We pored over hundreds of studies to see if we could follow the trail of exactly what happened: The research revealed a complex portrait of a grueling year during which persistent issues of burnout and mental and physical health impacted millions of educators. Meanwhile, many of the old debates continued: Does paper beat digital? Is project-based learning as effective as direct instruction? How do you define what a “good” school is?

Other studies grabbed our attention, and in a few cases, made headlines. Researchers from the University of Chicago and Columbia University turned artificial intelligence loose on some 1,130 award-winning children’s books in search of invisible patterns of bias. (Spoiler alert: They found some.) Another study revealed why many parents are reluctant to support social and emotional learning in schools—and provided hints about how educators can flip the script.

1. What Parents Fear About SEL (and How to Change Their Minds)

When researchers at the Fordham Institute asked parents to rank phrases associated with social and emotional learning, nothing seemed to add up. The term “social-emotional learning” was very unpopular; parents wanted to steer their kids clear of it. But when the researchers added a simple clause, forming a new phrase—“social-emotional & academic learning”—the term shot all the way up to No. 2 in the rankings.

What gives?

Parents were picking up subtle cues in the list of SEL-related terms that irked or worried them, the researchers suggest. Phrases like “soft skills” and “growth mindset” felt “nebulous” and devoid of academic content. For some, the language felt suspiciously like “code for liberal indoctrination.”

But the study suggests that parents might need only the simplest of reassurances to break through the political noise. Removing the jargon, focusing on productive phrases like “life skills,” and relentlessly connecting SEL to academic progress puts parents at ease—and seems to salvage support for social and emotional learning in the process.

2. The Secret Management Techniques of Expert Teachers

In the hands of experienced teachers, classroom management can seem almost invisible: Subtle techniques are quietly at work behind the scenes, with students falling into orderly routines and engaging in rigorous academic tasks almost as if by magic. 

That’s no accident, according to new research. While outbursts are inevitable in school settings, expert teachers seed their classrooms with proactive, relationship-building strategies that often prevent misbehavior before it erupts. They also approach discipline more holistically than their less-experienced counterparts, consistently reframing misbehavior in the broader context of how lessons can be more engaging, or how clearly they communicate expectations.

Focusing on the underlying dynamics of classroom behavior—and not on surface-level disruptions—means that expert teachers often look the other way at all the right times, too. Rather than rise to the bait of a minor breach in etiquette, a common mistake of new teachers, they tend to play the long game, asking questions about the origins of misbehavior, deftly navigating the terrain between discipline and student autonomy, and opting to confront misconduct privately when possible.

3. The Surprising Power of Pretesting

Asking students to take a practice test before they’ve even encountered the material may seem like a waste of time—after all, they’d just be guessing.

But new research concludes that the approach, called pretesting, is actually more effective than other typical study strategies. Surprisingly, pretesting even beat out taking practice tests after learning the material, a proven strategy endorsed by cognitive scientists and educators alike. In the study, students who took a practice test before learning the material outperformed their peers who studied more traditionally by 49 percent on a follow-up test, while outperforming students who took practice tests after studying the material by 27 percent.

The researchers hypothesize that the “generation of errors” was a key to the strategy’s success, spurring student curiosity and priming them to “search for the correct answers” when they finally explored the new material—and adding grist to a 2018 study that found that making educated guesses helped students connect background knowledge to new material.

Learning is more durable when students do the hard work of correcting misconceptions, the research suggests, reminding us yet again that being wrong is an important milestone on the road to being right.

4. Confronting an Old Myth About Immigrant Students

Immigrant students are sometimes portrayed as a costly burden to the education system, but new research is systematically dismantling that myth.

In a 2021 study, researchers analyzed over 1.3 million academic and birth records for students in Florida communities, and concluded that the presence of immigrant students actually has “a positive effect on the academic achievement of U.S.-born students,” raising test scores as the size of the immigrant school population increases. The benefits were especially powerful for low-income students.

While immigrants initially “face challenges in assimilation that may require additional school resources,” the researchers concluded, hard work and resilience may allow them to excel and thus “positively affect exposed U.S.-born students’ attitudes and behavior.” But according to teacher Larry Ferlazzo, the improvements might stem from the fact that having English language learners in classes improves pedagogy, pushing teachers to consider “issues like prior knowledge, scaffolding, and maximizing accessibility.”

5. A Fuller Picture of What a ‘Good’ School Is

It’s time to rethink our definition of what a “good school” is, researchers assert in a study published in late 2020. That’s because typical measures of school quality like test scores often provide an incomplete and misleading picture, the researchers found.

The study looked at over 150,000 ninth-grade students who attended Chicago public schools and concluded that schools emphasizing the social and emotional dimensions of learning—relationship-building, a sense of belonging, and resilience, for example—improve high school graduation and college matriculation rates for both high- and low-income students, outperforming schools that focus primarily on raising test scores.

“Schools that promote socio-emotional development actually have a really big positive impact on kids,” said lead researcher C. Kirabo Jackson in an interview with Edutopia. “And these impacts are particularly large for vulnerable student populations who don’t tend to do very well in the education system.”

The findings reinforce the importance of a holistic approach to measuring student progress, and are a reminder that schools—and teachers—can influence students in ways that are difficult to measure, and may only materialize well into the future.

6. Teaching Is Learning

One of the best ways to learn a concept is to teach it to someone else. But do you actually have to step into the shoes of a teacher, or does the mere expectation of teaching do the trick?

In a 2021 study, researchers split students into two groups and gave them each a science passage about the Doppler effect—a phenomenon associated with sound and light waves that explains the gradual change in tone and pitch as a car races off into the distance, for example. One group studied the text as preparation for a test; the other was told that they’d be teaching the material to another student.

The researchers never carried out the second half of the activity—students read the passages but never taught the lesson. All of the participants were then tested on their factual recall of the Doppler effect, and their ability to draw deeper conclusions from the reading.

The upshot? Students who prepared to teach outperformed their counterparts in both duration and depth of learning, scoring 9 percent higher on factual recall a week after the lessons concluded, and 24 percent higher on their ability to make inferences. The research suggests that asking students to prepare to teach something—or encouraging them to think “could I teach this to someone else?”—can significantly alter their learning trajectories.

7. A Disturbing Strain of Bias in Kids’ Books

Some of the most popular and well-regarded children’s books—Caldecott and Newbery honorees among them—persistently depict Black, Asian, and Hispanic characters with lighter skin, according to new research.

Using artificial intelligence, researchers combed through 1,130 children’s books written in the last century, comparing two sets of diverse children’s books—one a collection of popular books that garnered major literary awards, the other favored by identity-based awards. The software analyzed data on skin tone, race, age, and gender.

Among the findings: While more characters with darker skin color begin to appear over time, the most popular books—those most frequently checked out of libraries and lining classroom bookshelves—continue to depict people of color in lighter skin tones. More insidiously, when adult characters are “moral or upstanding,” their skin color tends to appear lighter, the study’s lead author, Anjali Adukia, told The 74, with some books converting “Martin Luther King Jr.’s chocolate complexion to a light brown or beige.” Female characters, meanwhile, are often seen but not heard.

Cultural representations are a reflection of our values, the researchers conclude: “Inequality in representation, therefore, constitutes an explicit statement of inequality of value.”

8. The Never-Ending ‘Paper Versus Digital’ War

The argument goes like this: Digital screens turn reading into a cold and impersonal task; they’re good for information foraging, and not much more. “Real” books, meanwhile, have a heft and “tactility” that make them intimate, enchanting—and irreplaceable.

But researchers have often found weak or equivocal evidence for the superiority of reading on paper. While a recent study concluded that paper books yielded better comprehension than e-books when many of the digital tools had been removed, the effect sizes were small. A 2021 meta-analysis further muddies the water: When digital and paper books are “mostly similar,” kids comprehend the print version more readily—but when enhancements like motion and sound “target the story content,” e-books generally have the edge.

Nostalgia is a force that every new technology must eventually confront. There’s plenty of evidence that writing with pen and paper encodes learning more deeply than typing. But new digital book formats come preloaded with powerful tools that allow readers to annotate, look up words, answer embedded questions, and share their thinking with other readers.

We may not be ready to admit it, but these are precisely the kinds of activities that drive deeper engagement, enhance comprehension, and leave us with a lasting memory of what we’ve read. The future of e-reading, despite the naysayers, remains promising.

9. New Research Makes a Powerful Case for PBL

Many classrooms today still look like they did 100 years ago, when students were preparing for factory jobs. But the world’s moved on: Modern careers demand a more sophisticated set of skills—collaboration, advanced problem-solving, and creativity, for example—and those can be difficult to teach in classrooms that rarely give students the time and space to develop those competencies.

Project-based learning (PBL) would seem like an ideal solution. But critics say PBL places too much responsibility on novice learners, ignoring the evidence about the effectiveness of direct instruction and ultimately undermining subject fluency. Advocates counter that student-centered learning and direct instruction can and should coexist in classrooms.

Now two new large-scale studies—encompassing over 6,000 students in 114 diverse schools across the nation—provide evidence that a well-structured, project-based approach boosts learning for a wide range of students.

In the studies, which were funded by Lucas Education Research, a sister division of Edutopia, elementary and high school students engaged in challenging projects that had them designing water systems for local farms, or creating toys using simple household objects to learn about gravity, friction, and force. Subsequent testing revealed notable learning gains—well above those experienced by students in traditional classrooms—and those gains seemed to raise all boats, persisting across socioeconomic class, race, and reading levels.

10. Tracking a Tumultuous Year for Teachers

The Covid-19 pandemic cast a long shadow over the lives of educators in 2021, according to a year’s worth of research.

The average teacher’s workload suddenly “spiked last spring,” wrote the Center for Reinventing Public Education in its January 2021 report, and then—in defiance of the laws of motion—simply never let up. By the fall, a RAND study recorded an astonishing shift in work habits: 24 percent of teachers reported that they were working 56 hours or more per week, compared to 5 percent pre-pandemic.

The vaccine was the promised land, but when it arrived nothing seemed to change. In an April 2021 survey conducted four months after the first vaccine was administered in New York City, 92 percent of teachers said their jobs were more stressful than prior to the pandemic, up from 81 percent in an earlier survey.

It wasn’t just the length of the work days; a close look at the research reveals that the school system’s failure to adjust expectations was ruinous. It seemed to start with the obligations of hybrid teaching, which surfaced in Edutopia’s coverage of overseas school reopenings. In June 2020, well before many U.S. schools reopened, we reported that hybrid teaching was an emerging problem internationally, and warned that if the “model is to work well for any period of time,” schools must “recognize and seek to reduce the workload for teachers.” Almost eight months later, a 2021 RAND study identified hybrid teaching as a primary source of teacher stress in the U.S., easily outpacing factors like the health of a high-risk loved one.

New and ever-increasing demands for tech solutions put teachers on a knife’s edge. In several important 2021 studies, researchers concluded that teachers were being pushed to adopt new technology without the “resources and equipment necessary for its correct didactic use.” Consequently, they were spending more than 20 hours a week adapting lessons for online use, and experiencing an unprecedented erosion of the boundaries between their work and home lives, leading to an unsustainable “always on” mentality. When it seemed like nothing more could be piled on—when all of the lights were blinking red—the federal government restarted standardized testing.

Change will be hard; many of the pathologies that exist in the system now predate the pandemic. But creating strict school policies that separate work from rest, eliminating the adoption of new tech tools without proper supports, distributing surveys regularly to gauge teacher well-being, and above all listening to educators to identify and confront emerging problems might be a good place to start, if the research can be believed.


Teacher and Teaching Effects on Students’ Attitudes and Behaviors

David Blazar

Harvard Graduate School of Education

Matthew A. Kraft

Brown University

Abstract

Research has focused predominantly on how teachers affect students’ achievement on tests despite evidence that a broad range of attitudes and behaviors are equally important to their long-term success. We find that upper-elementary teachers have large effects on self-reported measures of students’ self-efficacy in math, and happiness and behavior in class. Students’ attitudes and behaviors are predicted by teaching practices most proximal to these measures, including teachers’ emotional support and classroom organization. However, teachers who are effective at improving test scores often are not equally effective at improving students’ attitudes and behaviors. These findings lend empirical evidence to well-established theory on the multidimensional nature of teaching and the need to identify strategies for improving the full range of teachers’ skills.

1. Introduction

Empirical research on the education production function traditionally has examined how teachers and their background characteristics contribute to students’ performance on standardized tests (Hanushek & Rivkin, 2010; Todd & Wolpin, 2003). However, a substantial body of evidence indicates that student learning is multidimensional, with many factors beyond core academic knowledge serving as important contributors to both short- and long-term success. 1 For example, psychologists find that emotion and personality influence the quality of one’s thinking (Baron, 1982) and how much a child learns in school (Duckworth, Quinn, & Tsukayama, 2012). Longitudinal studies document the strong predictive power of measures of childhood self-control, emotional stability, persistence, and motivation on health and labor market outcomes in adulthood (Borghans, Duckworth, Heckman, & Ter Weel, 2008; Chetty et al., 2011; Moffitt et al., 2011). In fact, these sorts of attitudes and behaviors are stronger predictors of some long-term outcomes than test scores (Chetty et al., 2011).

Consistent with these findings, decades’ worth of theory also have characterized teaching as multidimensional. High-quality teachers are thought and expected not only to raise test scores but also to provide emotionally supportive environments that contribute to students’ social and emotional development, manage classroom behaviors, deliver accurate content, and support critical thinking (Cohen, 2011; Lampert, 2001; Pianta & Hamre, 2009). In recent years, two research traditions have emerged to test this theory using empirical evidence. The first tradition has focused on observations of classrooms as a means of identifying unique domains of teaching practice (Blazar, Braslow, Charalambous, & Hill, 2015; Hamre et al., 2013). Several of these domains, including teachers’ interactions with students, classroom organization, and emphasis on critical thinking within specific content areas, aim to support students’ development in areas beyond their core academic skill. The second research tradition has focused on estimating teachers’ contribution to student outcomes, often referred to as “teacher effects” (Chetty, Friedman, & Rockoff, 2014; Hanushek & Rivkin, 2010). These studies have found that, as with test scores, teachers vary considerably in their ability to impact students’ social and emotional development and a variety of observed school behaviors (Backes & Hansen, 2015; Gershenson, 2016; Jackson, 2012; Jennings & DiPrete, 2010; Koedel, 2008; Kraft & Grace, 2016; Ladd & Sorensen, 2015; Ruzek et al., 2015). Further, weak to moderate correlations between teacher effects on different student outcomes suggest that test scores alone cannot identify teachers’ overall skill in the classroom.

Our study is among the first to integrate these two research traditions, which largely have developed in isolation. Working at the intersection of these traditions, we aim both to minimize threats to internal validity and to open up the “black box” of teacher effects by examining whether certain dimensions of teaching practice predict students’ attitudes and behaviors. We refer to these relationships between teaching practice and student outcomes as “teaching effects.” Specifically, we ask three research questions:

  • To what extent do teachers impact students’ attitudes and behaviors in class?
  • To what extent do specific teaching practices impact students’ attitudes and behaviors in class?
  • Are teachers who are effective at raising test-score outcomes equally effective at developing positive attitudes and behaviors in class?

To answer our research questions, we draw on a rich dataset from the National Center for Teacher Effectiveness of upper-elementary classrooms that collected teacher-student links, observations of teaching practice scored on two established instruments, students’ math performance on both high- and low-stakes tests, and a student survey that captured their attitudes and behaviors in class. We used this survey to construct our three primary outcomes: students’ self-reported self-efficacy in math, happiness in class, and behavior in class. All three measures are important outcomes of interest to researchers, policymakers, and parents (Borghans et al., 2008; Chetty et al., 2011; Farrington et al., 2012). They also align with theories linking teachers and teaching practice to outcomes beyond students’ core academic skills (Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Pianta & Hamre, 2009), allowing us to test these theories explicitly.

We find that upper-elementary teachers have substantive impacts on students’ self-reported attitudes and behaviors in addition to their math performance. We estimate that the variation in teacher effects on students’ self-efficacy in math and behavior in class is of similar magnitude to the variation in teacher effects on math test scores. The variation of teacher effects on students’ happiness in class is even larger. Further, these outcomes are predicted by teaching practices most proximal to these measures, thus aligning with theory and providing important face and construct validity to these measures. Specifically, teachers’ emotional support for students is related both to their self-efficacy in math and happiness in class. Teachers’ classroom organization predicts students’ reports of their own behavior in class. Errors in teachers’ presentation of mathematical content are negatively related to students’ self-efficacy in math and happiness in class, as well as students’ math performance. Finally, we find that teachers are not equally effective at improving all outcomes. Compared to a correlation of 0.64 between teacher effects on our two math achievement tests, the strongest correlation between teacher effects on students’ math achievement and effects on their attitudes or behaviors is 0.19.

Together, these findings add further evidence for the multidimensional nature of teaching and, thus, the need for researchers, policymakers, and practitioners to identify strategies for improving these skills. In our conclusion, we discuss several ways that policymakers and practitioners may start to do so, including through the design and implementation of teacher evaluation systems, professional development, recruitment, and strategic teacher assignments.

2. Review of Related Research

Theories of teaching and learning have long emphasized the important role teachers play in supporting students’ development in areas beyond their core academic skill. For example, in their conceptualization of high-quality teaching, Pianta and Hamre (2009) describe a set of emotional supports and organizational techniques that are equally important to learners as teachers’ instructional methods. They posit that, by providing “emotional support and a predictable, consistent, and safe environment” (p. 113), teachers can help students become more self-reliant, motivated to learn, and willing to take risks. Further, by modeling strong organizational and management structures, teachers can help build students’ own ability to self-regulate. Content-specific views of teaching also highlight the importance of teacher behaviors that develop students’ attitudes and behaviors in ways that may not directly impact test scores. In mathematics, researchers and professional organizations have advocated for teaching practices that emphasize critical thinking and problem solving around authentic tasks (Lampert, 2001; National Council of Teachers of Mathematics [NCTM], 1989, 2014). Others have pointed to teachers’ important role of developing students’ self-efficacy and decreasing their anxiety in math (Bandura et al., 1996; Usher & Pajares, 2008; Wigfield & Meece, 1988).

In recent years, development and use of observation instruments that capture the quality of teachers’ instruction have provided a unique opportunity to examine these theories empirically. One instrument in particular, the Classroom Assessment Scoring System (CLASS), is organized around “meaningful patterns of [teacher] behavior…tied to underlying developmental processes [in students]” (Pianta & Hamre, 2009, p. 112). Factor analyses of data collected by this instrument have identified several unique aspects of teachers’ instruction: teachers’ social and emotional interactions with students, their ability to organize and manage the classroom environment, and their instructional supports in the delivery of content (Hafen et al., 2015; Hamre et al., 2013). A number of studies from developers of the CLASS instrument and their colleagues have described relationships between these dimensions and closely related student attitudes and behaviors. For example, teachers’ interactions with students predicts students’ social competence, engagement, and risk-taking; teachers’ classroom organization predicts students’ engagement and behavior in class (Burchinal et al., 2008; Downer, Rimm-Kaufman, & Pianta, 2007; Hamre, Hatfield, Pianta, & Jamil, 2014; Hamre & Pianta, 2001; Luckner & Pianta, 2011; Mashburn et al., 2008; Pianta, La Paro, Payne, Cox, & Bradley, 2002). With only a few exceptions (see Downer et al., 2007; Hamre & Pianta, 2001; Luckner & Pianta, 2011), though, these studies have focused on pre-kindergarten settings.

Additional content-specific observation instruments highlight several other teaching competencies with links to students’ attitudes and behaviors. For example, in this study we draw on the Mathematical Quality of Instruction (MQI) to capture math-specific dimensions of teachers’ classroom practice. Factor analyses of data captured both by this instrument and the CLASS identified two teaching skills in addition to those described above: the cognitive demand of math activities that teachers provide to students and the precision with which they deliver this content (Blazar et al., 2015). Validity evidence for the MQI has focused on the relationship between these teaching practices and students’ math test scores (Blazar, 2015; Kane & Staiger, 2012), which makes sense given the theoretical link between teachers’ content knowledge, delivery of this content, and students’ own understanding (Hill et al., 2008). However, professional organizations and researchers also describe theoretical links between the sorts of teaching practices captured on the MQI and student outcomes beyond test scores (Bandura et al., 1996; Lampert, 2001; NCTM, 1989, 2014; Usher & Pajares, 2008; Wigfield & Meece, 1988) that, to our knowledge, have not been tested.

In a separate line of research, several recent studies have borrowed from the literature on teachers’ “value-added” to student test scores in order to document the magnitude of teacher effects on a range of other outcomes. These studies attempt to isolate the unique effect of teachers on non-tested outcomes from factors outside of teachers’ control (e.g., students’ prior achievement, race, gender, socioeconomic status) and to limit any bias due to non-random sorting. Jennings and DiPrete (2010) estimated the role that teachers play in developing kindergarten and first-grade students’ social and behavioral outcomes. They found within-school teacher effects on social and behavioral outcomes that were even larger (0.21 standard deviations [sd]) than effects on students’ academic achievement (between 0.12 sd and 0.15 sd, depending on grade level and subject area). In a study of 35 middle school math teachers, Ruzek et al. (2015) found small but meaningful teacher effects on students’ motivation between 0.03 sd and 0.08 sd among seventh graders. Kraft and Grace (2016) found teacher effects on students’ self-reported measures of grit, growth mindset, and effort in class ranging between 0.14 and 0.17 sd. Additional studies identified teacher effects on students’ observed school behaviors, including absences, suspensions, grades, grade progression, and graduation (Backes & Hansen, 2015; Gershenson, 2016; Jackson, 2012; Koedel, 2008; Ladd & Sorensen, 2015).

To date, evidence is mixed on the extent to which teachers who improve test scores also improve other outcomes. Four of the studies described above found weak relationships between teacher effects on students’ academic performance and effects on other outcome measures. Compared to a correlation of 0.42 between teacher effects on math versus reading achievement, Jennings and DiPrete (2010) found correlations of 0.15 between teacher effects on students’ social and behavioral outcomes and effects on either math or reading achievement. Kraft and Grace (2016) found correlations between teacher effects on achievement outcomes and multiple social-emotional competencies were sometimes non-existent and never greater than 0.23. Similarly, Gershenson (2016) and Jackson (2012) found weak or null relationships between teacher effects on students’ academic performance and effects on observed school behaviors. However, correlations from two other studies were larger. Ruzek et al. (2015) estimated a correlation of 0.50 between teacher effects on achievement versus effects on students’ motivation in math class. Mihaly, McCaffrey, Staiger, and Lockwood (2013) found a correlation of 0.57 between middle school teacher effects on students’ self-reported effort versus effects on math test scores.

Our analyses extend this body of research by estimating teacher effects on additional attitudes and behaviors captured by students in upper-elementary grades. Our data offer the unique combination of a moderately sized sample of teachers and students with lagged survey measures. We also utilize similar econometric approaches to test the relationship between teaching practice and these same attitudes and behaviors. These analyses allow us to examine the face validity of our teacher effect estimates and the extent to which they align with theory.
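The "teacher effects" framework referenced throughout this literature can be sketched, loosely, as a variance-components model: a random intercept per teacher, estimated after controlling for a lagged outcome. The snippet below is purely illustrative, using simulated data and hypothetical variable names rather than the NCTE dataset or the authors' actual specification, which also adjusts for student demographics and other controls.

```python
# Illustrative sketch of a value-added-style "teacher effects" model:
# a random intercept per teacher, controlling for a lagged outcome.
# Simulated data and variable names are hypothetical, not the NCTE data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, n_students = 40, 25            # 40 classrooms of 25 students
teacher = np.repeat(np.arange(n_teachers), n_students)
true_sd = 0.3                              # assumed sd of teacher effects
effects = rng.normal(0, true_sd, n_teachers)
prior = rng.normal(0, 1, teacher.size)     # lagged (prior-year) outcome
outcome = 0.6 * prior + effects[teacher] + rng.normal(0, 0.8, teacher.size)

df = pd.DataFrame({"teacher": teacher, "prior": prior, "outcome": outcome})

# Mixed model: fixed effect for the lagged outcome, random teacher intercept.
fit = smf.mixedlm("outcome ~ prior", df, groups=df["teacher"]).fit()
teacher_sd = float(np.sqrt(fit.cov_re.iloc[0, 0]))  # estimated effect sd
```

The estimated `teacher_sd` is the analogue of the "magnitude of teacher effects" reported in standard-deviation units in the studies cited above; comparing such estimates across different outcome measures is what yields the cross-outcome correlations discussed in this section.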

3. Data and Sample

Beginning in the 2010–2011 school year, the National Center for Teacher Effectiveness (NCTE) engaged in a three-year data collection process. Data came from participating fourth- and fifth-grade teachers (N = 310) in four anonymous, medium to large school districts on the East Coast of the United States who agreed to have their classes videotaped, complete a teacher questionnaire, and help collect a set of student outcomes. Teachers were clustered within 52 schools, with an average of six teachers per school. While NCTE focused on teachers’ math instruction, participants were generalists who taught all subject areas. This is important, as it allowed us to isolate the contribution of individual teachers to students’ attitudes and behaviors, which is considerably more challenging when students are taught by multiple teachers. It also suggests that the observation measures, which assessed teachers’ instruction during math lessons, are likely to capture aspects of their classroom practice that are common across content areas.

In Table 1 , we present descriptive statistics on participating teachers and their students. We do so for the full NCTE sample, as well as for a subsample of teachers whose students were in the project in both the current and prior years. This latter sample allowed us to capture prior measures of students’ attitudes and behaviors, a strategy that we use to increase internal validity and that we discuss in more detail below. 2 When we compare these samples, we find that teachers look relatively similar with no statistically significant differences on any observable characteristic. Reflecting national patterns, the vast majority of elementary teachers in our sample are white females who earned their teaching credential through traditional certification programs. (See Hill, Blazar, & Lynch, 2015 for a discussion of how these teacher characteristics were measured.)

Participant Demographics

Students in our samples look similar to those in many urban districts in the United States, where roughly 68% are eligible for free or reduced-price lunch, 14% are classified as in need of special education services, and 16% are identified as limited English proficient; roughly 31% are African American, 39% are Hispanic, and 28% are white ( Council of the Great City Schools, 2013 ). We do observe some statistically significant differences between student characteristics in the full sample versus our analytic subsample. For example, the percentage of students identified as limited English proficient was 20% in the full sample compared to 14% in the sample of students who were ever part of analyses drawing on our survey measures. Although variation in samples could result in dissimilar estimates across models, the overall character of our findings is unlikely to be driven by these modest differences.

3.1. Students’ Attitudes and Behaviors

As part of the expansive data collection effort, researchers administered a student survey with items (N = 18) that were adapted from other large-scale surveys including the TRIPOD, the MET project, the National Assessment of Educational Progress (NAEP), and the Trends in International Mathematics and Science Study (TIMSS) (see Appendix Table 1 for a full list of items). Items were selected based on a review of the research literature and identification of constructs thought most likely to be influenced by upper-elementary teachers. Students rated all items on a five-point Likert scale where 1 = Totally Untrue and 5 = Totally True.

We identified a parsimonious set of three outcome measures based on a combination of theory and exploratory factor analyses (see Appendix Table 1 ). 3 The first outcome, which we call Self-Efficacy in Math (10 items), is a variation on well-known constructs related to students’ effort, initiative, and perception that they can complete tasks. The second related outcome measure is Happiness in Class (5 items), which was collected in the second and third years of the study. Exploratory factor analyses suggested that these items clustered together with those from Self-Efficacy in Math to form a single construct. However, post-hoc review of these items against the psychology literature from which they were derived suggests that they can be divided into a separate domain. As above, this measure is a school-specific version of well-known scales that capture students’ affect and enjoyment ( Diener, 2000 ). Both Self-Efficacy in Math and Happiness in Class have relatively high internal consistency reliabilities (0.76 and 0.82, respectively) that are similar to those of self-reported attitudes and behaviors explored in other studies ( Duckworth et al., 2007 ; John & Srivastava, 1999 ; Tsukayama et al., 2013 ). Further, self-reported measures of similar constructs have been linked to long-term outcomes, including academic engagement and earnings in adulthood, even conditioning on cognitive ability ( King, McInerney, Ganotice, & Villarosa, 2015 ; Lyubomirsky, King, & Diener, 2005 ).

The third and final construct consists of three items that were meant to hold together and which we call Behavior in Class (internal consistency reliability is 0.74). Higher scores reflect better, less disruptive behavior. Teacher reports of students’ classroom behavior have been found to relate to antisocial behaviors in adolescence, criminal behavior in adulthood, and earnings ( Chetty et al., 2011 ; Segal, 2013 ; Moffitt et al., 2011 ; Tremblay et al., 1992 ). Our analysis differs from these other studies in the self-reported nature of the behavior outcome. That said, other studies also drawing on elementary school students found correlations between self-reported and either parent- or teacher-reported measures of behavior that were similar in magnitude to correlations between parent and teacher reports of student behavior ( Achenbach, McConaughy, & Howell, 1987 ; Goodman, 2001 ). Further, other studies have found correlations between teacher-reported behavior of elementary school students and either reading or math achievement ( r = 0.22 to 0.28; Miles & Stipek, 2006 ; Tremblay et al., 1992 ) similar to the correlation we find between students’ self-reported Behavior in Class and our two math test scores ( r = 0.24 and 0.26; see Table 2 ). Together, this evidence provides both convergent and consequential validity evidence for this outcome measure. For all three of these outcomes, we created final scales by reverse coding items with negative valence and averaging raw student responses across all available items. 4 We standardized these final scores within years, given that, for some measures, the set of survey items varied across years.
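The scale construction just described — reverse coding negatively worded items, averaging available items, and standardizing within year — can be sketched as follows. This is an illustrative reconstruction of the procedure, not the authors’ code; the array names are hypothetical.

```python
import numpy as np

def build_scale(responses, negative_items, year):
    """Reverse-code negatively worded items (1-5 Likert scale),
    average across available items, then standardize within year."""
    r = np.asarray(responses, dtype=float).copy()
    r[:, negative_items] = 6 - r[:, negative_items]   # maps 1<->5, 2<->4
    raw = np.nanmean(r, axis=1)                       # average available items
    z = np.empty_like(raw)
    for y in np.unique(year):                         # standardize within year
        m = year == y
        z[m] = (raw[m] - raw[m].mean()) / raw[m].std()
    return z
```

Standardizing within year, rather than pooling, accommodates the fact that the set of survey items varied across years.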

Descriptive Statistics for Students' Academic Performance, Attitudes, and Behaviors

For the high-stakes math test, reliability varies by district; thus, we report the lower bound of these estimates. Self-Efficacy in Math, Happiness in Class, and Behavior in Class are measured on a 1 to 5 Likert scale. Statistics were generated from all available data.

3.2. Student Demographic and Test Score Information

Student demographic and achievement data came from district administrative records. Demographic data include gender, race/ethnicity, free- or reduced-price lunch (FRPL) eligibility, limited English proficiency (LEP) status, and special education (SPED) status. These records also included current- and prior-year test scores in math and English Language Arts (ELA) on state assessments, which we standardized within districts by grade, subject, and year using the entire sample of students.

The project also administered a low-stakes mathematics assessment to all students in the study. Internal consistency reliability is 0.82 or higher for each form across grade levels and school years ( Hickman, Fu, & Hill, 2012 ). We used this assessment in addition to high-stakes tests given that teacher effects on two outcomes that aim to capture similar underlying constructs (i.e., math achievement) provide a unique point of comparison when examining the relationship between teacher effects on student outcomes that are less closely related (i.e., math achievement versus attitudes and behaviors). Indeed, students’ high- and low-stakes math test scores are correlated more strongly ( r = 0.70) than any other two outcomes (see Table 1 ). 5

3.3. Mathematics Lessons

Teachers’ mathematics lessons were captured over a three-year period, with an average of three lessons per teacher per year. 6 Trained raters scored these lessons on two established observational instruments, the CLASS and the MQI. Analyses of these same data show that items cluster into four main factors ( Blazar et al., 2015 ). The two dimensions from the CLASS instrument capture general teaching practices: Emotional Support focuses on teachers’ interactions with students and the emotional environment in the classroom, and is thought to increase students’ social and emotional development; and Classroom Organization focuses on behavior management and productivity of the lesson, and is thought to improve students’ self-regulatory behaviors ( Pianta & Hamre, 2009 ). 7 The two dimensions from the MQI capture mathematics-specific practices: Ambitious Mathematics Instruction focuses on the complexity of the tasks that teachers provide to their students and their interactions around the content, thus corresponding to the set of professional standards described by NCTM (1989 , 2014 ) and many elements contained within the Common Core State Standards for Mathematics ( National Governors Association Center for Best Practices, 2010 ); Mathematical Errors identifies any mathematical errors or imprecisions the teacher introduces into the lesson. Both dimensions from the MQI are linked to teachers’ mathematical knowledge for teaching and, in turn, to students’ math achievement ( Blazar, 2015 ; Hill et al., 2008 ; Hill, Schilling, & Ball, 2004 ). Correlations between dimensions range from roughly 0 (between Emotional Support and Mathematical Errors ) to 0.46 (between Emotional Support and Classroom Organization ; see Table 3 ).

Descriptive Statistics for CLASS and MQI Dimensions

Intraclass correlations were adjusted for the modal number of lessons. CLASS items (from Emotional Support and Classroom Organization) were scored on a scale from 1 to 7. MQI items (from Ambitious Instruction and Errors) were scored on a scale from 1 to 3. Statistics were generated from all available data.

We estimated reliability for these metrics by calculating the amount of variance in teacher scores that is attributable to the teacher (the intraclass correlation [ICC]), adjusted for the modal number of lessons. These estimates are: 0.53, 0.63, 0.74, and 0.56 for Emotional Support, Classroom Organization, Ambitious Mathematics Instruction , and Mathematical Errors , respectively (see Table 3 ). Though some of these estimates are lower than conventionally acceptable levels (0.7), they are consistent with those generated from similar studies ( Kane & Staiger, 2012 ). We standardized scores within the full sample of teachers to have a mean of zero and a standard deviation of one.
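The standard way to adjust a single-lesson ICC to the reliability of a score averaged over several lessons is the Spearman-Brown prophecy formula; a minimal sketch, assuming that is the adjustment used here:

```python
def adjusted_icc(icc_single, n_lessons):
    """Spearman-Brown prophecy formula: reliability of a teacher-level
    score formed by averaging n_lessons lessons, given the single-lesson
    ICC (the share of lesson-score variance attributable to the teacher)."""
    return n_lessons * icc_single / (1 + (n_lessons - 1) * icc_single)
```

For example, a single-lesson ICC of 0.27 averaged over three lessons yields an adjusted reliability of roughly 0.53 (the input values here are illustrative, not taken from Table 3).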

4. Empirical Strategy

4.1. Estimating Teacher Effects on Students’ Attitudes and Behaviors

Like others who aim to examine the contribution of individual teachers to student outcomes, we began by specifying an education production function model of each outcome for student i in district d , school s , grade g , class c with teacher j at time t :

OUTCOME idsgjct is used interchangeably for both math test scores and students’ attitudes and behaviors, which we modeled in separate equations as a cubic function of students’ prior achievement, A it−1 , in both math and ELA on the high-stakes district tests; 8 demographic characteristics, X it , including gender, race, FRPL eligibility, SPED status, and LEP status; these same test-score variables and demographic characteristics averaged to the class level, X̄ itc ; and district-by-grade-by-year fixed effects, τ dgt , that account for the scaling of the high-stakes tests. The residual portion of the model can be decomposed into a teacher effect, µ j , which is our main parameter of interest and captures the contribution of teachers to student outcomes above and beyond factors already controlled for in the model; a class effect, δ jc , which is estimated by observing teachers over multiple school years; and a student-specific error term, ε idsgjct . 9
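In symbols, the model just described can be written as follows (a reconstruction from the verbal description; the function f(·) and the coefficient symbols π and φ are our notation):

```latex
\text{OUTCOME}_{idsgjct} = f\left(A_{it-1}\right) + \pi X_{it} + \varphi \bar{X}_{itc}
  + \tau_{dgt} + \mu_j + \delta_{jc} + \varepsilon_{idsgjct} \quad (1)
```

where f(·) denotes the cubic polynomial in prior math and ELA achievement.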

The key identifying assumption of this model is that teacher effect estimates are not biased by non-random sorting of students to teachers. Recent experimental ( Kane, McCaffrey, Miller, & Staiger, 2013 ) and quasi-experimental ( Chetty et al., 2014 ) analyses provide strong empirical support for this claim when student achievement is the outcome of interest. However, much less is known about bias and sorting mechanisms when other outcomes are used. For example, it is quite possible that students were sorted to teachers based on their classroom behavior in ways that were unrelated to their prior achievement. To address this possibility, we made two modifications to equation (1) . First, we included school fixed effects, ω s , to account for sorting of students and teachers across schools. This means that estimates rely only on within-school variation, which has been common practice in the literature estimating teacher effects on student achievement. In their review of this literature, Hanushek and Rivkin (2010) propose ignoring the between-school component because it is “surprisingly small” and because including this component leads to “potential sorting, testing, and other interpretative problems” (p. 268). Other recent studies estimating teacher effects on student outcomes beyond test scores have used this same approach ( Backes & Hansen, 2015 ; Gershenson, 2016 ; Jackson, 2012 ; Jennings & DiPrete, 2010 ; Ladd & Sorensen, 2015 ; Ruzek et al., 2015 ). Another important benefit of using school fixed effects is that this approach minimizes the possibility of reference bias in our self-reported measures ( West et al., 2016 ; Duckworth & Yeager, 2015 ). Differences in school-wide norms around behavior and effort may change the implicit standard of comparison (i.e., reference group) that students use to judge their own behavior and effort.

Restricting comparisons to other teachers and students within the same school minimizes this concern. As a second modification for models that predict each of our three student survey measures, we included OUTCOME it −1 on the right-hand side of the equation in addition to prior achievement – that is, when predicting students’ Behavior in Class , we controlled for students’ self-reported Behavior in Class in the prior year. 10 This strategy helps account for within-school sorting on factors other than prior achievement.

Using equation (1) , we estimated the variance of µ j , which is the stable component of teacher effects. We report the standard deviation of these estimates across outcomes. This parameter captures the magnitude of the variability of teacher effects. With the exception of teacher effects on students’ Happiness in Class , where survey items were not available in the first year of the study, we included δ jc in order to separate out the time-varying portion of teacher effects, combined with peer effects and any other class-level shocks. The fact that we are able to separate class effects from teacher effects is an important extension of prior studies examining teacher effects on outcomes beyond test scores, many of which only observed teachers at one point in time.

Following Chetty et al. (2011) , we estimated the magnitude of the variance of teacher effects using a direct, model-based estimate derived via restricted maximum likelihood estimation. This approach produces a consistent estimator for the true variance of teacher effects ( Raudenbush & Bryk, 2002 ). Calculating the variation across individual teacher effect estimates using Ordinary Least Squares regression would bias our variance estimates upward because it would conflate true variation with estimation error, particularly in instances where only a handful of students are attached to each teacher. Alternatively, estimating the variation in post-hoc predicted “shrunken” empirical Bayes estimates would bias our variance estimate downward, increasingly so as measurement error grows (Jacob & Lefgren, 2005).
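The direction of each bias can be illustrated with a small simulation (the variance components and sample sizes below are illustrative values, not the paper’s data):

```python
import numpy as np

rng = np.random.default_rng(0)
sd_teacher, sd_student = 0.15, 1.0      # illustrative SDs of effects and noise
n_teachers, n_students = 500, 25        # students observed per teacher

true_mu = rng.normal(0.0, sd_teacher, n_teachers)   # true teacher effects
sampling_err = rng.normal(0.0, sd_student / np.sqrt(n_students), n_teachers)
mu_ols = true_mu + sampling_err         # unshrunken (OLS-style) estimates

# Empirical Bayes shrinkage toward the grand mean of zero
noise_var = sd_student**2 / n_students
shrinkage = sd_teacher**2 / (sd_teacher**2 + noise_var)
mu_eb = shrinkage * mu_ols

# SD across OLS estimates overstates the truth (it folds in estimation
# error); SD across shrunken estimates understates it.
print(np.std(true_mu), np.std(mu_ols), np.std(mu_eb))
```

A model-based (REML) estimate targets the middle quantity, the variance of the true effects, directly.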

4.2. Estimating Teaching Effects on Students’ Attitudes and Behaviors

We examined the contribution of teachers’ classroom practices to our set of student outcomes by estimating a variation of equation (1) :

This multi-level model includes the same set of control variables as above in order to account for the non-random sorting of students to teachers and for factors beyond teachers’ control that might influence each of our outcomes. We further included a vector of teacher j ’s observation scores, OBSERVATION jl,−t , where l indexes the four dimensions of teaching practice and −t denotes that scores come from years other than t. The coefficients on these variables are our main parameters of interest and can be interpreted as the change in standard deviation units for each outcome associated with exposure to teaching practice one standard deviation above the mean.

One concern when relating observation scores to student survey outcomes is that they may capture the same behaviors. For example, teachers may receive credit on the Classroom Organization domain when their students demonstrate orderly behavior. In this case, we would have the same observed behaviors on both the left and right side of our equation relating instructional quality to student outcomes, which would inflate our teaching effect estimates. A related concern is that the specific students in the classroom may influence teachers’ instructional quality ( Hill et al., 2015 ; Steinberg & Garrett, 2016 ; Whitehurst, Chingos, & Lindquist, 2014 ). While the direction of bias is not as clear here – as either lesser- or higher-quality teachers could be sorted to harder-to-educate classrooms – this possibility also could lead to incorrect estimates. To avoid these sources of bias, we only included lessons captured in years other than those in which student outcomes were measured, denoted by −t in the subscript of OBSERVATION jl,−t . To the extent that instructional quality varies across years, using out-of-year observation scores creates a lower-bound estimate of the true relationship between instructional quality and student outcomes. We consider this an important tradeoff to minimize potential bias. We used predicted shrunken observation score estimates that account for the fact that teachers contributed different numbers of lessons to the project, and fewer lessons could lead to measurement error in these scores ( Hill, Charalambous, & Kraft, 2012 ). 11

An additional concern for identification is the endogeneity of observed classroom quality. In other words, specific teaching practices are not randomly assigned to teachers. Our preferred analytic approach attempted to account for potential sources of bias by conditioning estimates of the relationship between one dimension of teaching practice and student outcomes on the three other dimensions. An important caveat here is that we only observed teachers’ instruction during math lessons and, thus, may not capture important pedagogical practices teachers used with these students when teaching other subjects. Including dimensions from the CLASS instrument, which are meant to capture instructional quality across subject areas ( Pianta & Hamre, 2009 ), helps account for some of this concern. However, given that we were not able to isolate one dimension of teaching quality from all others, we consider this approach as providing suggestive rather than conclusive evidence on the underlying causal relationship between teaching practice and students’ attitudes and behaviors.

4.3. Estimating the Relationship Between Teacher Effects Across Multiple Student Outcomes

In our third and final set of analyses, we examined whether teachers who are effective at raising math test scores are equally effective at developing students’ attitudes and behaviors. To do so, we drew on equation (1) to estimate µ̂ j for each outcome and teacher j . Following Chetty et al. (2014) , we used post-hoc predicted “shrunken” empirical Bayes estimates of µ̂ j derived from equation (1) . Then, we generated a correlation matrix of these teacher effect estimates.

Despite attempts to increase the precision of these estimates through empirical Bayes estimation, estimates of individual teacher effects are measured with error that will attenuate these correlations ( Spearman, 1904 ). Thus, if we were to find weak to moderate correlations between different measures of teacher effectiveness, this could reflect true multidimensionality in teacher effectiveness or could simply result from measurement challenges, including the reliability of individual constructs ( Chin & Goldhaber, 2015 ). For example, prior research suggests that different tests of students’ academic performance can lead to different teacher rankings, even when those tests measure similar underlying constructs ( Lockwood et al., 2007 ; Papay, 2011 ). To address this concern, we focus our discussion on relative rankings of correlations between teacher effect estimates rather than their absolute magnitudes. Specifically, we examine how correlations between teacher effects on two closely related outcomes (e.g., two math achievement tests) compare with correlations between teacher effects on outcomes that aim to capture different underlying constructs. In light of research highlighted above, we did not expect the correlation between teacher effects on the two math tests to be 1 (or, for that matter, close to 1). However, we hypothesized that these relationships should be stronger than the relationship between teacher effects on students’ math performance and effects on their attitudes and behaviors.
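The attenuation referenced here (Spearman, 1904) has a simple closed form: an observed correlation understates the true correlation by a factor of the square root of the product of the two measures’ reliabilities. A sketch, with illustrative numbers rather than values from the paper:

```python
def disattenuate(r_obs, rel_x, rel_y):
    """Spearman (1904) correction for attenuation: recover the correlation
    between true scores from an observed correlation and the reliability
    of each measure."""
    return r_obs / (rel_x * rel_y) ** 0.5
```

For instance, an observed correlation of 0.30 between two measures with reliabilities 0.7 and 0.8 implies a true-score correlation of roughly 0.40.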

5. Results

5.1. Do Teachers Impact Students’ Attitudes and Behaviors?

We begin by presenting results of the magnitude of teacher effects in Table 4 . Here, we observe sizable teacher effects on students’ attitudes and behaviors that are similar to teacher effects on students’ academic performance. Starting first with teacher effects on students’ academic performance, we find that a one standard deviation difference in teacher effectiveness is equivalent to a 0.17 sd or 0.18 sd difference in students’ math achievement. In other words, relative to an average teacher, teachers at the 84th percentile of the distribution of effectiveness move the median student up to roughly the 57th percentile of math achievement. Notably, these findings are similar to those from other studies that also estimate within-school teacher effects in large administrative datasets ( Hanushek & Rivkin, 2010 ). This suggests that our use of school fixed effects with a more limited number of teachers observed within a given school does not appear to overly restrict our identifying variation. In Online Appendix A , where we present the magnitude of teacher effects from alternative model specifications, we show that results are robust to models that exclude school fixed effects or replace school fixed effects with observable school characteristics. Estimated teacher effects on students’ self-reported Self-Efficacy in Math and Behavior in Class are 0.14 sd and 0.15 sd, respectively. The largest teacher effects we observe are on students’ Happiness in Class , of 0.31 sd. Given that we do not have multiple years of data to separate out class effects for this measure, we interpret this estimate as the upper bound of true teacher effects on Happiness in Class. Rescaling this estimate by the ratio of teacher effects with and without class effects for Self-Efficacy in Math (0.14/0.19 = 0.74; see Online Appendix A ) produces an estimate of stable teacher effects on Happiness in Class of 0.23 sd, still larger than effects for other outcomes.
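The translation from effect sizes to percentiles above follows directly from the standard normal distribution: a median student who gains 0.17 sd moves to the percentile given by the normal CDF evaluated at 0.17. A quick check:

```python
from statistics import NormalDist

# A median (50th percentile) student assigned a teacher 1 SD above average
# in effectiveness gains ~0.17 SD in math achievement, landing at:
new_percentile = NormalDist().cdf(0.17) * 100
print(round(new_percentile))  # 57
```

The same arithmetic applies to any of the effect sizes reported in this section.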

Teacher Effects on Students' Academic Performance, Attitudes, and Behaviors

Notes: Cells contain estimates from separate multi-level regression models.

All effects are statistically significant at the 0.05 level.

5.2. Do Specific Teaching Practices Impact Students’ Attitudes and Behaviors?

Next, we examine whether certain characteristics of teachers’ instructional practice help explain the sizable teacher effects described above. We present unconditional estimates in Table 5 Panel A, where the relationship between one dimension of teaching practice and student outcomes is estimated without controlling for the other three dimensions. Thus, cells contain estimates from separate regression models. In Panel B, we present conditional estimates, where all four dimensions of teaching quality are included in the same regression model. Here, columns contain estimates from separate regression models. We present all estimates as standardized effect sizes, which allows us to make comparisons across models and outcome measures. Unconditional and conditional estimates generally are quite similar. Therefore, we focus our discussion on our preferred conditional estimates.

Teaching Effects on Students' Academic Performance, Attitudes, and Behaviors

In Panel A, cells contain estimates from separate regression models. In Panel B, columns contain estimates from separate regression models, where estimates are conditioned on other teaching practices. All models control for student and class characteristics, school fixed effects, and district-by-grade-by-year fixed effects, and include teacher random effects. Models predicting all outcomes except for Happiness in Class also include class random effects.

We find that students’ attitudes and behaviors are predicted by both general and content-specific teaching practices in ways that generally align with theory. For example, teachers’ Emotional Support is positively associated with the two closely related student constructs, Self-Efficacy in Math and Happiness in Class . Specifically, a one standard deviation increase in teachers’ Emotional Support is associated with a 0.14 sd increase in students’ Self-Efficacy in Math and a 0.37 sd increase in students’ Happiness in Class . These findings make sense given that Emotional Support captures teacher behaviors such as their sensitivity to students, regard for students’ perspective, and the extent to which they create a positive climate in the classroom. As a point of comparison, these estimates are substantively larger than those between principal ratings of teachers’ ability to improve test scores and their actual ability to do so, which fall in the range of 0.02 sd to 0.08 sd ( Jacob & Lefgren, 2008 ; Rockoff, Staiger, Kane, & Taylor, 2012 ; Rockoff & Speroni, 2010 ).

We also find that Classroom Organization , which captures teachers’ behavior management skills and productivity in delivering content, is positively related to students’ reports of their own Behavior in Class (0.08 sd). This suggests that teachers who create an orderly classroom likely create a model for students’ own ability to self-regulate. Despite this positive relationship, we find that Classroom Organization is negatively associated with Happiness in Class (−0.23 sd), suggesting that classrooms overly focused on routines and management may dampen students’ enjoyment in class. At the same time, this is one instance where our estimate is sensitive to whether or not other teaching characteristics are included in the model. When we estimate the relationship between teachers’ Classroom Organization and students’ Happiness in Class without controlling for the three other dimensions of teaching quality, this estimate approaches 0 and is no longer statistically significant. 12 We return to a discussion of the potential tradeoffs between Classroom Organization and students’ Happiness in Class in our conclusion.

Finally, we find that the degree to which teachers commit Mathematical Errors is negatively related to students’ Self-Efficacy in Math (−0.09 sd) and Happiness in Class (−0.18 sd). These findings illuminate how a teacher’s ability to present mathematics with clarity and without serious mistakes is related to their students’ perceptions that they can complete math tasks and their enjoyment in class.

Comparatively, when predicting scores on both math tests, we only find one marginally significant relationship – between Mathematical Errors and the high-stakes math test (−0.02 sd). For two other dimensions of teaching quality, Emotional Support and Ambitious Mathematics Instruction , estimates are signed the way we would expect and with similar magnitudes, though they are not statistically significant. Given the consistency of estimates across the two math tests and our restricted sample size, it is possible that non-significant results are due to limited statistical power. 13 At the same time, even if true relationships exist between these teaching practices and students’ math test scores, they likely are weaker than those between teaching practices and students’ attitudes and behaviors. For example, we find that the 95% confidence intervals relating Emotional Support to Self-Efficacy in Math [0.068, 0.202] and Happiness in Class [0.162, 0.544] do not overlap with the 95% confidence intervals for any of the point estimates predicting math test scores. We interpret these results as an indication that very little is still known about how specific classroom teaching practices are related to student achievement in math. 14

In Online Appendix B , we show that results are robust to a variety of different specifications, including (1) adjusting observation scores for characteristics of students in the classroom, (2) controlling for teacher background characteristics (i.e., teaching experience, math content knowledge, certification pathway, education), and (3) using raw out-of-year observation scores (rather than shrunken scores). This suggests that our approach likely accounts for many potential sources of bias in our teaching effect estimates.

5.3. Are Teachers Equally Effective at Raising Different Student Outcomes?

In Table 6 , we present correlations between teacher effects on each of our student outcomes. The fact that teacher effects are measured with error makes it difficult to estimate the precise magnitude of these correlations. Instead, we describe relative differences in correlations, focusing on the extent to which teacher effects within outcome type – i.e., teacher effects on the two math achievement tests or effects on students’ attitudes and behaviors – are similar to or differ from correlations between teacher effects across outcome types. We illustrate these differences in Figure 1 , where Panel A presents scatter plots of these relationships between teacher effects within outcome type and Panel B does the same across outcome type. Recognizing that not all of our survey outcomes are meant to capture the same underlying construct, we also describe relative differences in correlations between teacher effects on these different measures. In Online Appendix C , we find that an extremely conservative adjustment that scales correlations by the inverse of the square root of the product of the reliabilities leads to a similar overall pattern of results.

Figure 1.

Scatter plots of teacher effects across outcomes. Solid lines represent the best-fit regression line.

Correlations Between Teacher Effects on Students' Academic Performance, Attitudes, and Behaviors

Standard errors in parentheses. See Table 4 for sample sizes used to calculate teacher effect estimates. The sample for each correlation is the minimum number of teachers between the two measures.

Examining the correlations of teacher effect estimates reveals that individual teachers vary considerably in their ability to impact different student outcomes. As hypothesized, we find the strongest correlations between teacher effects within outcome type. Similar to Corcoran, Jennings, and Beveridge (2012) , we estimate a correlation of 0.64 between teacher effects on our high- and low-stakes math achievement tests. We also observe a strong correlation of 0.49 between teacher effects on two of the student survey measures, students’ Behavior in Class and Self-Efficacy in Math . Comparatively, the correlations between teacher effects across outcome type are much weaker. Examining the scatter plots in Figure 1 , we observe much more dispersion around the best-fit line in Panel B than in Panel A. The strongest relationship we observe across outcome types is between teacher effects on the low-stakes math test and effects on Self-Efficacy in Math ( r = 0.19). The lower bound of the 95% confidence interval around the correlation between teacher effects on the two achievement measures [0.56, 0.72] does not overlap with the 95% confidence interval of the correlation between teacher effects on the low-stakes math test and effects on Self-Efficacy in Math [−0.01, 0.39], indicating that these two correlations are substantively and statistically significantly different from each other. Using this same approach, we also can distinguish the correlation describing the relationship between teacher effects on the two math tests from all other correlations relating teacher effects on test scores to effects on students’ attitudes and behaviors. We caution against placing too much emphasis on the negative correlations between teacher effects on test scores and effects on Happiness in Class ( r = −0.09 and −0.21 for the high- and low-stakes tests, respectively). 
Given the limited precision of these estimates, we can neither reject the null hypothesis of no relationship nor rule out weak positive or negative correlations among these measures.
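
The interval-comparison logic above can be sketched in a few lines. A standard way to construct an approximate 95% confidence interval around a sample correlation is the Fisher z-transformation; the sample size used below is an illustrative placeholder (the paper's actual Ns come from its Table 4), so the resulting intervals only roughly mimic those reported.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via the Fisher z-transformation."""
    z = math.atanh(r)            # z = 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)  # standard error on the transformed scale
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Illustrative sample size of 310 teachers per correlation:
print(fisher_ci(0.64, 310))  # teacher effects: high- vs. low-stakes math tests
print(fisher_ci(0.19, 310))  # teacher effects: low-stakes math vs. Self-Efficacy in Math
```

With roughly 300 teachers per correlation, the interval around r = 0.64 sits entirely above the interval around r = 0.19, mirroring the non-overlap argument in the text.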

Although it is useful to make comparisons between the strength of the relationships between teacher effects on different measures of students’ attitudes and behaviors, measurement error limits our ability to do so precisely. At face value, we find correlations between teacher effects on Happiness in Class and effects on the two other survey measures ( r = 0.26 for Self-Efficacy in Math and 0.21 for Behavior in Class ) that are weaker than the correlation between teacher effects on Self-Efficacy in Math and effects on Behavior in Class described above ( r = 0.49). One possible interpretation of these findings is that teachers who improve students’ Happiness in Class are not equally effective at raising other attitudes and behaviors. For example, teachers might make students happy in class in unconstructive ways that do not also benefit their self-efficacy or behavior. At the same time, these correlations between teacher effects on Happiness in Class and the other two survey measures have large confidence intervals, likely due to imprecision in our estimate of teacher effects on Happiness in Class . Thus, we are not able to distinguish either correlation from the correlation between teacher effects on Behavior in Class and effects on Self-Efficacy in Math .

6. Discussion and Conclusion

6.1. Relationship Between Our Findings and Prior Research

The teacher effectiveness literature has profoundly shaped education policy over the last decade and has served as the catalyst for sweeping reforms around teacher recruitment, evaluation, development, and retention. However, by and large, this literature has focused on teachers’ contribution to students’ test scores. Even research studies such as the Measures of Effective Teaching project and new teacher evaluation systems that focus on “multiple measures” of teacher effectiveness ( Center on Great Teachers and Leaders, 2013 ; Kane et al., 2013 ) generally attempt to validate other measures, such as observations of teaching practice, by examining their relationship to estimates of teacher effects on students’ academic performance.

Our study extends an emerging body of research examining the effect of teachers on student outcomes beyond test scores. In many ways, our findings align with conclusions drawn from previous studies that also identify teacher effects on students’ attitudes and behaviors ( Jennings & DiPrete, 2010 ; Kraft & Grace, 2016 ; Ruzek et al., 2015 ), as well as weak relationships between different measures of teacher effectiveness ( Gershenson, 2016 ; Jackson, 2012 ; Kane & Staiger, 2012 ). To our knowledge, this study is the first to identify teacher effects on measures of students’ self-efficacy in math and happiness in class, as well as on a self-reported measure of student behavior. These findings suggest that teachers can and do help develop attitudes and behaviors among their students that are important for success in life. By interpreting teacher effects alongside teaching effects, we also provide strong face and construct validity for our teacher effect estimates. We find that improvements in upper-elementary students’ attitudes and behaviors are predicted by general teaching practices in ways that align with hypotheses laid out by instrument developers ( Pianta & Hamre, 2009 ). Findings linking errors in teachers’ presentation of math content to students’ self-efficacy in math, in addition to their math performance, also are consistent with theory ( Bandura et al., 1996 ). Finally, the broad data collection effort from NCTE allows us to examine relative differences in relationships between measures of teacher effectiveness, thus avoiding some concerns about how best to interpret correlations that differ substantively across studies ( Chin & Goldhaber, 2015 ). We find that correlations between teacher effects on student outcomes that aim to capture different underlying constructs (e.g., math test scores and behavior in class) are weaker than correlations between teacher effects on two outcomes that are much more closely related (e.g., math achievement).

6.2. Implications for Policy

These findings can inform policy in several key ways. First, our findings may contribute to the recent push to incorporate measures of students’ attitudes and behaviors – and teachers’ ability to improve these outcomes – into accountability policy (see Duckworth, 2016 ; Miller, 2015 ; Zernike, 2016 for discussion of these efforts in the press). After passage of the Every Student Succeeds Act (ESSA), states now are required to select a nonacademic indicator with which to assess students’ success in school ( ESSA, 2015 ). Including measures of students’ attitudes and behaviors in accountability or evaluation systems, even with very small associated weights, could serve as a strong signal that schools and educators should value and attend to developing these skills in the classroom.

At the same time, like other researchers ( Duckworth & Yeager, 2015 ), we caution against a rush to incorporate these measures into high-stakes decisions. The science of measuring students’ attitudes and behaviors is relatively new compared to the long history of developing valid and reliable assessments of cognitive aptitude and content knowledge. Most existing measures, including those used in this study, were developed for research purposes rather than large-scale testing with repeated administrations. Open questions remain about whether reference bias substantially distorts comparisons across schools. Similar to previous studies, we include school fixed effects in all of our models, which helps reduce this and other potential sources of bias. However, as a result, our estimates are restricted to within-school comparisons of teachers and cannot be applied to inform the type of across-school comparisons that districts typically seek to make. There also are outstanding questions regarding the susceptibility of these measures to “survey” coaching when high-stakes incentives are attached. Such incentives likely would render teacher or self-assessments of students’ attitudes and behaviors inappropriate. Some researchers have started to explore other ways to capture students’ attitudes and behaviors, including objective performance-based tasks and administrative proxies such as attendance, suspensions, and participation in extracurricular activities ( Hitt, Trivitt, & Cheng, 2016 ; Jackson, 2012 ; Whitehurst, 2016 ). This line of research shows promise but still is in its early phases. Further, although our modeling strategy aims to reduce bias due to non-random sorting of students to teachers, additional evidence is needed to assess the validity of this approach. 
Without first addressing these concerns, we believe that adding untested measures into accountability systems could lead to superficial and, ultimately, counterproductive efforts to support the positive development of students’ attitudes and behaviors.

An alternative approach to incorporating teacher effects on students’ attitudes and behaviors into teacher evaluation may be through observations of teaching practice. Our findings suggest that specific domains captured on classroom observation instruments (i.e., Emotional Support and Classroom Organization from the CLASS and Mathematical Errors from the MQI) may serve as indirect measures of the degree to which teachers impact students’ attitudes and behaviors. One benefit of this approach is that districts commonly collect related measures as part of teacher evaluation systems ( Center on Great Teachers and Leaders, 2013 ), and such measures are not restricted to teachers who work in tested grades and subjects.

Similar to Whitehurst (2016) , we also see alternative uses of teacher effects on students’ attitudes and behaviors that fall within and would enhance existing school practices. In particular, measures of teachers’ effectiveness at improving students’ attitudes and behaviors could be used to identify areas for professional growth and connect teachers with targeted professional development. This suggestion is not new and, in fact, builds on the vision and purpose of teacher evaluation described by many other researchers ( Darling-Hammond, 2013 ; Hill & Grossman, 2013 ; Papay, 2012 ). However, in order to leverage these measures for instructional improvement, we add an important caveat: performance evaluations – whether formative or summative – should avoid placing teachers into a single performance category whenever possible. Although many researchers and policymakers argue for creating a single weighted composite of different measures of teachers’ effectiveness ( Center on Great Teachers and Leaders, 2013 ; Kane et al., 2013 ), doing so likely oversimplifies the complex nature of teaching. For example, a teacher who excels at developing students’ math content knowledge but struggles to promote joy in learning or students’ own self-efficacy in math is a very different teacher than one who is middling across all three measures. Looking at these two teachers’ composite scores would suggest they are similarly effective. A single overall evaluation score lends itself to a systematized process for making binary decisions such as whether to grant teachers tenure, but such decisions would be better informed by recognizing and considering the full complexity of classroom practice.

We also see opportunities to maximize students’ exposure to the range of teaching skills we examine through strategic teacher assignments. Creating a teacher workforce skilled in most or all areas of teaching practice is, in our view, the ultimate goal. However, this goal likely will require substantial changes to teacher preparation programs and curriculum materials, as well as new policies around teacher recruitment, evaluation, and development. In middle and high schools, content-area specialization or departmentalization often is used to ensure that students have access to teachers with skills in distinct content areas. Some, including the National Association of Elementary School Principals, also see this as a viable strategy at the elementary level ( Chan & Jarman, 2004 ). Similar approaches may be taken to expose students to a collection of teachers who together can develop a range of academic skills, attitudes and behaviors. For example, when configuring grade-level teams, principals may pair a math teacher who excels in her ability to improve students’ behavior with an ELA or reading teacher who excels in his ability to improve students’ happiness and engagement. Viewing teachers as complements to each other may help maximize outcomes within existing resource constraints.

Finally, we consider the implications of our findings for the teaching profession more broadly. While our findings lend empirical support to research on the multidimensional nature of teaching ( Cohen, 2011 ; Lampert, 2001 ; Pianta & Hamre, 2009 ), we also identify tensions inherent in this sort of complexity and potential tradeoffs between some teaching practices. In our primary analyses, we find that high-quality instruction around classroom organization is positively related to students’ self-reported behavior in class but negatively related to their happiness in class. Our results here are not conclusive, as the negative relationship between classroom organization and students’ happiness in class is sensitive to model specification. However, if there indeed is a negative causal relationship, it raises questions about the relative benefits of fostering orderly classroom environments for learning versus supporting student engagement by promoting positive experiences with schooling. Our own experience as educators and researchers suggests this need not be a fixed tradeoff. Future research should examine ways in which teachers can develop classroom environments that engender both constructive classroom behavior and students’ happiness in class. As our study draws on a small sample of students who had current and prior-year scores for Happiness in Class , we also encourage new studies with greater statistical power that may be able to uncover additional complexities (e.g., non-linear relationships) in these sorts of data.

Our findings also demonstrate a need to integrate general and more content-specific perspectives on teaching, a historical challenge in both research and practice ( Grossman & McDonald, 2008 ; Hamre et al., 2013 ). We find that both math-specific and general teaching practices predict a range of student outcomes. Yet, particularly at the elementary level, teachers’ math training often is overlooked. Prospective elementary teachers often gain licensure without taking college-level math classes; in many states, they do not need to pass the math sub-section of their licensure exam in order to earn a passing grade overall ( Epstein & Miller, 2011 ). Striking the right balance between general and content-specific teaching practices is not a trivial task, but it likely is a necessary one.

For decades, efforts to improve the quality of the teacher workforce have focused on teachers’ abilities to raise students’ academic achievement. Our work further illustrates the potential and importance of expanding this focus to include teachers’ abilities to promote students’ attitudes and behaviors that are equally important for students’ long-term success.

Supplementary Material

Acknowledgments

The research reported here was supported in part by the Institute of Education Sciences, U.S. Department of Education, through Grant R305C090023 to the President and Fellows of Harvard College to support the National Center for Teacher Effectiveness. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education. Additional support came from the William T. Grant Foundation, the Albert Shanker Institute, and Mathematica Policy Research’s summer fellowship.

Appendix Table 1

Factor Loadings for Items from the Student Survey

Notes: Estimates drawn from all available data. Loadings of roughly 0.4 or higher are highlighted to identify patterns.

1 Although student outcomes beyond test scores often are referred to as “non-cognitive” skills, our preference, like others ( Duckworth & Yeager, 2015 ; Farrington et al., 2012 ), is to refer to each competency by name. For brevity, we refer to them as “attitudes and behaviors,” which closely characterizes the measures we focus on in this paper.

2 Analyses below include additional subsamples of teachers and students. In analyses that predict students’ survey response, we included between 51 and 111 teachers and between 548 and 1,529 students. This range is due to the fact that some survey items were not available in the first year of the study. Further, in analyses relating domains of teaching practice to student outcomes, we further restricted our sample to teachers who themselves were part of the study for more than one year, which allowed us to use out-of-year observation scores that were not confounded with the specific set of students in the classroom. This reduced our analysis samples to between 47 and 93 teachers and between 517 and 1,362 students when predicting students’ attitudes and behaviors, and 196 teachers and 8,660 students when predicting math test scores. Descriptive statistics and formal comparisons of other samples show similar patterns as those presented in Table 1 .

3 We conducted factor analyses separately by year, given that additional items were added in the second and third years to help increase reliability. In the second and third years, each of the two factors has an eigenvalue above one, a conventionally used threshold for selecting factors ( Kline, 1994 ). Even though the second factor consists of three items that also have loadings on the first factor between 0.35 and 0.48 – often taken as the minimum acceptable factor loading ( Field, 2013 ; Kline, 1994 ) – this second factor explains roughly 20% more of the variation across teachers and, therefore, has strong support for a substantively separate construct ( Field, 2013 ; Tabachnick & Fidell, 2001 ). In the first year of the study, the eigenvalue on this second factor is less strong (0.78), and the two items that load onto it also load onto the first factor.
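
The eigenvalue-above-one threshold used in this footnote is the Kaiser criterion. A toy illustration, using an invented five-item correlation matrix (not the NCTE survey data) in which three items form one construct and two form another:

```python
import numpy as np

# Hypothetical correlation matrix: items 1-3 intercorrelate at .5,
# items 4-5 at .5, with weak cross-correlations of .1
R = np.array([
    [1.0, 0.5, 0.5, 0.1, 0.1],
    [0.5, 1.0, 0.5, 0.1, 0.1],
    [0.5, 0.5, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.5],
    [0.1, 0.1, 0.1, 0.5, 1.0],
])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
n_factors = int((eigvals > 1.0).sum())          # Kaiser criterion: retain eigenvalues > 1
print(eigvals, n_factors)
```

Only the first two eigenvalues exceed one, so the criterion retains two factors, matching the two-construct structure built into the matrix.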

4 Depending on the outcome, between 4% and 8% of students were missing a subset of items from survey scales. In these instances, we created final scores by averaging across all available information.

5 Coding of items from both the low- and high-stakes tests also identifies a large degree of overlap in terms of content coverage and cognitive demand ( Lynch, Chin, & Blazar, 2015 ). All tests focused most on numbers and operations (40% to 60%), followed by geometry (roughly 15%) and algebra (15% to 20%). By asking students to provide explanations of their thinking and to solve non-routine problems such as identifying patterns, the low-stakes test also was similar to the high-stakes tests in two districts; in the other two districts, items often asked students to execute basic procedures.

6 As described by Blazar (2015) , capture occurred with a three-camera, digital recording device and lasted between 45 and 60 minutes. Teachers were allowed to choose the dates for capture in advance and directed to select typical lessons and exclude days on which students were taking a test. Although it is possible that these lessons differed from a teacher's general instruction, teachers did not have any incentive to select lessons strategically, as no rewards or sanctions were involved with data collection or analyses. In addition, analyses from the MET project indicate that teachers are ranked almost identically when they choose lessons themselves compared to when lessons are chosen for them ( Ho & Kane, 2013 ).

7 Developers of the CLASS instrument identify a third dimension, Classroom Instructional Support . Factor analyses of data used in this study showed that items from this dimension formed a single construct with items from Emotional Support ( Blazar et al., 2015 ). Given theoretical overlap between Classroom Instructional Support and dimensions from the MQI instrument, we excluded these items from our work and focused only on Classroom Emotional Support.

8 We controlled for prior-year scores only on the high-stakes assessments and not on the low-stakes assessment for three reasons. First, including prior low-stakes test scores would reduce our full sample by more than 2,200 students. This is because the assessment was not given to students in District 4 in the first year of the study (N = 1,826 students). Further, an additional 413 students were missing fall test scores given that they were not present in class on the day it was administered. Second, prior-year scores on the high- and low-stakes test are correlated at 0.71, suggesting that including both would not help to explain substantively more variation in our outcomes. Third, sorting of students to teachers is most likely to occur based on student performance on the high-stakes assessments since it was readily observable to schools; achievement on the low-stakes test was not.

9 An alternative approach would be to specify teacher effects as fixed, rather than random, which relaxes the assumption that teacher assignment is uncorrelated with factors that also predict student outcomes ( Guarino, Maxfield, Reckase, Thompson, & Wooldridge, 2015 ). Ultimately, we prefer the random effects specification for three reasons. First, it allows us to separate out teacher effects from class effects by including a random effect for both in our model. Second, this approach allows us to control for a variety of variables that are dropped from the model when teacher fixed effects also are included. Given that all teachers in our sample remained in the same school from one year to the next, school fixed effects are collinear with teacher fixed effects. In instances where teachers had data for only one year, class characteristics and district-by-grade-by-year fixed effects also are collinear with teacher fixed effects. Finally, and most importantly, we find that fixed and random effects specifications that condition on students’ prior achievement and demographic characteristics return almost identical teacher effect estimates. When comparing teacher fixed effects to the “shrunken” empirical Bayes estimates that we employ throughout the paper, we find correlations between 0.79 and 0.99. As expected, the variance of the teacher fixed effects is larger than the variance of teacher random effects, differing by the shrinkage factor. When we instead calculate teacher random effects without shrinkage by averaging student residuals to the teacher level (i.e., “teacher average residuals”; see Guarino et al, 2015 for a discussion of this approach) they are almost identical to the teacher fixed effects estimates. Correlations are 0.99 or above across outcome measures, and unstandardized regression coefficients that retain the original scale of each measure range from 0.91 sd to 0.99 sd.
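
The shrinkage applied to the "teacher average residuals" discussed here can be sketched as follows: empirical Bayes multiplies each teacher's raw average residual by its estimated reliability, so estimates based on fewer students are pulled further toward zero (the grand mean). The variance components and effect sizes below are invented numbers for illustration, not estimates from the paper.

```python
def eb_shrink(raw_effect, n_students, var_teacher, var_noise):
    """Shrink a teacher's average residual toward zero by its reliability.

    reliability = var_teacher / (var_teacher + var_noise / n_students)
    """
    reliability = var_teacher / (var_teacher + var_noise / n_students)
    return reliability * raw_effect

# The same raw effect is shrunk more when based on fewer students:
print(round(eb_shrink(0.50, 5, 0.04, 0.80), 4))   # reliability 0.2 -> 0.1
print(round(eb_shrink(0.50, 80, 0.04, 0.80), 4))  # reliability 0.8 -> 0.4
```

This is also why the variance of the fixed-effect estimates exceeds that of the shrunken random effects: the two differ exactly by the reliability (shrinkage) factor.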

10 Adding prior survey responses to the education production function is not entirely analogous to doing so with prior achievement. While achievement outcomes have roughly the same reference group across administrations, the surveys do not. This is because survey items often asked about students’ experiences “in this class.” All three Behavior in Class items and all five Happiness in Class items included this or similar language, as did five of the 10 items from Self-Efficacy in Math . That said, moderate year-to-year correlations of 0.39, 0.38, and 0.53 for Self-Efficacy in Math , Happiness in Class , and Behavior in Class , respectively, suggest that these items do serve as important controls. Comparatively, year-to-year correlations for the high- and low-stakes tests are 0.75 and 0.77.

11 To estimate these scores, we specified the following hierarchical linear model separately for each school year: OBSERVATION_{lj,−t} = γ_j + ε_{ljt}, where the outcome is the observation score for lesson l from teacher j in years other than t, γ_j is a random effect for each teacher, and ε_{ljt} is the residual. For each domain of teaching practice and school year, we utilized standardized estimates of the teacher-level residual as each teacher's observation score in that year. Thus, scores vary across time. In the main text, we refer to these teacher-level residuals as OBSERVATION̂_{j,−t} rather than γ̂_j for ease of interpretation.

12 One explanation for these findings is that the relationship between teachers' Classroom Organization and students' Happiness in Class is non-linear. For example, it is possible that students' happiness increases as the class becomes more organized but then begins to decrease in classrooms with an intensive focus on order and discipline. To explore this possibility, we first examined the scatterplot of the relationship between teachers' Classroom Organization and teachers' ability to improve students' Happiness in Class . Next, we re-estimated equation (2) including quadratic, cubic, and quartic specifications of teachers' Classroom Organization scores. In both sets of analyses, we found no evidence of a non-linear relationship. Given our small sample size and limited statistical power, though, we suggest that this may be a focus of future research.
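
The specification check described in this footnote can be sketched on simulated data (the data-generating process below is invented; the actual analysis used the NCTE estimates and equation (2)). Because the polynomial specifications are nested, the residual sum of squares can only decrease as the degree rises; evidence of non-linearity would be a decrease larger than chance alone predicts.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
org = rng.normal(size=n)                                  # simulated Classroom Organization scores
happiness = -0.15 * org + rng.normal(scale=0.5, size=n)   # simulate a purely linear effect

# Fit linear through quartic specifications and compare residual sums of squares
rss = {}
for degree in (1, 2, 3, 4):
    coeffs = np.polyfit(org, happiness, degree)
    resid = happiness - np.polyval(coeffs, org)
    rss[degree] = float(resid @ resid)
print(rss)
```

With a purely linear data-generating process, the higher-degree fits buy only trivial reductions in residual variance, which is the pattern consistent with "no evidence of a non-linear relationship."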

13 In similar analyses in a subset of the NCTE data, Blazar (2015) did find a statistically significant relationship between Ambitious Mathematics Instruction and the low-stakes math test of 0.11 sd. The 95% confidence interval around that point estimate overlaps with the 95% confidence interval relating Ambitious Mathematics Instruction to the low-stakes math test in this analysis. Estimates of the relationship between the other three domains of teaching practice and low-stakes math test scores were of smaller magnitude and not statistically significant. Differences between the two studies likely emerge from the fact that we drew on a larger sample with an additional district and year of data, as well as slight modifications to our identification strategy.

14 When we adjusted p -values for estimates presented in Table 5 to account for multiple hypothesis testing using both the Šidák and Bonferroni algorithms ( Dunn, 1961 ; Šidák, 1967 ), relationships between Emotional Support and both Self-Efficacy in Math and Happiness in Class , as well as between Mathematical Errors and Self-Efficacy in Math remained statistically significant.
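
Both adjustments named in this footnote are simple enough to sketch directly: with m hypotheses, Bonferroni multiplies each p-value by m (capped at 1), while Šidák applies 1 − (1 − p)^m, which is always at least as powerful. The raw p-values below are hypothetical, not values from Table 5.

```python
def bonferroni_adjust(pvals):
    """Bonferroni-adjusted p-values: p_adj = min(1, m * p)."""
    m = len(pvals)
    return [min(1.0, m * p) for p in pvals]

def sidak_adjust(pvals):
    """Sidak-adjusted p-values: p_adj = 1 - (1 - p)^m."""
    m = len(pvals)
    return [1.0 - (1.0 - p) ** m for p in pvals]

raw = [0.002, 0.011, 0.049, 0.300]  # hypothetical raw p-values from four tests
print(bonferroni_adjust(raw))
print(sidak_adjust(raw))
```

At α = 0.05, the first two hypothetical tests survive both corrections while the third does not; Šidák's adjusted values are always weakly smaller than Bonferroni's.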

Contributor Information

David Blazar, Harvard Graduate School of Education.

Matthew A. Kraft, Brown University.

  • Achenbach TM, McConaughy SH, Howell CT. Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity. Psychological Bulletin. 1987;101(2):213.
  • Backes B, Hansen M. Working Paper 146. Washington, DC: National Center for Analysis of Longitudinal Data in Education Research; 2015. Teach for America impact estimates on nontested student outcomes. Retrieved from http://www.caldercenter.org/sites/default/files/WP&%20146.pdf
  • Bandura A, Barbaranelli C, Caprara GV, Pastorelli C. Multifaceted impact of self-efficacy beliefs on academic functioning. Child Development. 1996:1206–1222.
  • Baron J. Personality and intelligence. In: Sternberg RJ, editor. Handbook of human intelligence. New York: Cambridge University Press; 1982. pp. 308–351.
  • Blazar D. Effective teaching in elementary mathematics: Identifying classroom practices that support student achievement. Economics of Education Review. 2015;48:16–29.
  • Blazar D, Braslow D, Charalambous CY, Hill HC. Working Paper. Cambridge, MA: National Center for Teacher Effectiveness; 2015. Attending to general and content-specific dimensions of teaching: Exploring factors across two observation instruments. Retrieved from http://scholar.harvard.edu/files/david_blazar/files/blazar_et_al_attending_to_general_and_content_specific_dimensions_of_teaching.pdf
  • Borghans L, Duckworth AL, Heckman JJ, Ter Weel B. The economics and psychology of personality traits. Journal of Human Resources. 2008;43(4):972–1059.
  • Burchinal M, Howes C, Pianta R, Bryant D, Early D, Clifford R, Barbarin O. Predicting child outcomes at the end of kindergarten from the quality of pre-kindergarten teacher-child interactions and instruction. Applied Developmental Science. 2008;12(3):140–153.
  • Center on Great Teachers and Leaders. Databases on state teacher and principal policies. 2013. Retrieved from http://resource.tqsource.org/stateevaldb
  • Chan TC, Jarman D. Departmentalize elementary schools. Principal. 2004;84(1):70–72.
  • Chetty R, Friedman JN, Hilger N, Saez E, Schanzenbach D, Yagan D. How does your kindergarten classroom affect your earnings? Evidence from Project STAR. Quarterly Journal of Economics. 2011;126(4):1593–1660.
  • Chetty R, Friedman JN, Rockoff JE. Measuring the impacts of teachers I: Evaluating bias in teacher value-added estimates. American Economic Review. 2014;104(9):2593–2632.
  • Chin M, Goldhaber D. Working Paper. Cambridge, MA: National Center for Teacher Effectiveness; 2015. Exploring explanations for the “weak” relationship between value added and observation-based measures of teacher performance. Retrieved from http://cepr.harvard.edu/files/cepr/files/sree2015_simulation_working_paper.pdf?m=1436541369
  • Cohen DK. Teaching and its predicaments. Cambridge, MA: Harvard University Press; 2011.
  • Corcoran SP, Jennings JL, Beveridge AA. Teacher effectiveness on high- and low-stakes tests. 2012. Unpublished manuscript. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.269.5537&rep=rep1&type=pdf
  • Council of the Great City Schools. Beating the odds: Analysis of student performance on state assessments results from the 2012–2013 school year. Washington, DC: Author; 2013.
  • Darling-Hammond L. Getting teacher evaluation right: What really matters for effectiveness and improvement. New York: Teachers College Press; 2013.
  • Diener E. Subjective well-being: The science of happiness and a proposal for a national index. American Psychologist. 2000;55(1):34–43.
  • Downer JT, Rimm-Kaufman S, Pianta RC. How do classroom conditions and children's risk for school problems contribute to children's behavioral engagement in learning? School Psychology Review. 2007;36(3):413–432.
  • Duckworth A. Don’t grade schools on grit. The New York Times. 2016 Mar 26. Retrieved from http://www.nytimes.com/2016/03/27/opinion/sunday/dont-grade-schools-on-grit.html
  • Duckworth AL, Peterson C, Matthews MD, Kelly DR. Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology. 2007;92(6):1087–1101.
  • Duckworth AL, Quinn PD, Tsukayama E. What No Child Left Behind leaves behind: The roles of IQ and self-control in predicting standardized achievement test scores and report card grades. Journal of Educational Psychology. 2012;104(2):439–451.
  • Duckworth AL, Yeager DS. Measurement matters: Assessing personal qualities other than cognitive ability for educational purposes. Educational Researcher. 2015;44(4):237–251.
  • Dunn OJ. Multiple comparisons among means. Journal of the American Statistical Association. 1961;56(293):52–64.
  • Epstein D, Miller RT. Slow off the mark: Elementary school teachers and the crisis in science, technology, engineering, and math education. Washington, DC: Center for American Progress; 2011.
  • The Every Student Succeeds Act. Public Law 114-95, 114th Cong., 1st sess. 2015 Dec 10. Available at https://www.congress.gov/bill/114th-congress/senate-bill/1177/text
  • Farrington CA, Roderick M, Allensworth E, Nagaoka J, Keyes TS, Johnson DW, Beechum NO. Teaching adolescents to become learners: The role of non-cognitive factors in shaping school performance, a critical literature review. Chicago: University of Chicago Consortium on Chicago School Reform; 2012.
  • Field A. Discovering statistics using IBM SPSS statistics. 4th ed. London: SAGE Publications; 2013.
  • Gershenson S. Linking teacher quality, student attendance, and student achievement. Education Finance and Policy. 2016;11(2):125–149.
  • Goodman R. Psychometric properties of the strengths and difficulties questionnaire. Journal of the American Academy of Child & Adolescent Psychiatry. 2001;40(11):1337–1345.
  • Grossman P, McDonald M. Back to the future: Directions for research in teaching and teacher education. American Educational Research Journal. 2008;45:184–205.
  • Guarino CM, Maxfield M, Reckase MD, Thompson PN, Wooldridge JM. An evaluation of empirical Bayes’ estimation of value-added teacher performance measures. Journal of Educational and Behavioral Statistics. 2015;40(2):190–222.
  • Hafen CA, Hamre BK, Allen JP, Bell CA, Gitomer DH, Pianta RC. Teaching through interactions in secondary school classrooms: Revisiting the factor structure and practical application of the classroom assessment scoring system–secondary. The Journal of Early Adolescence. 2015;35(5–6):651–680.
  • Hamre B, Hatfield B, Pianta R, Jamil F. Evidence for general and domain-specific elements of teacher–child interactions: Associations with preschool children's development. Child Development. 2014;85(3):1257–1274.
  • Hamre BK, Pianta RC. Early teacher–child relationships and the trajectory of children's school outcomes through eighth grade. Child Development. 2001;72(2):625–638.
  • Hamre BK, Pianta RC, Downer JT, DeCoster J, Mashburn AJ, Jones SM, Brackett MA. Teaching through interactions: Testing a developmental framework of teacher effectiveness in over 4,000 classrooms. The Elementary School Journal. 2013;113(4):461–487.
  • Hanushek EA, Rivkin SG. Generalizations about using value-added measures of teacher quality. American Economic Review. 2010;100(2):267–271.
  • Hickman JJ, Fu J, Hill HC. Technical report: Creation and dissemination of upper-elementary mathematics assessment modules. Princeton, NJ: Educational Testing Service; 2012.
  • Hill HC, Blazar D, Lynch K. Resources for teaching: Examining personal and institutional predictors of high-quality instruction. AERA Open. 2015;1(4):1–23.
  • Hill HC, Blunk ML, Charalambous CY, Lewis JM, Phelps GC, Sleep L, Ball DL. Mathematical knowledge for teaching and the mathematical quality of instruction: An exploratory study. Cognition and Instruction. 2008;26(4):430–511.
  • Hill HC, Charalambous CY, Kraft MA. When rater reliability is not enough: Teacher observation systems and a case for the generalizability study. Educational Researcher. 2012;41(2):56–64.
  • Hill HC, Grossman P. Learning from teacher observations: Challenges and opportunities posed by new teacher evaluation systems. Harvard Educational Review. 2013;83(2):371–384.
  • Hill HC, Schilling SG, Ball DL. Developing measures of teachers’ mathematics knowledge for teaching. Elementary School Journal. 2004;105:11–30.
  • Hitt C, Trivitt J, Cheng A. When you say nothing at all: The predictive power of student effort on surveys. Economics of Education Review. 2016; 52 :105–119. [ Google Scholar ]
  • Ho AD, Kane TJ. The reliability of classroom observations by school personnel. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2013. [ Google Scholar ]
  • Jackson CK. NBER Working Paper No. 18624. Cambridge, MA: National Bureau for Economic Research; 2012. Non-cognitive ability, test scores, and teacher quality: Evidence from ninth grade teachers in North Carolina. [ Google Scholar ]
  • Jacob BA, Lefgren L. Can principals identify effective teachers? Evidence on subjective performance evaluation in education. Journal of Labor Economics. 2008; 20 (1):101–136. [ Google Scholar ]
  • Jennings JL, DiPrete TA. Teacher effects on social and behavioral skills in early elementary school. Sociology of Education. 2010; 83 (2):135–159. [ Google Scholar ]
  • John OP, Srivastava S. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of personality: Theory and research. 1999; 2 (1999):102–138. [ Google Scholar ]
  • Kane TJ, McCaffrey DF, Miller T, Staiger DO. Have we identified effective teachers? Validating measures of effective teaching using random assignment. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2013. [ Google Scholar ]
  • Kane TJ, Staiger DO. Gathering feedback for teaching. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2012. [ Google Scholar ]
  • King RB, McInerney DM, Ganotice FA, Villarosa JB. Positive affect catalyzes academic engagement: Cross-sectional, longitudinal, and experimental evidence. Learning and Individual Differences. 2015; 39 :64–72. [ Google Scholar ]
  • Kline P. An easy guide to factor analysis. London: Routledge; 1994. [ Google Scholar ]
  • Kraft MA, Grace S. Working Paper. Providence, RI: Brown University; 2016. Teaching for tomorrow’s economy? Teacher effects on complex cognitive skills and social-emotional competencies. Retrieved from http://scholar.harvard.edu/files/mkraft/files/teaching_for_tomorrows_economy_-_final_public.pdf . [ Google Scholar ]
  • Koedel C. Teacher quality and dropout outcomes in a large, urban school district. Journal of Urban Economics. 2008; 64 (3):560–572. [ Google Scholar ]
  • Ladd HF, Sorensen LC. Working Paper No. 112. Washington, D C: National Center for Analysis of Longitudinal in Education Research; 2015. Returns to teacher experience: Student achievement and motivation in middle school. Retrieved from http://www.caldercenter.org/sites/default/files/WP%20112%20Update_0.pdf . [ Google Scholar ]
  • Lampert M. Teaching problems and the problems of teaching. Yale University Press; 2001. [ Google Scholar ]
  • Lockwood JR, McCaffrey DF, Hamilton LS, Stecher B, Le V, Martinez JF. The sensitivity of value-added teacher effect estimates to different mathematics achievement measures. Journal of Educational Measurement. 2007; 44 (1):47–67. [ Google Scholar ]
  • Luckner AE, Pianta RC. Teacher-student interactions in fifth grade classrooms: Relations with children's peer behavior. Journal of Applied Developmental Psychology. 2011; 32 (5):257–266. [ Google Scholar ]
  • Lynch K, Chin M, Blazar D. Working Paper. Cambridge, MA: National Center for Teacher Effectiveness; 2015. Relationship between observations of elementary teacher mathematics instruction and student achievement: Exploring variability across districts. [ Google Scholar ]
  • Lyubomirsky S, King L, Diener E. The benefits of frequent positive affect: Does happiness lead to success? Psychological Bulletin. 2005; 131 (6):803–855. [ PubMed ] [ Google Scholar ]
  • Mashburn AJ, Pianta RC, Hamre BK, Downer JT, Barbarin OA, Bryant D, Howes C. Measures of classroom quality in prekindergarten and children's development of academic, language, and social skills. Child Development. 2008; 79 (3):732–749. [ PubMed ] [ Google Scholar ]
  • Mihaly K, McCaffrey DF, Staiger DO, Lockwood JR. A composite estimator of effective teaching. Seattle, WA: Measures of Effective Teaching Project, Bill and Melinda Gates Foundation; 2013. [ Google Scholar ]
  • Miles SB, Stipek D. Contemporaneous and longitudinal associations between social behavior and literacy achievement in a sample of low-income elementary school children. Child Development. 2006; 77 (1):103–117. [ PubMed ] [ Google Scholar ]
  • Miller CC. Why what you learned in preschool is crucial at work. The New York Times. 2015 Oct 16; Retrieved from http://www.nytimes.com/2015/10/18/upshot/how-the-modern-workplace-has-become-more-like-preschool.html?_r=0 .
  • Moffitt TE, Arseneault L, Belsky D, Dickson N, Hancox RJ, Harrington H, Houts R, Poulton R, Roberts BW, Ross S. A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences. 2011; 108 (7):2693–2698. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • National Council of Teachers of Mathematics. Curriculum and evaluation standards for school mathematics. Reston, VA: Author; 1989. [ Google Scholar ]
  • National Council of Teachers of Mathematics. Principles to actions: Ensuring mathematical success for all. Reston, VA: Author; 2014. [ Google Scholar ]
  • National Governors Association Center for Best Practices. Common core state standards for mathematics. Washington, DC: Author; 2010. [ Google Scholar ]
  • Papay JP. Different tests, different answers: The stability of teacher value-added estimates across outcome measures. American Educational Research Journal. 2011; 48 (1):163–193. [ Google Scholar ]
  • Papay JP. Refocusing the debate: Assessing the purposes and tools of teacher evaluation. Harvard Educational Review. 2012; 82 (1):123–141. [ Google Scholar ]
  • Pianta RC, Hamre BK. Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher. 2009; 38 (2):109–119. [ Google Scholar ]
  • Pianta R, La Paro K, Payne C, Cox M, Bradley R. The relation of kindergarten classroom environment to teacher, family, and school characteristics and child outcomes. Elementary School Journal. 2002; 102 :225–38. [ Google Scholar ]
  • Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. Second. Thousand Oaks, CA: Sage Publications; 2002. [ Google Scholar ]
  • Rockoff JE, Speroni C. Subjective and objective evaluations of teacher effectiveness. American Economic Review. 2010:261–266. [ Google Scholar ]
  • Rockoff JE, Staiger DO, Kane TJ, Taylor ES. Information and employee evaluation: evidence from a randomized intervention in public schools. American Economic Review. 2012; 102 (7):3184–3213. [ Google Scholar ]
  • Ruzek EA, Domina T, Conley AM, Duncan GJ, Karabenick SA. Using value-added models to measure teacher effects on students’ motivation and achievement. The Journal of Early Adolescence. 2015; 35 (5–6):852–882. [ Google Scholar ]
  • Segal C. Misbehavior, education, and labor market outcomes. Journal of the European Economic Association. 2013; 11 (4):743–779. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Šidák Z. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association. 1967; 62 (318):626–633. [ Google Scholar ]
  • Spearman C. “General Intelligence,” objectively determined and measured. The American Journal of Psychology. 1904; 15 (2):201–292. [ Google Scholar ]
  • Steinberg MP, Garrett R. Classroom composition and measured teacher performance: What do teacher observation scores really measure? Educational Evaluation and Policy Analysis. 2016; 38 (2):293–317. [ Google Scholar ]
  • Tabachnick BG, Fidell LS. Using multivariate statistics. 4. New York: Harper Collins; 2001. [ Google Scholar ]
  • Todd PE, Wolpin KI. On the specification and estimation of the production function for cognitive achievement. The Economic Journal. 2003; 113 (485):F3–F33. [ Google Scholar ]
  • Tremblay RE, Masse B, Perron D, LeBlanc M, Schwartzman AE, Ledingham JE. Early disruptive behavior, poor school achievement, delinquent behavior, and delinquent personality: Longitudinal analyses. Journal of Consulting and Clinical Psychology. 1992; 60 (1):64. [ PubMed ] [ Google Scholar ]
  • Tsukayama E, Duckworth AL, Kim B. Domain-specific impulsivity in school-age children. Developmental Science. 2013; 16 (6):879–893. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • U.S. Department of Education. A blueprint for reform: Reauthorization of the elementary and secondary education act. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development; 2010. [ Google Scholar ]
  • Usher EL, Pajares F. Sources of self-efficacy in school: Critical review of the literature and future directions. Review of Educational Research. 2008; 78 (4):751–796. [ Google Scholar ]
  • West MR, Kraft MA, Finn AS, Martin RE, Duckworth AL, Gabrieli CF, Gabrieli JD. Promise and paradox: Measuring students’ non-cognitive skills and the impact of schooling. Educational Evaluation and Policy Analysis. 2016; 38 (1):148–170. [ Google Scholar ]
  • Whitehurst GJ. Hard thinking on soft skills. Brookings Institute; Washington, DC: 2016. Retrieved from http://www.brookings.edu/research/reports/2016/03/24-hard-thinking-soft-skills-whitehurst . [ Google Scholar ]
  • Whitehurst GJ, Chingos MM, Lindquist KM. Evaluating teachers with classroom observations: Lessons learned in four districts. Brown Center on Education Policy at the Brookings Institute; Washington, DC: 2014. Retrieved from http://www.brookings.edu/~/media/research/files/reports/2014/05/13-teacher-evaluation/evaluating-teachers-with-classroom-observations.pdf . [ Google Scholar ]
  • Wigfield A, Meece JL. Math anxiety in elementary and secondary school students. Journal of Educational Psychology. 1988; 80 (2):210. [ Google Scholar ]
  • Zernike K. Testing for joy and grit? Schools nationwide push to measure students’ emotional skills. The New York Times. 2016 Feb 29; Retrieved from http://www.nytimes.com/2016/03/01/us/testing-for-joy-and-grit-schools-nationwide-push-to-measure-students-emotional-skills.html?_r=0 .

What is Teacher Research?



Teacher research is intentional, systematic inquiry by teachers with the goals of gaining insights into teaching and learning, becoming more reflective practitioners, effecting changes in the classroom or school, and improving the lives of children.... Teacher research stems from teachers' own questions and seeks practical solutions to issues in their professional lives.... The major components of teacher research are: conceptualization, in which teachers identify a significant problem or interest and determine relevant research questions; implementation, in which teachers collect and analyze data; and interpretation, in which teachers examine findings for meaning and take appropriate actions.... Teacher research is systematic in that teachers follow specific procedures and carefully document each step of the process. — "The Nature of Teacher Research" by Barbara Henderson, Daniel R. Meier, and Gail Perry

Teacher Research Resources

The resources below provide early childhood education professionals with tools to learn more about the teacher research process, explore accounts of teachers conducting research in their own classrooms, and connect with others in the field interested in teacher research.

Resources from  Voices of Practitioners

The Nature of Teacher Research Barbara Henderson, Daniel R. Meier, and Gail Perry

The Value of Teacher Research: Nurturing Professional and Personal Growth through Inquiry Andrew J. Stremmel

How To Do Action Research In Your Classroom: Lessons from the Teachers Network Leadership Institute Frances Rust and Christopher Clark

Resources From Other Publications


American Educational Research Association (AERA) AERA encourages scholarly inquiry and promotes the dissemination and application of research results. It includes special interest groups (SIGs) devoted to early childhood and teacher research. Potential members can join AERA and then choose the Action Research or Teachers as Researchers SIGs (See “AR SIG, AERA” and “TR SIG, AERA” below.) AERA holds an annual conference with presentations of early childhood teacher research among many other sessions. www.aera.net

Action Research Special Interest Group, American Educational Research Association (AR SIG, AERA) This group builds community among those engaged in action research and those teaching others to do action research. It offers a blog, links to action research communities, and lists of action research books, journals, and conferences. http://sites.google.com/site/aeraarsig/

Teacher as Researcher Special Interest Group, American Educational Research Association (TAR SIG, AERA) This group consists of AERA members who are teacher educators and preK–12th grade educators; it aims to present teacher research at the AERA conference and elsewhere nationally. Early childhood teacher research is an important part of the group. http://www.aera.net/SIG126/TeacherasResearcherSIG126/tabid/11980/Default.aspx

The Center for Practitioner Research (CFPR) of the National College of Education at National-Louis University CFPR aims to affect education through collaborative scholarship contributing to knowledge, practice, advocacy, and policy in education. The website includes selected action research resources, including links to websites, book lists, conference information, and its online journal  Inquiry in Education . http://nlu.nl.edu/cfpr

Educational Action Research Educational Action Research  is an international journal concerned with exploring the dialogue between research and practice in educational settings. www.tandf.co.uk/journals/reac

Let’s Collaborate, Teacher Research from Access Excellence @ the National Health Museum This site includes useful supports for engaging in teacher research, including examples of K–12 research focused on science education. It offers information on starting a project, examples of teacher research projects, and links to online resources. www.accessexcellence.org/LC/TL/AR/

National Association of Early Childhood Teacher Educators (NAECTE) NAECTE promotes the professional growth of early childhood teacher educators and advocates for improvements to the field. NAECTE’s  Journal of Early Childhood Teacher Education  occasionally publishes teacher research articles, including a special issue focused on teacher research (Volume 31, Issue 3). NAECTE also provides ResearchNets, a forum to foster educational research with teacher research presentations. www.naecte.org

Networks: An On-line Journal for Teacher Research at the University of Wisconsin A venue for sharing reports of action research and discussion on inquiry for teachers at all levels, this journal provides space for discussion of inquiry as a tool to learn about practice and improve its effectiveness. http://journals.library.wisc.edu/index.php/networks

Self-Study Teacher Research: Improving your Practice through Collaborative Inquiry, Student Study Guide from Sage Publications This web-based student study site accompanies a book of the same name; it provides a wealth of information on its own for teachers or teacher educators who conduct studies of their own teaching practice. http://www.sagepub.com/samaras/default.htm

Teacher Action Research from George Mason University This site offers information about the teacher research process, including resources for carrying out teacher research studies. It also contains discussion of current teacher research issues and a comparison of teacher research to other forms of educational research and professional development. http://gse.gmu.edu/research/tr

Teacher Inquiry Communities Network from the National Writing Project (NWP) This network offers information on a mini-grant program supporting an inquiry stance toward teaching and learning. It includes information about the grant program, program reports, and examples of projects (including early elementary projects). http://www.nwp.org/cs/public/print/programs/tic

Teaching and Teacher Education This journal aims to enhance theory, research, and practice in teaching and teacher education through the publication of primary research and review papers. http://www.journals.elsevier.com/teaching-and-teacher-education


The Promises and Challenges of Artificial Intelligence for Teachers: A Systematic Review of Research

  • Original Paper
  • Open access
  • Published: 25 March 2022
  • Volume 66 , pages 616–630, ( 2022 )


  • Ismail Celik   ORCID: orcid.org/0000-0002-5027-8284 1 ,
  • Muhterem Dindar 2 ,
  • Hanni Muukkonen 1 &
  • Sanna Järvelä 2  


This study provides an overview of research on teachers’ use of artificial intelligence (AI) applications and machine learning methods to analyze teachers’ data. Our analysis showed that AI offers teachers several opportunities for improved planning (e.g., by defining students’ needs and familiarizing teachers with such needs), implementation (e.g., through immediate feedback and teacher intervention), and assessment (e.g., through automated essay scoring) of their teaching. We also found that teachers have various roles in the development of AI technology. These roles include acting as models for training AI algorithms and participating in AI development by checking the accuracy of AI automated assessment systems. Our findings further underlined several challenges in AI implementation in teaching practice, which provide guidelines for developing the field.


Introduction

Artificial intelligence (AI) has been penetrating our everyday lives in various ways, such as through web search engines, mobile apps, and healthcare systems (Sánchez-Prieto et al., 2020). The swift advancement of AI technologies also has important implications for learning and teaching. In fact, AI-supported instruction is expected to transform education (Zawacki-Richter et al., 2019), and considerable investments have been made to integrate AI into teaching and learning (Cope et al., 2020). A significant obstacle to effective integration, however, is the profit orientation of most current AI applications in education. AI developers often know little about the learning sciences and lack the pedagogical knowledge needed to implement AI effectively in teaching (Luckin & Cukurova, 2019). Moreover, they often fail to consider the expectations of AI's end users in education, that is, teachers (Cukurova & Luckin, 2018; Luckin & Cukurova, 2019). Teachers are among the most crucial stakeholders in AI-based teaching (Seufert et al., 2020), so their views, experiences, and expectations need to be considered for the successful adoption of AI in schools (Holmes et al., 2019). Specifically, to make AI pedagogically relevant, we need a better understanding of the advantages it offers teachers and the challenges teachers face in AI-based teaching. Yet little attention has been paid to AI-based education from the perspective of teachers, and teachers' skills in the pedagogical use of AI, along with their roles in the development of AI, have been largely overlooked in the literature (Langran et al., 2020; Seufert et al., 2020). To address these research gaps, this study explores the promises and challenges of AI in teaching practice that have surfaced in research. Since the field of AI-based instruction is still developing, the study can contribute to the development of comprehensive AI-based instruction systems that allow teachers to participate in the design process.

Educational Use of Artificial Intelligence

There have been several waves of emerging educational technologies over the past few decades, and the latest is artificial intelligence (AI; Bonk & Wiley, 2020). The term artificial intelligence was first coined in 1956 by John McCarthy (Russell & Norvig, 2010). Baker and Smith (2019) pointed out that AI does not refer to a single technology but is defined as "computers [that] perform cognitive tasks, usually associated with human minds, particularly learning and problem-solving" (p. 10). AI is an umbrella term covering diverse analytical methods, which can be classified as machine learning, neural networks, and deep learning (Aggarwal, 2018). Machine learning is the capacity of a computer algorithm to learn from data and make decisions without being explicitly programmed (Popenici & Kerr, 2017). Although numerous machine learning models exist, the two most widely used are supervised and unsupervised learning (Alloghani et al., 2020). Supervised machine learning algorithms build a model from labeled sample data (training data), while unsupervised algorithms are created from untagged data (Alenezi & Faisal, 2020); in other words, an unsupervised model works on its own to uncover patterns that were previously undetected by humans.
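The distinction between the two model families can be seen in a minimal sketch, assuming scikit-learn and NumPy are available: a supervised classifier learns from labeled (tagged) points, while a clustering algorithm must discover the same structure from the features alone. The data here are synthetic, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters of points in 2-D.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels ("tags") for the supervised case

# Supervised: the model builds on labeled training data.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.0, 4.0]]))  # classify a new, unseen point

# Unsupervised: the model finds structure in untagged data on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])  # cluster assignments discovered without any labels
```

The supervised model can only answer questions its labels define, whereas the clustering model recovers the two groups without ever seeing `y`.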

AI is used in education in different ways. For instance, AI is integrated into several instructional technologies, such as chatbots (Clark, 2020), intelligent tutoring systems, and automated grading systems (Heffernan & Heffernan, 2014). These AI-based systems offer opportunities to all stakeholders throughout the learning and instructional process (Chen et al., 2020). Previous research on the educational use of AI has demonstrated its support for student collaboration and the personalization of learning experiences (Luckin et al., 2016), scheduling of learning activities and adaptive feedback on learning processes (Koedinger et al., 2012), reducing teachers' workload in collaborative knowledge construction (Roll & Wylie, 2016), predicting the probability of learners dropping out of school or being admitted into school (Popenici & Kerr, 2017), profiling students' backgrounds (Cohen et al., 2017), monitoring student progress (Gaudioso et al., 2012; Swiecki et al., 2019), and summative assessment such as automated essay scoring (Okada et al., 2019; Vij et al., 2020; Yuan et al., 2020). Despite these opportunities, the educational use of AI lags behind expectations, unlike in other sectors (e.g., finance and health). To achieve successful AI implementation in education, various stakeholders, teachers in particular, should participate in AI creation, development, and integration (Langran et al., 2020; Qin et al., 2020).

The Roles of Teachers in AI-based Education

The evolution of education toward digital forms does not imply that fewer teachers will be needed in the future (Dillenbourg, 2016). Rather than speculating about whether AI will replace teachers, it is more reasonable to ask what advantages AI offers teachers and how those advantages can change teachers' roles in the classroom (Hrastinski et al., 2019). Salomon (1996) made a similar point during the early stages of educational technology by stressing the need to consider how learning occurs both through and with computers. As for AI, Holstein et al. (2019) suggested that AI-based machines may eventually help teachers perform what Dillenbourg (2013) called their orchestrator role in the learning and teaching process. For AI to truly help teachers in this way, however, it must first learn effective orchestration of learning and teaching from teachers' data. This is because effective teaching depends on teachers' capability to implement appropriate pedagogical methods in their instruction (Tondeur et al., 2020), and their pedagogically meaningful, productive teaching incidents can serve as models for AI-based educational systems (Prieto et al., 2018). That is, the data collected from learning settings orchestrated by teachers form the foundation of AI-based teaching; such data may, for example, help researchers understand when and how teaching is progressing effectively (Luckin & Cukurova, 2019; Luckin et al., 2016). Because teachers' role in supplying data on the features of effective learning is crucial to the development of AI algorithms, we investigated what kinds of data have been collected from teachers and what roles teachers have played in the creation of AI algorithms.

To effectively integrate AI-based education in schools, teachers must be empowered to implement such integration by endowing them with the requisite knowledge, skills, and attitudes (Häkkinen et al., 2017 ; Kirschner, 2015 ; Seufert et al., 2020 ). However, teachers’ AI-related skills have not yet been sufficiently defined because the potential of AI in education has not yet been fully exploited (Luckin et al., 2016 ). To explore teachers’ AI-related knowledge, skills, and attitudes, their engagement with AI-based systems within their teaching setting has to be investigated in detail (Dillenbourg, 2016 ; Seufert et al., 2020 ). Therefore, in this study, we reviewed empirical research on how teachers interacted with AI-based systems and how they participated in the development of AI-based education systems. We believe that our synthesis of empirical research on the topic will contribute to the identification of AI-related teaching skills and the effective implementation of AI-based education in schools with the support of teachers.

This study explored the perspectives and roles of teachers in AI-based research through a systematic review of the latest research on the topic. Our specific research questions (RQs) are as follows:

RQ1—What was the distribution over time of the studies that examined teachers’ AI use?

RQ2—What data were collected from teachers in the studies on AI-based education?

RQ3—What were the roles of teachers in AI-based research?

RQ4—What advantages did AI offer teachers?

RQ5—What challenges did teachers face when using AI for education?

RQ6—Which AI methods were utilized in AI-based research that teachers participated in?

Table 1 below lists these RQs with their corresponding rationales.

Manuscript Search and Selection Criteria

In reviews of research, several methods are used to select the studies to be reviewed. Typically, studies published in the leading journals of a given domain are selected from databases such as ProQuest (Heitink et al., 2016), the Education Resources Information Center (ERIC), and the Social Science Citation Index (SSCI) (Akçayır & Akçayır, 2017; Kucuk et al., 2013). For this review, we selected English-language scientific studies on teachers' AI use published in journals indexed in the Web of Science (WoS) database in the 20 years up to 14 September 2020. We chose WoS because the field tags (e.g., topic and research area) of the studies are easy to access there (Luor et al., 2008). We used the following search terms: "artificial intelligence," "deep learning," "reinforcement learning," "supervised learning," "unsupervised learning," "neural network," "ANN," "natural language processing," "fuzzy logic," "decision trees," "ensemble," "Bayesian," "clustering," and "regularization." To narrow the search, we combined these with "teacher," "teacher education," "teacher professional development," "K-12," "middle school*," "high school*," "elementary school*," and "kindergarten*." The search terms were based on the main concepts of AI in education identified in past studies and literature reviews (Baran, 2014; Zawacki-Richter et al., 2019). Figure 1 presents our study search procedure.
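Assuming the two term lists were combined with Boolean operators in a Web of Science topic search, the query construction can be sketched as below; the helper function and the exact field syntax (`TS=`) are illustrative assumptions, not taken from the paper.

```python
# The AI-related terms and the teacher/school terms listed in the text.
AI_TERMS = [
    "artificial intelligence", "deep learning", "reinforcement learning",
    "supervised learning", "unsupervised learning", "neural network", "ANN",
    "natural language processing", "fuzzy logic", "decision trees",
    "ensemble", "Bayesian", "clustering", "regularization",
]
CONTEXT_TERMS = [
    "teacher", "teacher education", "teacher professional development",
    "K-12", "middle school*", "high school*", "elementary school*",
    "kindergarten*",
]

def build_query(group_a, group_b):
    """Join each term group with OR, then AND the two groups together."""
    quote = lambda t: f'"{t}"'
    return (
        "TS=(" + " OR ".join(map(quote, group_a)) + ") AND "
        "TS=(" + " OR ".join(map(quote, group_b)) + ")"
    )

query = build_query(AI_TERMS, CONTEXT_TERMS)
print(query)
```

Structuring the query this way requires every hit to match at least one AI term *and* at least one teacher/school term, which is what narrows the initial pool.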

Figure 1. Flow chart for the selection of articles

In our first search, we found 751 studies. Next, we checked them to see if they met our inclusion and exclusion criteria. Our inclusion criteria were as follows: (a) empirical studies on AI in pre-service and in-service teacher education and on in-service teachers’ use of AI; (b) studies on AI applications and algorithms (e.g., personal tutors, automated scoring, personal assistant; decision trees, and artificial neural networks) for teaching or analyzing teachers’ data; and (c) studies on data collected from in-service K-12 teachers or pre-service teachers. We excluded editorials, reviews, and studies conducted at the higher education level. After we applied the criteria, 44 articles remained suitable for inclusion in this study.

Data Coding and Analysis

The publication year of each article was recorded to determine the distribution of the studies over time (RQ1). For RQ2, the data collected from teachers in previous AI-based research were assigned to the following categories and category numbers: self-report (1), video (2), interview (3), observation (4), feedback/discourse (5), grading (6), eye tracking (7), audiovisual/accelerometry (8), and log file (9). We qualitatively analyzed the content of the 44 articles to determine teachers’ roles in AI-based instruction (RQ3) and the advantages and challenges of AI for teachers (RQ4 and RQ5, respectively). Rather than applying a preliminary or template coding scheme, which would have unnecessarily limited the data by forcing it into pre-determined categories (Şimşek & Yıldırım, 2011), we used an open coding process (Akçayır & Akçayır, 2017; Williamson, 2015) with the following steps: (1) familiarize yourself with the whole set of articles; (2) choose a document at random, consider its primary meaning, and note your thoughts about that meaning in the margin; (3) list all your thoughts on the subject, combine similar thoughts, create three columns for key, unique, and leftover thoughts, and place each thought in the appropriate column; (4) code the text; (5) find the phrases that best capture your thoughts and turn them into categories; (6) decide on an abbreviation for each category and alphabetize the abbreviations; (7) incorporate the final codes and perform the initial analysis; and (8) recode the studies if needed. To classify the AI methods (RQ6), we drew on previous literature reviews of AI use in diverse areas such as higher education, medicine, and business (Borges et al., 2020; Contreras & Vehi, 2018). To ensure the reliability of the coding process, we used investigator triangulation (Denzin, 2017). Accordingly, the first author coded the articles independently and then shared the codes with the second author. We resolved disagreements by checking the code list against the relevant studies, and we updated and renamed some categories. Finally, we recoded the studies using the final code list.
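
Investigator triangulation of this kind is often complemented by a chance-corrected agreement statistic such as Cohen’s kappa. The sketch below is an illustration only; the role labels and codes are hypothetical, not taken from our data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each coder's category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical role codes assigned to six articles by two coders
coder1 = ["model", "grader", "model", "input", "model", "grader"]
coder2 = ["model", "grader", "input", "input", "model", "grader"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.75
```

A kappa well above zero indicates agreement beyond chance; disagreements (here, the third article) are the cases the coders then negotiate.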

Results and Discussion

Distribution of the Studies

(RQ1—What was the distribution over time of the studies that examined teachers’ AI use?)

Our analysis indicated that the first study on teachers’ AI use was published in 2004. Of the 44 studies we reviewed, 22 were published in 2018 or later. The use of educational AI applications has been forecast to increase (Qin et al., 2020; Zawacki-Richter et al., 2019), and our finding that publications on AI-based teaching rose after 2017 is consistent with that forecast. Figure 2 presents the research trend on AI and teachers.

Figure 2. Number of articles published by year

Figure 2 further indicates that research on teachers’ AI use in education has intensified in the last four years, which suggests that AI-based instruction by teachers is likely to become more common in the near future. Consistent with this, Chen et al. (2020) found that studies published between 2015 and 2019 accounted for 70% of all the studies on “AI” and “education” indexed in Web of Science and Google Scholar since 2010. AI technologies are becoming more widely available, and educational software companies worldwide are increasingly able to create AI-based applications (Renz & Hilbig, 2020). Accordingly, teachers’ use of AI in the teaching process seems likely to grow, and more studies will be conducted on this topic.

On the other hand, there are still fewer studies on AI use in education than in areas such as medicine and business (Borges et al., 2020; Luckin & Cukurova, 2019). The educational technology (EdTech) market is also undergoing digital transformation much more slowly than other markets, partly because of the resistance of decision-makers such as educators, teachers, and traditional textbook publishers to the use of AI (EdTechXGlobal Report, 2016). Given this resistance, more AI research is needed to demonstrate the pedagogical uses of AI in instructional processes and to speed up the uptake of AI technologies in education.

Data Types Collected from Teachers

(RQ2—What data were collected from teachers in the studies on AI-based education?)

Self-reported data were the most common type of data collected from teachers in the AI-based education studies. Researchers collected self-reported data to predict teacher-related variables such as engagement, performance, and teaching quality. In these studies, machine learning algorithms were used instead of conventional regression analysis to reveal nonlinear relationships between variables of teaching practice. For instance, Wang et al. (2020) collected data from 165 early childhood teachers to better understand indicators of quality teacher–child interaction. Similarly, Yoo and Rho (2020) predicted teachers’ self-reported job satisfaction with a machine learning technique. In some AI studies, teachers’ grades of student assignments or essays were used to train AI algorithms; for example, Yuan et al. (2020), in developing an automated scoring approach, needed expert teachers’ grades to validate their AI-based scoring system. A notable finding from our review is that self-reported data accounted for nearly 44% of all data obtained from teachers (Fig. 3).
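
A toy illustration of why machine learning is preferred over conventional regression here: when a relationship is nonlinear (the inverted-U between workload and satisfaction below is hypothetical, used only for illustration), the linear correlation can be exactly zero even though the relationship is strong, while a single tree-style split already recovers it:

```python
# Hypothetical inverted-U data: satisfaction peaks at moderate workload
workload     = [1, 2, 3, 4, 5, 6, 7]
satisfaction = [2, 4, 6, 7, 6, 4, 2]

def pearson(xs, ys):
    """Linear (Pearson) correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A linear model sees no relationship at all...
print(round(pearson(workload, satisfaction), 2))  # 0.0

# ...but a single tree-style split on workload exposes a strong one
mid  = [s for w, s in zip(workload, satisfaction) if 3 <= w <= 5]
rest = [s for w, s in zip(workload, satisfaction) if w < 3 or w > 5]
print(round(sum(mid) / len(mid), 2), sum(rest) / len(rest))  # 6.33 3.0
```

Decision trees and neural networks generalize this idea to many interacting predictors, which is why the reviewed studies favor them for teacher survey data.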

Figure 3

In 11 of the studies that we reviewed, teachers provided more than one type of data. The data were mostly collected during or after teachers’ instruction. Our review findings highlight the crucial role of teachers in the instructional process (e.g., Huang et al., 2010; Lu, 2019; McCarthy et al., 2016; Pelham et al., 2020). For example, Schwarz et al. (2018) presented an online learning environment that uses machine learning to inform teachers about learners’ critical moments in collaborative learning by sending the teachers warnings. In their study, they observed how the teacher guided several groups at different times in a mathematics classroom, and they also collected interview data from the teachers about the effectiveness of the online environment. Our review indicates a significant gap in physiological data collection in AI studies with teachers: only one of the reviewed studies collected physiological data, namely eye-tracking and audiovisual/accelerometry data from sensors worn by the teachers (Prieto et al., 2018). In fact, physiological data can provide process-oriented, objective metrics regarding the critical moments that impact the quality of teaching or learning in an educational activity (Järvelä et al., 2021).

The Roles of Teachers in AI-based Research

(RQ3—What were the roles of teachers in AI-based research?)

Our open-coding analysis indicates that teachers have taken on seven roles in AI research. These roles and their descriptions are shown in Table 2. As the table shows, the most common role (f = 18) was serving as models to train AI algorithms, which underlines the pivotal part teachers play in the development of AI-based education systems. For instance, Kelly et al. (2018) trained AI algorithms to automatically detect teachers’ authentic questions in real-life classrooms: the teachers’ effective authentic questions were fed to the system as features during training, and when the researchers then tested the system in a different classroom, it successfully identified authentic questions.

Another role teachers played in AI research was providing big data to AI systems that forecast aspects of their professional development. In this line of research, teachers mostly provided data from which AI systems predicted professional-development variables such as job satisfaction, performance, and engagement. For example, in one study, 10,642 teachers answered a survey, and AI was then used to determine predictors of teacher engagement (Buddhtha et al., 2019). As in other areas, big data have played an important role in education, and teachers are considered among the most important sources of big data (Ruiz-Palmero et al., 2020). Our findings imply that AI can effectively inform teachers about their professional development.

This study also found that teachers involved in AI research provided input on students’ characteristics for AI-based implementations. For example, Nikiforos et al. (2020) investigated the automatic detection of learners’ aggressive behavior in a virtual learning community: the AI system used teacher observations of students’ behavioral characteristics to predict which students were more likely to bully others in the online community. Our review further revealed that teachers have taken on the role of grading assignments and essays to test the accuracy of AI algorithms in grading student performance. In such studies, the accuracy of the AI-based assessment was determined with the help of experienced teachers’ assessments (Bonneton-Botté et al., 2020; Gaudioso et al., 2012; McCarthy et al., 2016; Yuan et al., 2020).

In some AI-based education studies, teachers determined the criteria for components of AI-based systems and assessments. For example, Huang et al. (2010) investigated the effect of a machine-learning-based assistance tool for enhancing ICT literacy; in their study, experienced teachers guided the AI system by defining the criteria for effective and timely feedback. In other studies, teachers provided pedagogical guidance on the selection of materials for AI-based implementation. For example, Fitzgerald et al. (2015) used AI to present learning content of varying text complexity to early-grade students; text complexity in the AI system was determined based on teachers’ pedagogical guidance. Teachers also commented on the usability and design of AI-based technologies (Burstein et al., 2004). Finally, our results revealed a notable absence of pre-service teachers as participants in AI use studies: we found no studies in which pre-service teachers actively participated in or interacted with AI technologies.

Advantages of AI for Teachers

(RQ4—What advantages did AI offer teachers?)

Our review of the selected empirical studies on teachers’ AI use revealed several advantages of AI. The open coding produced three categories of advantages: planning, implementation, and assessment (see Table 3).

Planning

The advantages of AI related to planning involved providing information on students’ backgrounds and assisting teachers in deciding on learning content during lesson planning. In one study, an AI system gave teachers background information on students’ risk factors for delinquency, such as aggression (Pelham et al., 2020). In terms of assistance in planning learning content, Dalvean and Enkhbayar (2018) used machine learning to classify the readability of English fiction texts; the results suggested that such classification can help English teachers plan course contents around readability features (Table 4).

Implementation

According to our review (see Table 3), the most prominent advantage of AI was timely monitoring of learning processes (f = 12). For example, Su et al. (2014) developed a sensor-based learning concentration detection system using AI in a classroom environment; the system allowed teachers to monitor how well students were concentrating on lesson activities. Such AI-based monitoring can help teachers provide immediate feedback (Burstein et al., 2004; Huang et al., 2010, 2011) and quickly perform necessary interventions (Nikiforos et al., 2020; Schwarz et al., 2018). For instance, teachers were able to discover critical moments in group learning and provide adaptive interventions for all the groups (Schwarz et al., 2018). AI systems can thus decrease the burden on teachers by providing them feedback and assisting them with planning interventions and monitoring students; several studies particularly emphasized these contributions (Lu, 2019; Ma et al., 2020). We therefore suggest that a reduced teaching load may be another significant advantage of AI systems in education. For example, researchers reported that teachers benefited from an AI-based peer tutor recommender system and saved time for other activities (Ma et al., 2020).

Our findings further revealed that AI can enable teachers to select or adapt the optimal learning activity based on AI feedback. For example, in Bonneton-Botté et al. (2020), teachers decided, based on the feedback they received from AI, to implement letter- and number-writing exercises for students with a low graphomotor level. According to our synthesis, AI can also make the teaching process more interesting for teachers: teachers reported that AI tutors made teaching more enjoyable by breaking the monotony of the classroom (McCarthy et al., 2016). We also found that AI algorithms can increase opportunities for teacher–student interaction by capturing and analyzing data from productive moments (Lamb & Premo, 2015) and tracking student progress (Farhan et al., 2018).

Assessment

According to our review, AI helps teachers with exam automation, essay scoring, and decision-making on student performance. An automated essay scoring system can not only significantly improve the efficiency of essay scoring but also make scoring more objective (Yuan et al., 2020), which is why researchers are interested in using AI affordances to build automated systems. Another important use of AI-based applications in assessment is detecting plagiarism in student essays (Dawson et al., 2020); several existing AI-based systems (e.g., Turnitin) allow teachers to check the authenticity of essays submitted by students in graduate courses (Alharbi & Al-Hoorie, 2020). We coded seven studies on the advantage of exam automation and essay scoring. Six of these studies investigated the scoring of student-related outcomes (Annabestani et al., 2020; Huang et al., 2010; Tepperman et al., 2010; Vij et al., 2020; Yang, 2012; Yuan et al., 2020), and one used an AI-based system to score teachers’ open-ended responses as an assessment of usable mathematics teaching knowledge (Kersting et al., 2014). We suggest that more studies be conducted on the automatic scoring of teacher-related variables such as technological and pedagogical knowledge. Given that classroom video analysis (CVA) assessment can score and assess teacher knowledge (Kersting et al., 2014), CVA could be used in both in-service and pre-service teacher education, particularly with micro-teaching methods. For example, natural language processing methods (Bywater et al., 2019) could use existing CVA scoring schemes to detect teachers’ verbal communication patterns in conveying instructional content to students, and machine vision methods (Ozdemir & Tekin, 2016) could be applied to teachers’ video recordings to observe patterns in their body posture. Such methods may provide valuable feedback to novice teachers for developing their teaching skills.
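
Validating an automated scorer against teacher grades, as in the studies above, is commonly done with an ordinal agreement statistic such as quadratic weighted kappa. The following sketch uses hypothetical 1–5 essay scores (not data from any reviewed study):

```python
def quadratic_weighted_kappa(scores_a, scores_b):
    """Agreement between two sets of ordinal scores, penalizing large
    disagreements more heavily (quadratic weights). 1.0 = perfect."""
    n = len(scores_a)
    # Mean squared difference between paired scores (observed disagreement)
    obs = sum((a - b) ** 2 for a, b in zip(scores_a, scores_b)) / n
    # Mean squared difference over all cross-pairs (chance disagreement)
    exp = sum((a - b) ** 2 for a in scores_a for b in scores_b) / (n * n)
    return 1 - obs / exp

# Hypothetical 1-5 essay scores from an AI system and a teacher
ai_scores      = [3, 4, 2, 5, 3, 4]
teacher_scores = [3, 4, 3, 5, 2, 4]
print(round(quadratic_weighted_kappa(ai_scores, teacher_scores), 2))  # 0.82
```

Because off-by-two disagreements are weighted four times as heavily as off-by-one, the metric suits graded rubrics better than raw percent agreement.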

AI could also provide teachers feedback on the effectiveness of their instructional practice (Farhan et al., 2018; Lamb & Premo, 2015). Pedagogically meaningful aspects of teaching can be modeled automatically using multiple data sources and AI (Dillenbourg, 2016; Prieto et al., 2018). Through these models, teachers can improve their instructional practices, and the pedagogically effective models can in turn train AI algorithms to become more sophisticated.

AI technologies were also used to predict or assess teacher performance and outcomes. Researchers predicted pre-service and in-service teachers’ professional-development outcomes, such as course achievement, using machine learning algorithms, which are well suited to revealing complex, nonlinear relationships. Seven of these studies collected data from in-service teachers, while two obtained data from pre-service teachers (Akgün & Demir, 2018; Demir, 2015).

In addition, Cohen et al. (2017) conducted a study on samples with and without autism spectrum disorder. The results revealed that a machine learning tool can provide accurate and informative data for diagnosing autism spectrum disorder; in their study, teachers commented on the accuracy of the tool.

Figure 4 illustrates the roles of teachers in AI research and the advantages of AI for teachers, providing an overview of what AI research expects from teachers and what AI can offer them.

Figure 4. Advantages of AI and teacher roles in AI research

Challenges in AI Use by Teachers

(RQ5—What challenges did teachers face when using AI for education?)

The challenges in teachers’ use of AI are summarized in Table 3. One of the most frequently observed challenges is the limited technical capacity of AI; for example, AI may fail to score material that combines graphics or figures with text. Fitzgerald et al. (2015) reported that an AI-based system could not assess the complexity of texts that included images. The limited reliability of AI algorithms was another considerable challenge: automated writing evaluation technologies that use AI algorithms must be improved to provide trustworthy evaluations for teachers (Qian et al., 2020). The inefficiency of AI systems in assessment and evaluation relates more to validity than to reliability, as AI-based scoring may sometimes evaluate performance improperly (Lu, 2019). Our review further indicated that AI systems may be so context-dependent that using them across educational settings becomes challenging. For example, an AI algorithm designed to detect specific behavior in a specific online learning environment cannot work in different languages (Nikiforos et al., 2020); such limitations can also stem from cultural differences.

Teachers’ lack of technological knowledge (Chiu & Chai, 2020) and schools’ lack of technical infrastructure (McCarthy et al., 2016) are two other challenges in integrating AI into education. AI-based feedback has also been reported to be slow at times, which can lead teachers to become bored with using AI (McCarthy et al., 2016). Although adaptive and personalized feedback is important for reducing teachers’ workload, AI systems are not always capable of giving different kinds of feedback based on students’ needs (Burstein et al., 2004). As a result, current AI systems fall short of meeting teachers’ needs for effective feedback (Fig. 5).

Figure 5. AI methods in the reviewed studies

AI Methods in Research

(RQ6—Which AI methods were utilized in AI-based research that teachers participated in?)

We coded the AI methods in the studies following previous reviews (Borges et al., 2020; Contreras & Vehi, 2018; Saa et al., 2019). Artificial neural networks (ANN) were the most frequently used AI method (f = 16) in the education studies involving teachers. ANN is a machine learning method widely used in business, economics, engineering, and higher education (Musso et al., 2013), and in the reviewed studies it was commonly applied to data collected from teachers. For example, Alzahrani et al. (2020) investigated the relationship between thermal comfort and teacher performance, using ANN analysis to relate teachers’ productivity to classroom temperature. Decision trees, another machine learning algorithm, were also frequently used in the reviewed studies. For instance, Gaudioso et al. (2012) used decision tree algorithms to support teachers in detecting moments in which students were having problems in an adaptive educational system. Similarly, a review of predictive machine learning methods for university students’ academic performance found that the decision tree algorithm was the most commonly used (Saa et al., 2019).
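
To make the decision-tree idea concrete: a tree is built by repeatedly choosing the feature threshold that best separates the classes. The depth-1 sketch below finds such a root split for flagging struggling students; the feature (`failed_attempts`), labels, and data are hypothetical, not taken from Gaudioso et al.:

```python
# Hypothetical training pairs: (failed_attempts, needs_help) per student
data = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1), (2, 1), (1, 0)]

def best_stump(samples):
    """Find the threshold minimizing misclassifications when predicting
    'needs help' for feature >= threshold -- the root split a
    decision-tree learner would evaluate."""
    best = None
    for t in sorted({x for x, _ in samples}):
        # Count errors if we flag every student with feature >= t
        errors = sum((x >= t) != bool(y) for x, y in samples)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best

threshold, errors = best_stump(data)
print(threshold, errors)  # 2 1 -> flag students with >= 2 failed attempts
```

A full decision tree recurses on each side of the split; the resulting if/then rules are easy for teachers to inspect, which helps explain the method’s popularity in educational settings.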

We also investigated the subject domains of teachers’ AI-based instruction. Studies with teachers from various domains accounted for 16% of all the research (see Fig. 6) and generally had larger sample sizes than studies with teachers from a single domain (e.g., Buddhtha et al., 2019). Primary education and English language teaching appeared to be the domains where teachers used AI the most; studies on automated essay scoring and adaptive feedback were conducted in English language courses. We found that 46% of all the studies we reviewed were performed in fields related to science, technology, engineering, and mathematics (STEM), while a much smaller percentage were performed in the social sciences and early childhood fields combined. This may be because teachers in STEM fields are more accustomed to technology use (Chai et al., 2020).

Figure 6. Distribution of studies by subject domain

Conclusions and Future Research

Owing to the growing interest in AI, the number of studies on teachers’ use of AI has increased in the last few years, and more are needed to fully understand teachers’ AI use. As AI continues to gain popularity in education, more research will undoubtedly focus on AI use in teachers’ instruction. Our synthesis shows that there has been little interest in investigating AI in pre-service teacher education; we therefore recommend more empirical studies on pre-service teachers’ AI use. Developing AI awareness and skills among pre-service teachers may facilitate better adoption of AI-based teaching in future classrooms. As Valtonen et al. (2021) have shown, teachers’ and students’ use of emerging technologies can make a major contribution to the development of 21st-century practices in schools.

Another gap we found in our review is the limited variety of methods and data channels used in AI-based systems: AI-based systems in education do not yet exploit the potential of multimodal data. Most of the AI applications that teachers use rely only on self-reported and/or observation data, while different data modalities can create more opportunities to understand teaching and learning processes (Järvelä & Bannert, 2021). Enriching AI systems with other data types (e.g., physiological data) may give a better understanding of the different layers of teaching and learning, and thus help teachers to plan effective learning interventions, provide timely feedback, and conduct more accurate assessments of students’ cognitive and emotional states during instruction. Because multimodal data can help to model more efficient and effective AI systems for education, we conclude that further work is necessary to improve the capabilities of AI systems with multimodal data.

Our review revealed that teachers have limited involvement in the development of AI-based education systems. Although in some studies, experienced teachers were recruited to train AI algorithms, further efforts are needed to involve a wider population of teachers in developing AI systems. Such involvement should go beyond training AI algorithms and involve teachers in the crucial decision-making processes on how (not) to develop AI systems for better teaching. For their part, AI developers and software companies should consider involving teachers in the development process to a greater extent.

This study showed that AI has generally been reported as beneficial to teachers’ instruction. Teachers can take advantage of AI in their planning, implementation, and assessment work. AI assists them in identifying their students’ needs so that they can determine the most suitable learning content and activities. During activities such as collaborative tasks, AI can help teachers monitor their students in a timely manner and give them immediate feedback (e.g., Swiecki et al., 2019). After instruction, AI-based automated scoring systems can help teachers with assessment (e.g., Kersting et al., 2014). These advantages mainly reduce teachers’ workload and help them focus their attention on critical issues such as timely intervention and assessment (Vij et al., 2020). However, many of the reviewed studies were conducted to predict outcome variables (e.g., performance, engagement, and job satisfaction) through machine learning algorithms (Yoo & Rho, 2020). More studies are needed to enable AI systems to provide information and feedback on how learning processes temporally unfold during teachers’ instruction; teachers will then be able to interact with actual AI systems and better understand the opportunities they offer.

This study revealed several limitations of and challenges to teachers’ use of AI, such as its limited reliability, technical capacity, and applicability across settings. Future empirical research is necessary to address the challenges reported in this study. We conclude that AI systems that are technically and pedagogically capable of contributing to quality education in diverse learning settings have yet to be achieved. Achieving this objective will require multidisciplinary collaboration among multiple stakeholders (e.g., AI developers, pedagogical experts, teachers, and students). We hope that this review will serve as a springboard for such collaboration.

*References marked with an asterisk indicate articles included in the review.

Aggarwal, C. C. (2018). Neural networks and deep learning . Springer. https://doi.org/10.1007/978-3-319-94463-0

Akçayır, M., & Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educational Research Review, 20 , 1–11. https://doi.org/10.1016/j.edurev.2016.11.002

*Akgün, E., & Demir, M. (2018). Modeling course achievements of elementary education teacher candidates with artificial neural networks.  International Journal of Assessment Tools in Education ,  5 (3), 491–509. https://doi.org/10.21449/ijate.444073

Alenezi, H. S., & Faisal, M. H. (2020). Utilizing crowdsourcing and machine learning in education: Literature review.  Education and Information Technologies , 1-16. https://doi.org/10.1007/s10639-020-10102-w

Alharbi, M. A., & Al-Hoorie, A. H. (2020). Turnitin peer feedback: Controversial vs. non-controversial essays. International Journal of Educational Technology in Higher Education, 17 , 1–17. https://doi.org/10.1186/s41239-020-00195-1

Alloghani, M., Al-Jumeily, D., Mustafina, J., Hussain, A., & Aljaaf, A. J. (2020). A systematic review on supervised and unsupervised machine learning algorithms for data science. In  Supervised and Unsupervised Learning for Data Science  (pp. 3–21). Springer, Cham. https://doi.org/10.1007/978-3-030-22475-2_1

Alzahrani, H., Arif, M., Kaushik, A., Goulding, J., & Heesom, D. (2020). Artificial neural network analysis of teachers’ performance against thermal comfort. International Journal of Building Pathology and Adaptation . https://doi.org/10.1108/IJBPA-11-2019-0098

Annabestani, M., Rowhanimanesh, A., Mizani, A., & Rezaei, A. (2020). Fuzzy descriptive evaluation system: Real, complete and fair evaluation of students. Soft Computing, 24 (4), 3025–3035. https://doi.org/10.1007/s00500-019-04078-0

Baker, T., & Smith, L. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges . Retrieved from Nesta Foundation website: https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf

Baran, E. (2014). A review of research on mobile learning in teacher education. Journal of Educational Technology & Society, 17 (4), 17–32.

*Bonneton-Botté, N., Fleury, S., Girard, N., Le Magadou, M., Cherbonnier, A., Renault, M., ... & Jamet, E. (2020). Can tablet apps support the learning of handwriting? An investigation of learning outcomes in kindergarten classroom.  Computers & Education ,  151 , 103831. https://doi.org/10.1016/j.compedu.2020.103831

Borges, A. F., Laurindo, F. J., Spínola, M. M., Gonçalves, R. F., & Mattos, C. A. (2020). The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions.  International Journal of Information Management , 102225. https://doi.org/10.1016/j.ijinfomgt.2020.102225

Bonk, C. J., & Wiley, D. A. (2020). Preface: Reflections on the waves of emerging learning technologies. Educational Technology Research and Development, 68 (4), 1595–1612. https://doi.org/10.1007/s11423-020-09809-x

Buddhtha, S., Natasha, C., Irwansyah, E., & Budiharto, W. (2019). Building an artificial neural network with backpropagation algorithm to determine teacher engagement based on the indonesian teacher engagement index and presenting the data in a Web-Based GIS. International Journal of Computational Intelligence Systems, 12 (2), 1575–1584. https://doi.org/10.2991/ijcis.d.191101.003

Burstein, J., Chodorow, M., & Leacock, C. (2004). Automated essay evaluation: The Criterion online writing service. AI Magazine, 25 (3), 27–27. https://doi.org/10.1609/aimag.v25i3.1774

Bywater, J. B., Chiu J. l., Hong J., & Sankaranarayanan,V. (2019). The teacher responding tool: Scaffolding the teacher practice of responding to student ideas in mathematics classrooms.  Computers & Education   139 , 16-30. https://doi.org/10.1016/j.compedu.2019.05.004

Chai, C. S., Jong, M., & Yan, Z. (2020). Surveying Chinese teachers’ technological pedagogical STEM knowledge: A pilot validation of STEM-TPACK survey. International Journal of Mobile Learning and Organisation, 14 (2), 203–214. https://doi.org/10.1504/IJMLO.2020.106181

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8 , 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510

Chiu, T. K., & Chai, C. S. (2020). Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability, 12 (14), 5568. https://doi.org/10.3390/su12145568

Clark, D. (2020).  Artificial intelligence for learning: How to use AI to support employee development . Kogan Page Publishers.

*Cohen, I. L., Liu, X., Hudson, M., Gillis, J., Cavalari, R. N., Romanczyk, R. G., ... & Gardner, J. M. (2017). Level 2 Screening with the PDD Behavior Inventory: Subgroup Profiles and Implications for Differential Diagnosis.  Canadian Journal of School Psychology ,  32 (3-4), 299-315. https://doi.org/10.1177/0829573517721127

Contreras, I., & Vehi, J. (2018). Artificial intelligence for diabetes management and decision support: Literature review. Journal of Medical Internet Research, 20 (5), e10775. https://doi.org/10.2196/10775

Cope, B., Kalantzis, M., & Searsmith, D. (2020). Artificial intelligence for education: Knowledge and its assessment in AI-enabled learning ecologies.  Educational Philosophy and Theory , 1–17.

Cukurova, M., & Luckin, R. (2018). Measuring the impact of emerging technologies in education: A pragmatic approach. Springer, Cham. https://discovery.ucl.ac.uk/id/eprint/10068777

*Dalvean, M., & Enkhbayar, G. (2018). Assessing the readability of fiction: A corpus analysis and readability ranking of 200 English fiction texts.  Linguistic Research ,  35 , 137–170. https://doi.org/10.17250/khisli.35.201809.006

Dawson, P., Sutherland-Smith, W., & Ricksen, M. (2020). Can software improve marker accuracy at detecting contract cheating? A pilot study of the Turnitin authorship investigate alpha. Assessment & Evaluation in Higher Education, 45 (4), 473–482.

*Demir, M. (2015). Predicting pre-service classroom teachers’ civil servant recruitment examination’s educational sciences test scores using artificial neural networks.  Educational Sciences: Theory & Practice ,  15 (5). https://doi.org/10.12738/estp.2015.5.0018

Denzin, N. K. (2017).  The research act: A theoretical introduction to sociological methods . Transaction publishers.

Dillenbourg, P. (2013). Design for classroom orchestration. Computers & Education, 69, 485–492. https://doi.org/10.1016/j.compedu.2013.04.013 .

Dillenbourg, P. (2016). The evolution of research on digital education. International Journal of Artificial Intelligence in Education, 26 (2), 544–560. https://doi.org/10.1007/s40593-016-0106-z

EdTechXGlobal. (2016). EdTechXGlobal report 2016—Global EdTech industry report: a map for the future of education and work. Retrieved from http://ecosystem.edtechxeurope.com/2016-edtech-report

Farhan, M., Jabbar, S., Aslam, M., Ahmad, A., Iqbal, M. M., Khan, M., & Maria, M. E. A. (2018). A real-time data mining approach for interaction analytics assessment: IoT based student interaction framework. International Journal of Parallel Programming, 46 (5), 886–903. https://doi.org/10.1007/s10766-017-0553-7

Fitzgerald, J., Elmore, J., Koons, H., Hiebert, E. H., Bowen, K., Sanford-Moore, E. E., & Stenner, A. J. (2015). Important text characteristics for early-grades text complexity. Journal of Educational Psychology, 107 (1), 4. https://doi.org/10.1037/a0037289

Gaudioso, E., Montero, M., & Hernandez-Del-Olmo, F. (2012). Supporting teachers in adaptive educational systems through predictive models: A proof of concept. Expert Systems with Applications, 39 (1), 621–625. https://doi.org/10.1016/j.eswa.2011.07.052

Häkkinen, P., Järvelä, S., Mäkitalo-Siegl, K., Ahonen, A., Näykki, P., & Valtonen, T. (2017). Preparing teacher students for 21st century learning practices (PREP 21): A framework for enhancing collaborative problem solving and strategic learning skills. Teachers and Teaching: Theory and Practice, 23 (1), 25–41. https://doi.org/10.1080/13540602.2016.1203772

Heffernan, N. T., & Heffernan, C. L. (2014). The ASSISTments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. International Journal of Artificial Intelligence in Education, 24 (4), 470–497. https://doi.org/10.1007/s40593-014-0024-x

Heitink, M. C., Van der Kleij, F. M., Veldkamp, B. P., Schildkamp, K., & Kippers, W. B. (2016). A systematic review of prerequisites for implementing assessment for learning in classroom practice. Educational Research Review, 17 , 50–62. https://doi.org/10.1016/j.edurev.2015.12.002

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and Implications for Teaching and Learning . Center for Curriculum Redesign.

Holstein, K., McLaren, B. M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity. Journal of Learning Analytics, 6 (2), 27–52. https://doi.org/10.18608/jla.2019.62.3

Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., ... & Utterberg, M. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigital Science and Education, 1 (2), 427–445. https://doi.org/10.1007/s42438-019-00046-x

*Huang, C. J., Liu, M. C., Chang, K. E., Sung, Y. T., Huang, T. H., Chen, C. H., ... & Chang, T. Y. (2010). A learning assistance tool for enhancing ICT literacy of elementary school students. Journal of Educational Technology & Society, 13 (3), 126–138.

Huang, C. J., Wang, Y. W., Huang, T. H., Chen, Y. C., Chen, H. M., & Chang, S. C. (2011). Performance evaluation of an online argumentation learning assistance agent. Computers & Education, 57 (1), 1270–1280. https://doi.org/10.1016/j.compedu.2011.01.013

Järvelä, S., & Bannert, M. (2021). Temporal and adaptive processes of regulated learning – What can multimodal data tell? Learning and Instruction, 72. https://doi.org/10.1016/j.learninstruc.2019.101268

Järvelä, S., Malmberg, J., Haataja, E., Sobocinski, M., & Kirschner, P. A. (2021). What multimodal data can tell us about the students’ regulation of their learning process. Learning and Instruction, 101203. https://doi.org/10.1016/j.learninstruc.2019.04.004

Kelly, S., Olney, A. M., Donnelly, P., Nystrand, M., & D’Mello, S. K. (2018). Automatically measuring question authenticity in real-world classrooms. Educational Researcher, 47 (7), 451–464. https://doi.org/10.3102/0013189X18785613

Kersting, N. B., Sherin, B. L., & Stigler, J. W. (2014). Automated scoring of teachers’ open-ended responses to video prompts: Bringing the classroom-video-analysis assessment to scale. Educational and Psychological Measurement, 74 (6), 950–974. https://doi.org/10.1177/0013164414521634

Kirschner, P. A. (2015). Do we need teachers as designers of technology enhanced learning? Instructional Science, 43 (2), 309–322. https://doi.org/10.1007/s11251-015-9346-9

Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The Knowledge-Learning-Instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36 (5), 757–798. https://doi.org/10.1111/j.1551-6709.2012.01245.x

Kucuk, S., Aydemir, M., Yildirim, G., Arpacik, O., & Goktas, Y. (2013). Educational technology research trends in Turkey from 1990 to 2011. Computers & Education, 68 , 42–50. https://doi.org/10.1016/j.compedu.2013.04.016

Lamb, R., & Premo, J. (2015). Computational modeling of teaching and learning through application of evolutionary algorithms. Computation, 3 (3), 427–443. https://doi.org/10.3390/computation3030427

Langran, E., Searson, M., Knezek, G., & Christensen, R. (2020). AI in teacher education. In Society for Information Technology & Teacher Education International Conference (pp. 735–740). Association for the Advancement of Computing in Education (AACE). https://www.learntechlib.org/p/215821/

Lu, X. (2019). An empirical study on the artificial intelligence writing evaluation system in China CET. Big Data, 7 (2), 121–129. https://doi.org/10.1089/big.2018.0151

Luckin, R., & Cukurova, M. (2019). Designing educational technologies in the age of AI: A learning sciences-driven approach. British Journal of Educational Technology, 50 (6), 2824–2838. https://doi.org/10.1111/bjet.12861

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education . Pearson Education.

Luor, T., Johanson, R. E., Lu, H. P., & Wu, L. L. (2008). Trends and lacunae for future computer assisted learning (CAL) research: An assessment of the literature in SSCI journals from 1998–2006. Journal of the American Society for Information Science and Technology, 59 (8), 1313–1320. https://doi.org/10.1002/asi.20836

Ma, Z. H., Hwang, W. Y., & Shih, T. K. (2020). Effects of a peer tutor recommender system (PTRS) with machine learning and automated assessment on vocational high school students’ computer application operating skills. Journal of Computers in Education, 7 (3), 435–462. https://doi.org/10.1007/s40692-020-00162-9

McCarthy, T., Rosenblum, L. P., Johnson, B. G., Dittel, J., & Kearns, D. M. (2016). An artificial intelligence tutor: A supplementary tool for teaching and practicing braille. Journal of Visual Impairment & Blindness, 110 (5), 309–322. https://doi.org/10.1177/0145482X1611000503

Musso, M. F., Kyndt, E., Cascallar, E. C., & Dochy, F. (2013). Predicting general academic performance and identifying the differential contribution of participating variables using artificial neural networks. Frontline Learning Research, 1 (1), 42–71. https://doi.org/10.14786/flr.v1i1.13

Nikiforos, S., Tzanavaris, S., & Kermanidis, K. L. (2020). Virtual learning communities (VLCs) rethinking: Influence on behavior modification—bullying detection through machine learning and natural language processing. Journal of Computers in Education, 7 , 531–551. https://doi.org/10.1007/s40692-020-00166-5

Okada, A., Whitelock, D., Holmes, W., & Edwards, C. (2019). e-Authentication for online assessment: A mixed-method study. British Journal of Educational Technology, 50 (2), 861–875.

Ozdemir, O., & Tekin, A. (2016). Evaluation of the presentation skills of the pre-service teachers via fuzzy logic. Computers in Human Behavior, 61 , 288–299. https://doi.org/10.1016/j.chb.2016.03.013

Pelham, W. E., Petras, H., & Pardini, D. A. (2020). Can machine learning improve screening for targeted delinquency prevention programs? Prevention Science, 21 (2), 158–170. https://doi.org/10.1007/s11121-019-01040-2

Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12 (1), 1–13. https://doi.org/10.1186/s41039-017-0062-8

Prieto, L. P., Sharma, K., Kidzinski, Ł, Rodríguez-Triana, M. J., & Dillenbourg, P. (2018). Multimodal teaching analytics: Automated extraction of orchestration graphs from wearable sensor data. Journal of Computer Assisted Learning, 34 (2), 193–203. https://doi.org/10.1111/jcal.12232

Qian, L., Zhao, Y., & Cheng, Y. (2020). Evaluating China’s automated essay scoring system iWrite. Journal of Educational Computing Research, 58 (4), 771–790. https://doi.org/10.1177/0735633119881472

Qin, F., Li, K., & Yan, J. (2020). Understanding user trust in artificial intelligence-based educational systems: Evidence from China. British Journal of Educational Technology, 51 (5), 1693–1710. https://doi.org/10.1111/bjet.12994

Renz, A., & Hilbig, R. (2020). Prerequisites for artificial intelligence in further education: Identification of drivers, barriers, and business models of educational technology companies. International Journal of Educational Technology in Higher Education, 17 , 1–21. https://doi.org/10.1186/s41239-020-00193-3

Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26 (2), 582–599. https://doi.org/10.1007/s40593-016-0110-3

Ruiz-Palmero, J., Colomo-Magaña, E., Ríos-Ariza, J. M., & Gómez-García, M. (2020). Big data in education: Perception of training advisors on its use in the educational system. Social Sciences, 9 (4), 53. https://doi.org/10.3390/socsci9040053

Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach. Pearson Education.

Saa, A. A., Al-Emran, M., & Shaalan, K. (2019). Factors affecting students’ performance in higher education: A systematic review of predictive data mining techniques. Technology, Knowledge and Learning, 24 (4), 567–598. https://doi.org/10.1007/s10758-019-09408-7

Salomon, G. (1996). Studying novel learning environments as patterns of change. In S. Vosniadou, E. De Corte, R. Glaser, & H. Mandl (Eds.), International perspectives on the design of technology supported learning. NJ: Lawrence Erlbaum Associates.

Swiecki, Z., Ruis, A. R., Gautam, D., Rus, V., & Williamson Shaffer, D. (2019). Understanding when students are active-in-thinking through modeling-in-context. British Journal of Educational Technology, 50 (5), 2346–2364. https://doi.org/10.1111/bjet.12869

Sánchez-Prieto, J. C., Cruz-Benito, J., Therón Sánchez, R., & García Peñalvo, F. J. (2020). Assessed by machines: Development of a TAM-based tool to measure ai-based assessment acceptance among students. International Journal of Interactive Multimedia and Artificial Intelligence, 6 (4), 80–86. https://doi.org/10.9781/ijimai.2020.11.009

Schwarz, B. B., Prusak, N., Swidan, O., Livny, A., Gal, K., & Segal, A. (2018). Orchestrating the emergence of conceptual learning: A case study in a geometry class. International Journal of Computer-Supported Collaborative Learning, 13 (2), 189–211. https://doi.org/10.1007/s11412-018-9276-z

Seufert, S., Guggemos, J., & Sailer, M. (2020). Technology-related knowledge, skills, and attitudes of pre-and in-service teachers: The current situation and emerging trends. Computers in Human Behavior, 115 , 106552. https://doi.org/10.1016/j.chb.2020.106552

Şimşek, H., & Yıldırım, A. (2011). Qualitative research methods in social sciences. Seçkin Publishing.

Su, Y. N., Hsu, C. C., Chen, H. C., Huang, K. K., & Huang, Y. M. (2014). Developing a sensor-based learning concentration detection system. Engineering Computations, 31 (2), 216–230. https://doi.org/10.1108/EC-01-2013-0010

Tepperman, J., Lee, S., Narayanan, S., & Alwan, A. (2010). A generative student model for scoring word reading skills. IEEE Transactions on Audio, Speech, and Language Processing, 19 (2), 348–360. https://doi.org/10.1109/TASL.2010.2047812

Tondeur, J., Scherer, R., Siddiq, F., & Baran, E. (2020). Enhancing pre-service teachers’ technological pedagogical content knowledge (TPACK): A mixed-method study. Educational Technology Research and Development, 68 (1), 319–343. https://doi.org/10.1007/s11423-019-09692-1

Valtonen, T., Hoang, N., Sointu, E., Näykki, P., Virtanen, A., Pöysä-Tarhonen, J., Häkkinen, P., Järvelä, S., Mäkitalo, K., & Kukkonen, J. (2021). How pre-service teachers perceive their 21st-century skills and dispositions: A longitudinal perspective. Computers in Human Behavior, 116 , 106643. https://doi.org/10.1016/j.chb.2020.106643

Vij, S., Tayal, D., & Jain, A. (2020). A machine learning approach for automated evaluation of short answers using text similarity based on WordNet graphs. Wireless Personal Communications, 111 (2), 1271–1282. https://doi.org/10.1007/s11277-019-06913-x

Wang, S., Hu, B. Y., & LoCasale-Crouch, J. (2020). Modeling the nonlinear relationship between structure and process quality features in Chinese preschool classrooms. Children and Youth Services Review, 109 , 104677. https://doi.org/10.1016/j.childyouth.2019.104677

Williamson, M. (2015). “I wasn’t reinventing the wheel, just operating the tools”: The evolution of the writing processes of online first-year composition students (Unpublished doctoral dissertation). Arizona State University.

Yang, C. H. (2012). Fuzzy fusion for attending and responding assessment system of affective teaching goals in distance learning. Expert Systems with Applications, 39 (3), 2501–2508. https://doi.org/10.1016/j.eswa.2011.08.102

*Yoo, J. E., & Rho, M. (2020). Exploration of predictors for Korean teacher job satisfaction via a machine learning technique, Group Mnet. Frontiers in Psychology, 11, 441. https://doi.org/10.3389/fpsyg.2020.00441

*Yuan, S., He, T., Huang, H., Hou, R., & Wang, M. (2020). Automated Chinese essay scoring based on deep learning. CMC-Computers Materials & Continua, 65 (1), 817–833. https://doi.org/10.32604/cmc.2020.010471

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 39. https://doi.org/10.1186/s41239-019-0171-0


Open Access funding provided by University of Oulu including Oulu University Hospital.

Author information

Authors and Affiliations

Learning and Learning Processes Research Unit, Faculty of Education, University of Oulu, 90014, Oulu, Finland

Ismail Celik & Hanni Muukkonen

Learning and Educational Technology Research Unit, Faculty of Education, University of Oulu, 90014, Oulu, Finland

Muhterem Dindar & Sanna Järvelä


Corresponding author

Correspondence to Ismail Celik .

Ethics declarations

Human and Animal Rights

This study involved no human participants or animals.

Informed Consent

This study is a literature review; therefore, no informed consent was needed.

Conflict of Interest

There are no potential conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Celik, I., Dindar, M., Muukkonen, H. et al. The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends 66 , 616–630 (2022). https://doi.org/10.1007/s11528-022-00715-y


Accepted: 07 March 2022

Published: 25 March 2022

Issue Date: July 2022

DOI: https://doi.org/10.1007/s11528-022-00715-y


  • Artificial intelligence in education
  • Systematic review
  • Teacher professional development
  • Technology integration
  • Open access
  • Published: 27 December 2022

How does technology challenge teacher education?

  • Lina Kaminskienė 1 ,
  • Sanna Järvelä 2 &
  • Erno Lehtinen 1 , 3  

International Journal of Educational Technology in Higher Education volume 19, Article number: 64 (2022)


This paper presents an overview of the challenges and demands related to teachers’ digital skills and the integration of technology into educational content and processes. It raises a debate about how technologies have created new skills gaps in pre-service and in-service teacher training and how this has affected traditional forms of teacher education. Accordingly, it discusses which interventions might be applicable in different contexts to address these challenges. It is argued that technologies should be viewed both as a field in which new competences should be developed and, at the same time, as a method for developing learning environments for teacher students.

Introduction

In the last few decades, national authorities and multinational organisations have emphasised the importance of increasing the use of information and communication technologies (ICT) in schools and universities (Flecknoe, 2002 ; Roztocki et al., 2019 ; UNESCO ICT Competency Framework for Teachers, 2018 ). This poses a double challenge for teacher education: determining how new technologies can be used to improve the quality of learning experiences that student teachers receive during their university studies and identifying what kinds of new skills future teachers will need for teaching in technologically rich school environments. Several of the arguments in favour of greater use of ICT in schools are based on the belief that due to the general digitisation of the workforce, it is vital that students acquire good digital skills at an early age. However, it has also been argued that the use of ICT will be engaging for students and can thus result in better learning outcomes (Cheung & Slavin, 2013 ; Gloria, 2015 ). A number of large meta-analyses have shown that intervention studies utilising technology have positive effects on students’ motivation and learning (Fadda et al., 2022 ; Wouters et al., 2013 ).

However, when large-scale national and international evaluation studies have examined the relationship between the use of ICT and student achievement, the results have been mixed. Researchers have reported that re-analyses of large international evaluation studies, including the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA), indicate that there is either no relationship or a negative relationship between the frequency of ICT use in teaching and students’ achievements (Eickelmann et al., 2016 ; Papanastasiou et al., 2003 ).
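The mixed correlational findings above can be made concrete with a small sketch. The data below are synthetic and purely illustrative (not drawn from TIMSS or PISA); the point is only that the raw frequency of ICT use can correlate negatively with achievement when no account is taken of how the technology is used:

```python
# Illustrative only: synthetic data showing how frequency of ICT use can
# correlate negatively with achievement when pedagogical quality varies.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical hours of classroom ICT use per week vs. an achievement score:
ict_hours = [1, 2, 3, 5, 6, 8, 10, 12]
scores    = [520, 535, 530, 510, 505, 495, 480, 470]

print(round(pearson_r(ict_hours, scores), 2))  # strongly negative here
```

A single correlation like this says nothing about *how* the technology was used, which is exactly the gap the following paragraph discusses.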

The mixed results regarding the impact of ICT suggest that there are qualitative differences between the ways in which technology is implemented. In intervention studies, ICT applications have typically been used with careful planning, with extensive professional development for teachers conducting the experiment and with continuous support from researchers, whereas large-scale evaluation studies have focused on regular classrooms without such support. When technology is implemented in these latter situations, the pedagogical quality of the technology application is determined by the teachers’ knowledge and skills. However, there is a great deal of variation in the knowledge and competences of teachers when it comes to using technology in their classrooms (Valtonen et al., 2016 ). This highlights the importance of the in-service training of teachers while at the same time calling attention to pre-service teachers’ opportunities to acquire the competencies necessary to implement technology in their classrooms.

What does the development of technology mean for teacher education and how does it challenge traditional forms and content of the discipline? During the past few decades, a large number of policy documents and scientific studies have addressed this issue (Bakir, 2015 ). Different perspectives can be taken into account when discussing the influence of technology on teacher education. Several studies have examined the skills that will be necessary to apply technology to pedagogical practice in the future. This topic has been explored from several perspectives, such as teacher students’ technology literacy or their knowledge of technological pedagogical content (Mishra & Koehler, 2006 ). To ensure that future teachers possess adequate technical skills, standards and recommendations have been developed regarding the content of teacher education programmes. Rather than simply focusing on basic technological skills, the main emphasis has been on the knowledge and skills associated with the pedagogical use of technology (Erstad et al., 2021 ). Moreover, technology can provide many opportunities to develop novel methods to improve the quality of teacher education, such as the development of new methods for conducting research in the field of teacher education. In this special issue, these issues are addressed from a variety of theoretical and methodological perspectives.

Technology integration and content in teacher education

Given the many ways in which technology can be used in education, pre-service teachers’ pedagogical and technical competences in using ICT in teaching have different dimensions. For instance, Tondeur et al. ( 2017 ) developed a test to measure pre-service teachers’ ICT abilities and applied it to a large sample of Belgian teacher candidates. According to the findings, there are two dimensions to ICT competences: (1) competencies for supporting students’ use of ICT in class and (2) competencies for using ICT to create instructional materials. Some studies have also reported barriers that hinder the organisation of adequate teacher education to develop these skills, such as faculty beliefs and skills (Bakir, 2015 ; Polly et al., 2010 ).

The experiences pre-service teachers acquire during their teacher studies have also been shown to influence how willing and skilled they are when it comes to integrating technology in their classrooms (Agyei & Voogt, 2011 ). Additionally, studies have shown that the opportunity for teacher students to observe advanced technology applications in real-world settings is important for their future professional development (Gromseth et al., 2010 ). Hence, it is not sufficient to take formal courses in ICT or educational technology without applying these skills in the classroom.

Finnish pre-service teacher education stresses not only ICT skills but also teachers’ competences, such as strategic learning skills and collaboration competences. Häkkinen et al. ( 2017 ) identified five profiles among Finnish first-year pre-service teachers (N = 872) using perceptions of the teachers’ strategic learning skills and collaboration dispositions and investigated what background variables explained membership to those profiles. The most robust factor explaining membership in the profiles was life satisfaction. For example, pre-service teachers in a profile group with high strategic learning skills and high collaboration dispositions showed the highest anticipated life satisfaction after 5 years. Their results demonstrate the need to develop both ICT skills and learning competences in pre-service teacher education.

Use of technological innovations in organising teacher education

An analysis of the nature of expertise and how practice can optimally enhance it has demonstrated the importance of deliberate practice (Ericsson et al., 1993 ), defined as intensive practice that is purposefully focused on developing specific aspects of performance. To achieve this, it is necessary to have the opportunity to practise the most demanding aspects of performance with a large number of repetitions. Feedback from a tutor or coach also plays an important role in deliberate practice (Ericsson, 1993 ). Although deliberate practice has already been applied in some teacher education studies (e.g. Bronkhorst, 2011 ), its main aspects, repeated practice of challenging tasks and informed feedback, have been difficult to apply in traditional teacher education settings. Recent studies have shown that technology can contribute to the development of training methods that better reflect the main principles of deliberate practice.

There is a long tradition of using video technology in teacher education; such technology was first applied in a systematic manner in the 1970s (Nagro & Cornelius, 2013 ). Since then, a number of video-assisted instructional designs have been developed to provide teacher students with opportunities to learn from expert teachers, reflect on their own teaching behaviour and practise professional skills that would not otherwise be possible without this technology. Digital videos that are easy to use and various web platforms that facilitate the sharing and annotation of videos have opened up new opportunities for the development of novel learning environments in the field of teacher education (e.g. Sommerhoff et al., 2022 ). In the past few years, models for using videos recorded by mobile eye-tracking technology in teacher education have also been developed (e.g. Pouta et al., 2021 ).

Simulations are widely used in medical education, and a meta-analysis found that deliberate practise with simulations is superior to traditional clinical training in medical education (McGaghie et al., 2011 ). The use of simulations in teacher education is also gradually increasing. In their review of the use of simulations in teacher education, Theelen et al. ( 2019 ) synthesised the findings of 15 studies that applied computer-based simulations in teacher education. Several studies have demonstrated (Ferdig & Pytash, 2020 ; Samuelsson et al., 2022 ) that classroom simulations increase students’ self-efficacy and confidence in their teaching abilities. Classroom simulations were also found to have a positive impact on the development of classroom management skills.

New technologies can also be used in research on teaching and learning. Digitalisation has provided more ways to collect data and understand the teaching–learning process with multiple data channels and modalities. Multiple layers of data can be collected from contextual interactions, such as high-quality video data, psychophysiological measures and computer logs. With learning analytics, for example, these data can be used to create teacher dashboards, thus fulfilling students’ need for teacher scaffolds (Knoop-van Campen & Molenaar, 2020 ). Recently, eye-tracking technology has also been used to analyse teachers’ and student teachers’ abilities to notice relevant events in classrooms (Gegenfurtner et al., 2020 ; Pouta et al., 2021 ). Eye-movement technology, which has been used to model expert performance in other professional fields (Gegenfurtner et al., 2017 ), could also lead to promising training methods for teacher education.
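As a rough illustration of the dashboard idea described above, the sketch below aggregates several hypothetical data channels (platform log events, gaze samples, a self-report item) into a single per-student indicator that a teacher dashboard could surface. The channel names, weights, and threshold are assumptions made for illustration, not values from any cited study:

```python
# Hedged sketch: combine multimodal channels into one dashboard indicator.
from dataclasses import dataclass

@dataclass
class StudentSignals:
    name: str
    log_events_per_min: float    # platform activity from computer logs
    on_task_gaze_ratio: float    # share of gaze samples on the task, 0..1
    self_reported_effort: float  # questionnaire item, scaled to 0..1

def engagement_score(s: StudentSignals) -> float:
    """Weighted blend of normalised channels; weights are arbitrary."""
    activity = min(s.log_events_per_min / 5.0, 1.0)  # cap at 5 events/min
    return round(0.4 * activity + 0.4 * s.on_task_gaze_ratio
                 + 0.2 * s.self_reported_effort, 2)

def needs_scaffold(s: StudentSignals, threshold: float = 0.5) -> bool:
    """Flag students the dashboard would surface for teacher attention."""
    return engagement_score(s) < threshold

students = [
    StudentSignals("A", 4.0, 0.8, 0.9),
    StudentSignals("B", 0.5, 0.3, 0.4),
]
for s in students:
    print(s.name, engagement_score(s), needs_scaffold(s))
```

In a real system each channel would need its own validated preprocessing; the sketch only shows the aggregation step that turns multiple data layers into a teacher-facing scaffold signal.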

Articles in this special issue

Three of the articles included in this special issue (Basilotta‑Gómez‑Pablos et al., 2022 ; Peciuliauskiene et al., 2022 ; Kulaksız & Toran, 2022 ) deal with digital competences and with interventions aimed at enhancing them.

Basilotta‑Gómez‑Pablos et al. (in this issue) synthesised 56 studies on higher education teachers’ digital competences. The authors used special software called SciMAT to analyse the content of the articles and to present thematic networks. Their review of the literature revealed that the topic is timely and that the number of relevant studies is increasing rapidly. The reviewed studies generally relied on teachers’ self-reports and self-evaluations of their abilities. Overall, the results indicated that the participants were aware of their insufficient knowledge and skills in the area of digital technology. According to the synthesis, many of the articles describe teachers’ experiences of various projects and activities aimed at improving their digital competences; however, many of these describe informal learning using internet tools and social networks. The authors conclude that their review clearly shows a gap in the evaluation of teachers’ digital competence in teaching and learning practice. Their recommendation is that more interventions and training programmes be created to support the development of teachers’ digital competence. They note the following:

  • The recent challenges in education caused by the pandemic raised teachers’ awareness of the gaps in their digital skills.
  • Despite the national and EU digital competence frameworks that have been developed, the development of digital skills remains unsystematic and lacks coherence in in-service and pre-service teacher education.
  • Further studies may bring more insights into more effective interventions in teaching practices with a wider application of digital technologies.

Peciuliauskiene et al.’s ( 2022 ) paper presents the results of their survey of two Lithuanian universities that offer teacher education programmes. Their questionnaire focused on information literacy (search and evaluation) and ICT self-efficacy. According to their results, both information literacy variables predicted teacher students’ ICT self-efficacy. Additionally, there was an indirect relationship between information evaluation and ICT self-efficacy. The findings of the study are discussed in terms of their theoretical and practical implications. The research indicates that information search ability does not depend on a person’s digital nativity, contrary to what is sometimes assumed when referring to the younger generation of pre-service teachers. As an ICT literacy component, information evaluation has become particularly pertinent during the COVID-19 situation and recent challenges related to distinguishing credible information from the vast amount of fake news and propaganda. It is also noted that optimal time and resources should be planned for the development of information search and evaluation abilities; however, more time should be allocated for the development of information search literacy, as it directly predicts pre-service teachers’ ICT self-efficacy. Based on the findings of this study, we identify the following trends and implications for further studies:

  • Teachers’ ICT self-efficacy contributes to the enhancement of the teaching and learning process; however, it should not be limited to specific ICT skills but should rather involve rethinking the organisation and principles of teaching. In other words, developing digital skills alone, without integrating them with specific pedagogical content knowledge and teaching strategies, would be less beneficial.
  • Further studies could focus on how the development of digital skills can be better aligned with the development of teachers’ pedagogical strategies and specific subject areas.
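The direct and indirect relationships reported in the survey above can be sketched with a toy mediation-style calculation. The data and variable names below are invented for illustration, and the indirect effect is computed as the simple product of two bivariate slopes, omitting the covariate adjustment that a full mediation model would include:

```python
# Toy sketch of a direct/indirect relationship:
# information evaluation -> information search -> ICT self-efficacy.
# Data are invented; this is not the study's analysis.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

evaluation = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]   # info-evaluation score
search     = [2.2, 2.9, 3.4, 3.9, 4.6, 4.8]   # info-search score
efficacy   = [2.5, 3.1, 3.3, 4.0, 4.4, 4.9]   # ICT self-efficacy

a = slope(evaluation, search)   # path: evaluation -> search
b = slope(search, efficacy)     # path: search -> efficacy
indirect = a * b                # product-of-paths indirect effect
print(round(indirect, 2))
```

A positive product of the two slopes is what "indirect relationship" means operationally here; real analyses would also test the effect's significance.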

The starting point of Kulaksız and Toran’s study ( 2022 ) was the observation that, despite pre-service teachers’ participation in courses on ICT integration, these teachers are still not confident about their competences to apply their knowledge in practice. In their study, Kulaksız and Toran used the so-called praxeological approach, which aims to produce beneficial knowledge and skills and to organise a democratic and participatory environment. The results indicate that the participants were prepared to transform their skills into practical pedagogical situations due to the personal development they experienced during and after the completion of the co-created course. As pre-service teachers could co-create the technology course, this allowed them to develop not only digital competences but also self-regulated learning skills, collaborative project development skills and peer mentoring skills, which contributed to building their sustainable motivation, an important component of teachers’ self-efficacy. This article highlights the following:

Teachers’ motivation is increased through participatory design in their professional development practices, which makes it possible to achieve a more holistic development of digital skills in combination with cognitive and non-cognitive competences.

Further studies on how co-teaching contributes to digital skills development and innovative teaching strategies would help identify more attractive models for teachers’ professional development.

The aforementioned three articles about digital competencies provide different perspectives on the issue, but they all emphasise the complexity of those competencies while also suggesting new approaches to deal with these challenges. In the next two articles, by contrast, technology was not the focus of the studies but a method used in developing learning environments for teacher students.

In their study, Martin et al. (in this issue) investigated whether a video-based multimedia application about classroom teaching could be used to enhance teacher students’ professional vision. A teacher’s professional vision is their ability to observe and interpret important events in the classroom and determine the most appropriate teaching activities related to these events. Teacher education faces a variety of challenges because teacher students are unable to readily translate the knowledge they learn through formal teacher education into situation-specific skills that can be applied in actual classroom settings. The aim of the study was to help students make this translation with the aid of a video-based simulation developed using the findings of multimedia research. The simulation presented the classroom videos as short segments and provided prompts aimed at facilitating the students’ self-explanations. In the intervention study, they applied two versions of the video-based simulation: one with features based on multimedia research and one without these features. During the training, the segmented simulation with the self-explanation prompt resulted in increased noticing of relevant events in the teaching–learning process. In the comparison of pre-test and post-test results, all groups participating in the video training developed considerably in their professional vision, but the video simulation with the two multimedia elements did not differ significantly from the video training without these elements. The authors concluded that further research on the optimal implementation of the simulation is needed. Major issues raised by this article were:

It confirms previous studies, which have shown the importance of using classroom videos in teacher education.

When new methods (e.g. the multimedia elements added to the videos) are applied in interventions, it is important to pay attention to qualitative changes in learning processes and not only to immediate learning gains.

The results also indicate that methods which have been effective in one context do not necessarily work in a new environment.

Nickl et al.’s study (in this issue) examined how video-based simulations can be used to enhance pre-service teachers’ assessment skills. The aim was to analyse individual learning processes in a simulated environment by taking into account learners’ cognitive and motivational-affective characteristics. Their study applied a person-oriented approach to analyse how these learner characteristics relate to students’ situated learning experiences and performance. In the latent profile analysis, three profiles were identified: one with high knowledge and average motivation-affect, one with high motivation-affect and average knowledge, and one with below-average knowledge and motivation-affect. Based on the results, it was confirmed that the motivated profile resulted in positive motivational experiences in the situation, while the knowledgeable profile resulted in relevant cognitive demands when working on the tasks. Situational experiences were also found to be related to learning outcomes when working with the simulation. In comparison to the other profiles, the knowledgeable profile demonstrated the most effective navigation and deep learning processes. The authors concluded that the identification of learner profiles is a promising approach that can uncover individual learner needs when working in technology-based learning environments. The lessons to learn from this study include:

Learners’ prior learning and personal characteristics can strongly mediate the outcomes of intervention programmes.

Person-oriented statistical analyses are promising approaches for focusing on sub-groups with unique profiles.

The challenge is to decide which individual characteristics are relevant in explaining varying effects of interventions.
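The person-oriented, latent-profile approach discussed above can be sketched in code. Latent profile analysis is closely related to fitting a Gaussian mixture model over standardized learner variables; the sketch below uses synthetic data and illustrative variable names (knowledge and motivation-affect scores) and is only an assumption-laden illustration of the idea, not a reproduction of Nickl et al.’s analysis.

```python
# Illustrative sketch only: a latent-profile-style analysis via a Gaussian
# mixture model on synthetic learner data. The variables, group means and
# the three-profile structure are assumptions made for this demo.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic standardized scores (knowledge, motivation-affect) for three
# hypothetical learner profiles: high knowledge / average motivation,
# high motivation / average knowledge, and below average on both.
profiles = [
    rng.normal(loc=[1.2, 0.0], scale=0.3, size=(60, 2)),    # knowledgeable
    rng.normal(loc=[0.0, 1.2], scale=0.3, size=(60, 2)),    # motivated
    rng.normal(loc=[-0.8, -0.8], scale=0.3, size=(60, 2)),  # below average
]
X = np.vstack(profiles)

# Choose the number of profiles by BIC, as is common in latent profile analysis.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)

# Refit with the selected number of profiles and assign each learner to one.
model = GaussianMixture(n_components=best_k, random_state=0).fit(X)
labels = model.predict(X)
print(best_k, np.bincount(labels))
```

Selecting the number of profiles by an information criterion such as BIC mirrors common practice in latent profile analysis; with real data, profile enumeration would also weigh entropy and the substantive interpretability of the profiles.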

Practices which stimulate change

This collection of papers, disclosing different aspects of the application of technology in today’s teacher education, clearly highlights that the development of digital competences should become an integral part of pre-service and in-service teacher education. In line with the holistic view on teacher competencies (Metsäpelto et al., 2022), this special issue suggests that digital competences appear as a strong component within cognitive and non-cognitive competences that contribute to high-quality teaching.

The results of the studies presented in this special issue strongly reflect recent studies (Falloon, 2020; Lin et al., 2022) demonstrating that the development of mere digital skills is not sufficient; we should instead structure teacher education to promote the development of digital teaching competences, including ICT attitudes, ICT skills, data literacy and deep pedagogical understanding of the opportunities and limitations of the use of technology in education. Digitally competent teachers are more capable of integrating technologies into their regular teaching practices while also creating more appropriate conditions for personalised learning (Schmid & Petko, 2019). This is important because large international evaluation studies (OECD, 2014, 2019, 2020) have shown that the inadequate use of technology can be harmful for student learning.

It should also be noted that the COVID-19 pandemic created additional challenges for teachers and contributed to changes in their teaching practices and digital habits (Blume, 2020). Reflecting school situations caused by COVID-19, numerous studies from the last two years have revealed a much wider scope of application of digital technologies in education and a need to create active interactions with learners (Greenhow et al., 2021). They have also identified a widening gap between more digitally advanced learners and less digitally competent teachers (Blume, 2020).

Likewise, simulations and other technological applications can be used to provide richer learning opportunities in teacher education. These new tools can help develop more effective models to connect theoretical content and practical skills. A big challenge for teacher education is to create opportunities for students to deliberately practise skills that are needed in classroom teaching while at the same time deepening student teachers’ theoretical understanding of teaching–learning processes.

Availability of data and materials

The paper is based on openly available data.

Agyei, D. D., & Voogt, J. (2011). Exploring the potential of the Will Skill Tool model in Ghana: Predicting prospective and practicing teachers’ use of technology. Computers & Education, 56 , 91–100. https://doi.org/10.1016/j.compedu.2010.08.017


Bakir, N. (2015). An exploration of contemporary realities of technology and teacher education: Lessons learned. Journal of Digital Learning in Teacher Education, 31 (3), 117–130. https://doi.org/10.1080/21532974.2015.1040930

Basilotta-Gómez-Pablos, V., Matarranz, M., Casado-Aranda, L. A., et al. (2022). Teachers’ digital competencies in higher education: A systematic literature review. International Journal of Educational Technology in Higher Education, 19, 8. https://doi.org/10.1186/s41239-021-00312-8

Blume, C. (2020). German teachers’ digital habitus and their pandemic pedagogy. Postdigital Science and Education, 2, 879–905. https://doi.org/10.1007/s42438-020-00174-9

Bronkhorst, L. H., Meijer, P. C., Koster, B., & Vermunt, J. D. (2011). Fostering meaning-oriented learning and deliberate practice in teacher education. Teaching and Teacher Education, 27, 1120–1130. https://doi.org/10.1016/j.tate.2011.05.008

Cheung, A. C., & Slavin, R. E. (2013). The effectiveness of education technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis. Educational Research Review, 9 , 88–113.

Eickelmann, B., Gerick, J., & Koop, C. (2016). ICT use in mathematics lessons and the mathematics achievement of secondary school students by international comparison: Which role do school level factors play? Education and Information Technologies, 22 , 1527–1551. https://doi.org/10.1007/s10639-016-9498-5

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–406. https://doi.org/10.1037/0033-295X.100.3.363

Erstad, O., Kjällander, S., & Järvelä, S. (2021). Facing the challenges of ‘digital competence’—A Nordic agenda on curriculum development for the 21st century. Nordic Journal of Digital Literacy, 16 (2), 77–87. https://doi.org/10.18261/issn.1891-943x-2021-02-02

Fadda, D., Pellegrini, M., Vivanet, G., & Callegher, C. Z. (2022). Effects of digital games on student motivation in mathematics: A meta-analysis in K-12. Journal of Computer Assisted Learning, 38 , 304–325.

Falloon, G. (2020). From digital literacy to digital competence: The teacher digital competency (TDC) framework. Educational Technology Research and Development, 68 , 2449–2472. https://doi.org/10.1007/s11423-020-09767-4

Ferdig, R. E., & Pytash, K. E. (2020). What teacher educators should have learned from 2020. AACE – Association for the Advancement of Computing in Education, 229–242.

Flecknoe, M. (2002). How can ICT help us to improve education? Innovations in Education and Teaching International, 39 (4), 271–279. https://doi.org/10.1080/13558000210161061

Gegenfurtner, A., Lehtinen, E., Jarodzka, H., & Säljö, R. (2017). Effects of eye movement modeling examples on adaptive expertise in medical image diagnosis. Computers & Education, 113, 212–225. https://doi.org/10.1016/j.compedu.2017.06.001

Gegenfurtner, A., Lewalter, D., Lehtinen, E., Schmidt, M., & Gruber, H. (2020). Teacher expertise and professional vision: Examining knowledge-based reasoning of pre-service teachers, in-service teachers, and school principals. Frontiers in Education . https://doi.org/10.3389/feduc.2020.00059

Gloria, M. (2015). A Meta-Analysis of the Relationship between E-Learning and Students’ Academic Achievement in Higher Education. Journal of Education and Practice, 6 (9), 6–9.


Greenhow, Ch., Cathy, L. C., & Willet, B. S. (2021). The educational response to Covid-19 across two countries: A critical examination of initial digital pedagogy adoption. Technology, Pedagogy and Education, 30 (1), 7–25. https://doi.org/10.1080/1475939X.2020.1866654

Gronseth, S., Brush, T., Ottenbreit-Leftwich, A., Strycker, J., Abaci, S., Easterling, W., Roman, T., Shin, S., & van Leusen, P. (2010). Equipping the next generation of teachers. Journal of Digital Learning in Teacher Education, 27 (1), 30–36. https://doi.org/10.1080/21532974.2010.10784654

Häkkinen, P., Järvelä, S., Mäkitalo-Siegl, K., Ahonen, A., Näykki, P., & Valtonen, T. (2017). Preparing teacher students for 21st century learning practices (PREP 21): A framework for enhancing collaborative problem solving and strategic learning skills. Teachers and Teaching: Theory and Practice, 23 (1), 25–41. https://doi.org/10.1080/13540602.2016.1203772

Huang, Y., Miller, K. F., Cortina, K., & Richter, D. (2020). Teachers’ professional vision in action: Comparing expert and novice teachers’ real-life eye movements in the classroom. Zeitschrift für Pädagogische Psychologie, 1–18.

Knoop-van Campen, C., & Molenaar, I. (2020). How teachers integrate dashboards into their feedback practices. Frontline Learning Research, 8(4), 37–51. https://doi.org/10.14786/flr.v8i4.641

Kulaksız, T., & Toran, M. (2022) Development of pre-service early childhood teachers’ technology integrations skills through a praxeological approach. International Journal of Educational Technology in Higher Education, 19 , 36. https://doi.org/10.1186/s41239-022-00344-8

Lin, R., Yang, J., Jiang, F., et al. (2022). Does teacher’s data literacy and digital teaching competence influence empowering students in the classroom? Evidence from China. Education and Information Technologies . https://doi.org/10.1007/s10639-022-11274-3

McGaghie, W. C., Issenberg, S. B., Cohen, E. R., Barsuk, J. H., & Wayne, D. B. (2011). Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Academic Medicine, 86 (6), 706–711. https://doi.org/10.1097/ACM.0b013e318217e119

Metsäpelto, R. L., Poikkeus, A. M., Heikkilä, M., et al. (2022). A multidimensional adapted process model of teaching. Educational Assessment, Evaluation and Accountability, 34 , 143–172. https://doi.org/10.1007/s11092-021-09373-9

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108 (6), 1017–1054.

Nagro, S. A., & Cornelius, K. E. (2013). Evaluating the evidence base of video analysis: A special education teacher development tool. Teacher Education and Special Education, 36 (4), 312–329. https://doi.org/10.1177/0888406413501090

OECD. (2014). TALIS 2013 results: An international perspective on teaching and learning. Paris: OECD Publishing. https://doi.org/10.1787/9789264196261-en

OECD. (2019). TALIS 2018 results (Volume I): Teachers and school leaders as lifelong learners. Paris: OECD Publishing. https://doi.org/10.1787/1d0bc92a-en

OECD. (2020). TALIS 2018 results (Volume II): Teachers and school leaders as valued professionals. Paris: OECD Publishing. https://doi.org/10.1787/19cf08df-en

Papanastasiou, E. C., Zembylas, M., & Vrasidas, C. (2003). Can computer use hurt science achievement? The USA results from PISA. Journal of Science Education and Technology, 12 (3), 325–332.

Peciuliauskiene, P., Tamoliune, G. & Trepule, E. (2022). Exploring the roles of information search and information evaluation literacy and pre-service teachers’ ICT self-efficacy in teaching. International Journal of Educational Technology in Higher Education, 19 , 33. https://doi.org/10.1186/s41239-022-00339-5

Polly, D., Mims, C., Shepherd, C. E., & Inan, F. (2010). Evidence of impact: Transforming teacher education with preparing tomorrow’s teachers to teach with technology (PT3) grants. Teaching and Teacher Education, 26 , 863–870.

Pouta, M., Palonen, T., & Lehtinen, E. (2021). Student teachers’ and experienced teachers’ professional vision of students’ rational number concept understanding. Educational Psychology Review., 33 (1), 109–128. https://doi.org/10.1007/s10648-020-09536-y

Roztocki, N., Soja, P., & Weistroffer, H. R. (2019). The role of information and communication technologies in socioeconomic development: Towards a multi-dimensional framework. Information Technology for Development, 25 (2), 171–183. https://doi.org/10.1080/02681102.2019.1596654

Samuelsson, M., Samuelsson, J., & Thorsten, A. (2022). Simulation training-a boost for pre-service teachers’ efficacy beliefs. Computers and Education Open, 3 , 100074. https://doi.org/10.1016/j.caeo.2022.100074

Schmid, R., & Petko, D. (2019). Does the use of educational technology in personalized learning environments correlate with self-reported digital skills and beliefs of secondary-school students? Computers & Education, 136 , 75–86. https://doi.org/10.1016/j.compedu.2019.03.006

Sommerhoff, D., Codreanu, E., Nickl, M., Ufer, S., & Seidel, T. (2022). Pre-service teachers’ learning of diagnostic skills in a video-based simulation: Effects of conceptual vs interconnecting prompts on judgment accuracy and the diagnostic process. Learning and Instruction . https://doi.org/10.1016/j.learninstruc.2022.101689

Theelen, H., Van den Beem, A., & den Brok, P. (2019). Classroom simulations in teacher education to support preservice teachers’ interpersonal competence: A systematic literature review. Computers & Education, 129 , 14–26. https://doi.org/10.1016/j.compedu.2018.10.015

Tondeur, J., Aesaert, K., Pynoo, B., van Braak, J., Fraeyman, N., & Erstad, O. (2017). Developing a validated instrument to measure pre-service teachers’ ICT competencies: Meeting the demands of the 21st century. British Journal of Educational Technology, 48(2), 462–472. https://doi.org/10.1111/bjet.12380

UNESCO. (2018). UNESCO ICT competency framework for teachers. Paris, France: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000265721

Valtonen, T., Sointu, E., Kukkonen, J., Häkkinen, P., Järvelä, S., Ahonen, A., Näykki, P., Pöysä-Tarhonen, J., & Mäkitalo-Siegl, K. (2016). Insights into Finnish first-year pre-service teachers’ twenty-first century skills. Education and Information Technologies, 22 , 2055–2069. https://doi.org/10.1007/s10639-016-9529-2

Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105 (2), 249–265. https://doi.org/10.1037/a0031311


Author information

Authors and Affiliations

Education Academy, Vytautas Magnus University, Kaunas, Lithuania

Lina Kaminskienė & Erno Lehtinen

Faculty of Education and Psychology, University of Oulu, Oulu, Finland

Sanna Järvelä

Department of Teacher Education, University of Turku, Turku, Finland

Erno Lehtinen


Contributions

This is a collaboratively developed and discussed paper. The contribution of each author is equal. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lina Kaminskienė.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Kaminskienė, L., Järvelä, S. & Lehtinen, E. How does technology challenge teacher education? Int J Educ Technol High Educ 19, 64 (2022). https://doi.org/10.1186/s41239-022-00375-1


Published: 27 December 2022

DOI: https://doi.org/10.1186/s41239-022-00375-1


  • Teachers’ digital competences
  • Teachers’ initial and continuous training
  • Technology integration
  • Digital innovations in education


The Daring English Teacher on Teachers Pay Teachers

Teaching the Research Paper Part 1: Introducing the Research Paper and Preparing Students for the Assignment


There are three things every teacher should do before taking their students to the computer lab to research information for their research papers: teach the difference between reliable and unreliable sources, check to make sure every student has a self-generated research question, and help prepare students with key phrases and words to search.

Whenever I begin teaching the research paper, I always share with my students the story of how I wrote my master’s thesis: a 50-page paper with 50 different sources.

I don’t do this to toot my own horn. I don’t do this to scare my students away from post-secondary education. I don’t do this to make the students feel like their research assignment is petty and small. I do this so that I can explain the process of research to them and so that they know I was once in their shoes.

So how exactly do you write a 50-page research paper that has 50 unique, credible sources? One source at a time.

Teaching the Research Paper: 3 Critical Steps to Take

Teaching the Research Paper: Find Credible Sources

When teaching the research paper to my secondary ELA students, I first teach them about research and credible sources. Before students can even begin looking for their sources, they have to know how to distinguish between reliable and unreliable sources. Being able to do so is the first step in finding a reliable source.


Once I feel my students have a firm understanding of the sources they will be looking at, we then dive into the research topic, and the students select their issues related to the main topic.

Teaching the Research Paper: Create Questions

One of the critical parts of teaching the research paper to students is having them come up with their self-generated research questions. To do this, I encourage students to work collaboratively and talk about their research topics.

Students can work in small groups to see what their peers would like to know about that matter.

Working in small groups first provides extra support for EL and struggling students. From there, students come up with their questions to answer. There is also a graphic organizer in my Research Paper Writing resource that is especially helpful during this process.

Teaching the Research Paper: Brainstorm Key Words

Once students have a self-generated question, it is time to get them to think about the keywords and phrases they will use in their search for sources. All too often, I see students typing long, wordy questions into a search engine. This only creates frustration for the students as well as the teacher.

Taking half a class to discuss keywords and phrases helps students tremendously, and it even speeds up the research process because students can find credible sources a lot easier. When teaching keywords and phrases to my students, I encourage them to type no more than four words into the search engine. I tell them that they must think of the most important words directly related to their topic.

To help students think about keywords and phrases they can use in the search engine, have them think about hashtags for their research topic. This fun, easy, and engaging strategy will get students thinking about what to research and what is explicitly related to their subject.

Teaching the Research Paper: A Research Paper Writing Instructional Unit

Take the stress out of teaching your students how to write a research paper with this complete research writing unit ! This comprehensive and complete research paper writing unit will help you teach your students how to write a research paper. Now available in print + digital!

This step-by-step resource teaches your students the eight steps of research writing, and it includes every single thing you could need for a successful research writing unit! Plus, it is updated for 9th edition MLA!

The editable teaching presentation (which comes in both PowerPoint and Google Slides®) is ideal for direct instruction and includes multiple days of guided instruction! The research writing presentation introduces students to the eight steps for completing a research project: selecting topics, generating questions, brainstorming, researching and gathering credible information, organizing and outlining, writing the first draft, peer editing, and finalizing the paper.


Read more about teaching the research paper

Read more about research in the classroom with Part 2, which covers research paper topics, and Part 3, which covers using Google Apps for research.

THANK YOU! I've had to sit through some painfully tedious COLLEGE classes because so many students aren't learning this in K12 that we're required to take classes on things like how to do a search. I greatly appreciate those of you who are teaching these important skills!

Is there a part 2?

Hi Deena, Thank you for reaching out. Yes. There is a part 2 and a part 3. I will link them to this post!




113 Great Research Paper Topics

One of the hardest parts of writing a research paper can be just finding a good topic to write about. Fortunately, we've done the hard work for you and have compiled a list of 113 interesting research paper topics. They've been organized into ten categories and cover a wide range of subjects so you can easily find the best topic for you.

In addition to the list of good research topics, we've included advice on what makes a good research paper topic and how you can use your topic to start writing a great paper.

What Makes a Good Research Paper Topic?

Not all research paper topics are created equal, and you want to make sure you choose a great topic before you start writing. Below are the three most important factors to consider to make sure you choose the best research paper topics.

#1: It's Something You're Interested In

A paper is always easier to write if you're interested in the topic, and you'll be more motivated to do in-depth research and write a paper that really covers the entire subject. Even if a certain research paper topic is getting a lot of buzz right now or other people seem interested in writing about it, don't feel tempted to make it your topic unless you genuinely have some sort of interest in it as well.

#2: There's Enough Information to Write a Paper

Even if you come up with the absolute best research paper topic and you're so excited to write about it, you won't be able to produce a good paper if there isn't enough research about the topic. This can happen for very specific or specialized topics, as well as topics that are too new to have enough research done on them at the moment. Easy research paper topics will always be topics with enough information to write a full-length paper.

Trying to write a research paper on a topic that doesn't have much research on it is incredibly hard, so before you decide on a topic, do a bit of preliminary searching and make sure you'll have all the information you need to write your paper.

#3: It Fits Your Teacher's Guidelines

Don't get so carried away looking at lists of research paper topics that you forget any requirements or restrictions your teacher may have put on research topic ideas. If you're writing a research paper on a health-related topic, deciding to write about the impact of rap on the music scene probably won't be allowed, but there may be some sort of leeway. For example, if you're really interested in current events but your teacher wants you to write a research paper on a history topic, you may be able to choose a topic that fits both categories, like exploring the relationship between the US and North Korea. No matter what, always get your research paper topic approved by your teacher first before you begin writing.

113 Good Research Paper Topics

Below are 113 good research topics to help you get you started on your paper. We've organized them into ten categories to make it easier to find the type of research paper topics you're looking for.

Arts/Culture

  • Discuss the main differences in art from the Italian Renaissance and the Northern Renaissance .
  • Analyze the impact a famous artist had on the world.
  • How is sexism portrayed in different types of media (music, film, video games, etc.)? Has the amount/type of sexism changed over the years?
  • How has the music of slaves brought over from Africa shaped modern American music?
  • How has rap music evolved in the past decade?
  • How has the portrayal of minorities in the media changed?


Current Events

  • What have been the impacts of China's one child policy?
  • How have the goals of feminists changed over the decades?
  • How has the Trump presidency changed international relations?
  • Analyze the history of the relationship between the United States and North Korea.
  • What factors contributed to the current decline in the rate of unemployment?
  • What have been the impacts of states which have increased their minimum wage?
  • How do US immigration laws compare to immigration laws of other countries?
  • How have the US's immigration laws changed in the past few years/decades?
  • How has the Black Lives Matter movement affected discussions and view about racism in the US?
  • What impact has the Affordable Care Act had on healthcare in the US?
  • What factors contributed to the UK deciding to leave the EU (Brexit)?
  • What factors contributed to China becoming an economic power?
  • Discuss the history of Bitcoin or other cryptocurrencies.

Education

  • Do students in schools that eliminate grades do better in college and their careers?
  • Do students from wealthier backgrounds score higher on standardized tests?
  • Do students who receive free meals at school get higher grades compared to when they weren't receiving a free meal?
  • Do students who attend charter schools score higher on standardized tests than students in public schools?
  • Do students learn better in same-sex classrooms?
  • How does giving each student access to an iPad or laptop affect their studies?
  • What are the benefits and drawbacks of the Montessori Method ?
  • Do children who attend preschool do better in school later on?
  • What was the impact of the No Child Left Behind act?
  • How does the US education system compare to education systems in other countries?
  • What impact does mandatory physical education classes have on students' health?
  • Which methods are most effective at reducing bullying in schools?
  • Do homeschoolers who attend college do as well as students who attended traditional schools?
  • Does offering tenure increase or decrease quality of teaching?
  • How does college debt affect future life choices of students?
  • Should graduate students be able to form unions?


  • What are different ways to lower gun-related deaths in the US?
  • How and why have divorce rates changed over time?
  • Is affirmative action still necessary in education and/or the workplace?
  • Should physician-assisted suicide be legal?
  • How has stem cell research impacted the medical field?
  • How can human trafficking be reduced in the United States/world?
  • Should people be able to donate organs in exchange for money?
  • Which types of juvenile punishment have proven most effective at preventing future crimes?
  • Has the increase in US airport security made passengers safer?
  • Compare the immigration policies of several countries: how are they similar, and how do they differ?
  • Several states have legalized recreational marijuana. What positive and negative impacts have they experienced as a result?
  • Do tariffs increase the number of domestic jobs?
  • Which prison reforms have proven most effective?
  • Should governments be able to censor certain information on the internet?
  • Which methods/programs have been most effective at reducing teen pregnancy?
  • What are the benefits and drawbacks of the Keto diet?
  • How effective are different exercise regimes for losing weight and maintaining weight loss?
  • How do the healthcare plans of various countries differ from each other?
  • What are the most effective ways to treat depression?
  • What are the pros and cons of genetically modified foods?
  • Which methods are most effective for improving memory?
  • What can be done to lower healthcare costs in the US?
  • What factors contributed to the current opioid crisis?
  • Analyze the history and impact of the HIV/AIDS epidemic.
  • Are low-carbohydrate or low-fat diets more effective for weight loss?
  • How much exercise should the average adult be getting each week?
  • Which methods are most effective to get parents to vaccinate their children?
  • What are the pros and cons of clean needle programs?
  • How does stress affect the body?
  • Discuss the history of the conflict between Israel and the Palestinians.
  • What were the causes and effects of the Salem Witch Trials?
  • Who was responsible for the Iran-Contra situation?
  • How have New Orleans and the government's response to natural disasters changed since Hurricane Katrina?
  • What events led to the fall of the Roman Empire?
  • What were the impacts of British rule in India?
  • Was the atomic bombing of Hiroshima and Nagasaki necessary?
  • What were the successes and failures of the women's suffrage movement in the United States?
  • What were the causes of the Civil War?
  • How did Abraham Lincoln's assassination impact the country and reconstruction after the Civil War?
  • Which factors contributed to the colonies winning the American Revolution?
  • What caused Hitler's rise to power?
  • Discuss how a specific invention impacted history.
  • What led to Cleopatra's fall as ruler of Egypt?
  • How has Japan changed and evolved over the centuries?
  • What were the causes of the Rwandan genocide?


  • Why did Martin Luther decide to split with the Catholic Church?
  • Analyze the history and impact of a well-known cult (Jonestown, Manson family, etc.)
  • How did the sexual abuse scandal impact how people view the Catholic Church?
  • How has the Catholic church's power changed over the past decades/centuries?
  • What are the causes behind the rise in atheism/agnosticism in the United States?
  • What influences in Siddhartha's life resulted in his becoming the Buddha?
  • How has media portrayal of Islam/Muslims changed since September 11th?

Science/Environment

  • How has the earth's climate changed in the past few decades?
  • How has the use and elimination of DDT affected bird populations in the US?
  • Analyze how the number and severity of natural disasters have increased in the past few decades.
  • Analyze deforestation rates in a certain area or globally over a period of time.
  • How have past oil spills changed regulations and cleanup methods?
  • How has the Flint water crisis changed water regulation safety?
  • What are the pros and cons of fracking?
  • What impact has the Paris Climate Agreement had so far?
  • What have NASA's biggest successes and failures been?
  • How can we improve access to clean water around the world?
  • Does ecotourism actually have a positive impact on the environment?
  • Should the US rely on nuclear energy more?
  • What can be done to save amphibian species currently at risk of extinction?
  • What impact has climate change had on coral reefs?
  • How are black holes created?
  • Are teens who spend more time on social media more likely to suffer anxiety and/or depression?
  • How will the loss of net neutrality affect internet users?
  • Analyze the history and progress of self-driving vehicles.
  • How has the use of drones changed surveillance and warfare methods?
  • Has social media made people more or less connected?
  • What progress has been made so far with artificial intelligence?
  • Do smartphones increase or decrease workplace productivity?
  • What are the most effective ways to use technology in the classroom?
  • How is Google search affecting our intelligence?
  • When is the best age for a child to begin owning a smartphone?
  • Has frequent texting reduced teen literacy rates?


How to Write a Great Research Paper

Even great research paper topics won't give you a great research paper if you don't hone your topic before and during the writing process. Follow these three tips to turn good research paper topics into great papers.

#1: Figure Out Your Thesis Early

Before you start writing a single word of your paper, you first need to know what your thesis will be. Your thesis is a statement that explains what you intend to prove/show in your paper. Every sentence in your research paper will relate back to your thesis, so you don't want to start writing without it!

As some examples, if you're writing a research paper on whether students learn better in same-sex classrooms, your thesis might be "Research has shown that elementary-age students in same-sex classrooms score higher on standardized tests and report feeling more comfortable in the classroom."

If you're writing a paper on the causes of the Civil War, your thesis might be "While the dispute between the North and South over slavery is the most well-known cause of the Civil War, other key causes include differences in the economies of the North and South, states' rights, and territorial expansion."

#2: Back Every Statement Up With Research

Remember, this is a research paper you're writing, so you'll need plenty of research to make your points. Every claim you make must be backed up with research and properly cited in the format your teacher requested. You're allowed to include opinions of your own, but they must also be supported by the research you present.

#3: Do Your Research Before You Begin Writing

You don't want to start writing your research paper and then learn that there isn't enough research to back up the points you're making, or, even worse, that the research contradicts the points you're trying to make!

Get most of your research on your good research topics done before you begin writing. Then use the research you've collected to create a rough outline of what your paper will cover and the key points you're going to make. This will help keep your paper clear and organized, and it'll ensure you have enough research to produce a strong paper.

What's Next?

Are you also learning about dynamic equilibrium in your science class? We break this sometimes tricky concept down so it's easy to understand in our complete guide to dynamic equilibrium.

Thinking about becoming a nurse practitioner? Nurse practitioners have one of the fastest growing careers in the country, and we have all the information you need to know about what to expect from nurse practitioner school .

Want to know the fastest and easiest ways to convert between Fahrenheit and Celsius? We've got you covered! Check out our guide to the best ways to convert Celsius to Fahrenheit (or vice versa).



Christine graduated from Michigan State University with degrees in Environmental Biology and Geography and received her Master's from Duke University. In high school she scored in the 99th percentile on the SAT and was named a National Merit Finalist. She has taught English and biology in several countries.



