
Conceptual analysis: The past, present and future of educational assessment: a transdisciplinary perspective


  • 1 Department of Applied Educational Sciences, Umeå Universitet, Umeå, Sweden
  • 2 Faculty of Education and Social Work, The University of Auckland, Auckland, New Zealand

To see the horizon of educational assessment, a history of how assessment has been used and analysed from the earliest records, through the 20th century, and into contemporary times is presented. Since long before paper-and-pencil assessments, the validity and integrity of candidate achievement have mattered, and assessments have relied on expert judgment. With the massification of education, formal group-administered testing was implemented for qualifications and selection, and statistical methods for scoring tests (classical test theory and item response theory) were developed. With personal computing, tests are delivered on-screen and through the web, with adaptive scoring based on student performance. Tests give an ever-increasing verisimilitude of real-world processes, and analysts are creating understanding of the processes test-takers use. Unfortunately, testing has neglected the complicating psychological, cultural, and contextual factors related to test-taker psychology. Computer testing neglects school curriculum and classroom contexts, where most education takes place and where insights are needed by both teachers and learners. The complex and dynamic processes of classrooms are extremely difficult to model mathematically and so remain largely outside the algorithms of psychometrics. This means that technology, data, and psychometrics have become increasingly isolated from curriculum, classrooms, teaching, and the psychology of instruction and learning. While there may be some integration of these disciplines within computer-based testing, this is still a long step from where classroom assessment happens. For a long time, the educational, social, and cultural psychology related to learning and instruction has been neglected in testing. We are now on the cusp of significant and substantial development in educational assessment as greater emphasis on the psychology of assessment is brought into the world of testing.
Herein lies the future for our field: integration of psychological theory and research with statistics and technology to understand processes that work for learning, identify how well students have learned, and what further teaching and learning is needed. The future requires greater efforts by psychometricians, testers, data analysts, and technologists to develop solutions that work in the pressure of living classrooms and that support valid and reliable assessment.

Introduction

In looking to the horizon of educational assessment, I would like to take a broad chronological view of where we have come from, where we are now, and what the horizons are. Educational assessment plays a vital role in the quality of student learning experiences, teacher instructional activities, and evaluation of curriculum, school quality, and system performance. Assessments act as a lever for both formative improvement of teaching and learning and summative accountability evaluation of teachers, schools, and administration. Because it is so powerful, a nuanced understanding of its history, current status, and future possibilities seems a useful exercise. In this overview I begin with a brief historical journey from assessments past through the last 3000 years and into the future that is already taking place in various locations and contexts.

Early records of the Chinese Imperial examination system can be found dating some 2,500 to 3,000 years ago ( China Civilisation Centre, 2007 ). That system was used to identify and reward talent wherever it could be found in the sprawling empire of China. Rather than rely solely on recommendations, bribery, or nepotism, it was designed to meritocratically locate students with high levels of literacy and memory competencies to operate the Emperor’s bureaucracy of command and control of a massive population. To achieve those goals, the system implemented standardised tasks (e.g., completing an essay according to Confucian principles) under invigilated circumstances to ensure integrity and comparability of performances ( Feng, 1995 ). The system had a graduated series of increasingly more complex and demanding tests until at the final examination no one could be awarded the highest grade because it was reserved for the Emperor alone. Part of the rationale for this extensive technology related to the consequences attached to selection; not only did successful candidates receive jobs with substantial economic benefits, but they were also recognised publicly on examination lists and by the right to wear specific colours or badges that signified the level of examination the candidate had passed. Unsurprisingly, given the immense prestige and possibility of social advancement through scholarship, there was an industry of preparing cheat materials (e.g., miniature books that replicated Confucian classics) and catching cheats (e.g., ranks of invigilators in high chairs overlooking desks at which candidates worked; Elman, 2013 ).

In contrast, as described by Encyclopaedia Britannica (2010a), European educational assessment grew out of the literary and oratorical remains of the Roman empire, such as schools of grammarians and rhetoricians. At the same time, schools were formed in the various cathedrals, monasteries (especially the Benedictine monasteries), and episcopal schools throughout Europe. Under Charlemagne, church priests were required to master Latin so that they could understand scripture correctly, leading to more advanced religious and academic training. As European society developed in the early Renaissance, schools were opened under the authority of a bishop, cathedral officer, or even secular guild to those deemed sufficiently competent to teach. Students and teachers at these schools were given certain protections and rights to ensure safe travel and free thinking. European universities from the 1100s adopted many of the clerical practices of reading important texts and of scholars evaluating the quality of learning by student performance in oral disputes, debates, and arguments relative to the judgement of higher-ranked experts. The subsequent centuries added written tasks and performances to the oral disputes as a way of judging the quality of learning outcomes. Nonetheless, assessment was based, as in the Chinese Imperial system, on the expertise and judgment of more senior scholars or bureaucrats.

These mechanisms were put in place to meet the needs of society or religion for literate and numerate bureaucrats, thinkers, and scholars. The resource of further education, or even basic education, was generally rationed and limited. Standardised assessments, even if that were only the protocol rather than the task or the scoring, were carried out to select candidates on a relatively meritocratic basis. Families and students engaged in these processes because educational success gave hope of escape from lives of poverty and hard labour. Consequently, assessment was fundamentally a summative judgement of the student’s abilities, schooling was preparation for the final examination, and assessments during the schooling process were but mimicry of a final assessment.

With the expansion of schooling and higher education through the 1800s, more efficient methods were sought to reduce the workload of hearing memorized recitations (Encyclopaedia Britannica, 2010b). This led to the imposition of leaving examinations as an entry requirement to learned professions (e.g., being a teacher), the civil service, and university studies. As more and more students attended universities in the 1800s, more efficient ways of collecting information were established, most especially the essay examination and the practice of answering in writing by oneself without aids. This tradition can still be seen in ordered rows of desks in examination halls as students complete written exam papers under scrutiny and time pressure.

The 20th century

By the early 1900s, however, it became apparent that the scoring of these important intellectual exercises was highly problematic. Markers did not agree with each other nor were they consistent within themselves across items or tasks and over time so that their scores varied for the same work. Consequently, early in the 20th century, multiple-choice question tests were developed so that there would be consistency in scoring and efficiency in administration ( Croft and Beard, 2022 ). It is also worth noting that considerable cost and time efficiencies were obtained through using multiple-choice test methods. This aspect led, throughout the century, to increasingly massive use of standardised machine scoreable tests for university entrance, graduate school selection, and even school evaluation. The mechanism of scoring items dichotomously (i.e., right or wrong), within classical test theory statistical modelling, resulted in easy and familiar numbers (e.g., mean, standard deviation, reliability, and standard error of measurement; Clauser, 2022 ).
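To make the classical test theory quantities named above concrete, here is a minimal sketch in Python. The function, the variable names, and the tiny 0/1 score matrix are my own invented illustration (reliability is estimated here with Cronbach's alpha, one common classical reliability coefficient), not material drawn from any real testing program.

```python
# Illustrative sketch only: classical test theory summary statistics for a
# dichotomously scored (right/wrong) test. All data are invented.

def ctt_summary(score_matrix):
    """score_matrix: list of examinee rows, each a list of 0/1 item scores."""
    n_items = len(score_matrix[0])
    totals = [sum(row) for row in score_matrix]
    n = len(totals)
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / (n - 1)
    sd = variance ** 0.5
    # Cronbach's alpha: reliability from item variances vs. total-score variance
    item_vars = []
    for i in range(n_items):
        col = [row[i] for row in score_matrix]
        m = sum(col) / n
        item_vars.append(sum((x - m) ** 2 for x in col) / (n - 1))
    alpha = (n_items / (n_items - 1)) * (1 - sum(item_vars) / variance)
    sem = sd * (1 - alpha) ** 0.5  # standard error of measurement
    return {"mean": mean, "sd": sd, "alpha": alpha, "sem": sem}
```

The standard error of measurement shrinks as reliability rises, which is one reason test developers prize high reliability coefficients.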

As the 20th century progressed, the concepts of validity grew increasingly expansive, and the methods of validation became increasingly complex and multi-faceted to ensure validity of scores and their interpretation (Zumbo and Chan, 2014). These included scale reliability, factor analysis, item response theory, equating, norming, and standard setting, among others (Kline, 2020). It is worth noting here that statistical methods for test score analysis grew out of the early stages of the discipline of psychology. As psychometric methods became increasingly complex, the world of educational testing began to look much more like the world of statistics. Indeed, Cronbach (1954) noted that the world of psychometrics (i.e., statistical measurement of psychological phenomena) was losing contact with the world of psychology, which was the most likely user of psychometric methods and research. Interestingly, the world of education makes extensive use of assessment, but few educators are adept at the statistical methods necessary to evaluate their own tests, let alone those from central authorities. Indeed, few teachers are taught statistical test analysis techniques, even fewer understand them, and almost none make use of them.

Of course, assessment is not just a scored task or set of questions. It is legitimately an attempt to operationalize a sample of a construct or content or curriculum domain. The challenge for assessment lies in the conundrum that the material that is easy to test and score tends to be the material that is the least demanding or valuable in any domain. Learning objectives for K-12 schooling, let alone higher education, expect students to go beyond remembering, recalling, regurgitating lists of terminology, facts, or pieces of data. While recall of data pieces is necessary for deep processing, recall of those details is not sufficient. Students need to exhibit complex thinking, problem-solving, creativity, and analysis and synthesis. Assessment of such skills is extremely complex and difficult to achieve.

However, with the need to demonstrate that teachers are effective and that schools are achieving society's goals and purposes, it becomes easy to reduce the valued objectives of society to that which can be incorporated efficiently into a standardised test. Hence, in many societies the high-stakes test becomes the curriculum. If we could be sure that what was on the test is what society really wanted, this would not be such a bad thing; it is what Resnick and Resnick (1989) called measurement-driven reform. However, research over extensive periods since the middle of the 20th century has shown that much of what we test does not add value to the learning of students (Nichols and Harris, 2016).

An important development in the middle of the 20th century was Scriven’s (1967) work on developing the principles and philosophy of evaluation. A powerful aspect to evaluation that he identified was the distinction between formative evaluation taking place early enough in a process to make differences to the end points of the process and summative evaluation which determined the amount and quality or merit of what the process produced. The idea of formative evaluation was quickly adapted into education as a way of describing assessments that teachers used within classrooms to identify which children needed to be taught what material next ( Bloom et al., 1971 ). This contrasted nicely with high-stakes end-of-unit, end-of-course, or end-of-year formal examinations that summatively judged the quality of student achievement and learning. While assessment as psychometrically validated tests and examinations historically focused on the summative experience, Scriven’s formative evaluation led to using assessment processes early in the educational course of events to inform learners as to what they needed to learn and instructors as to what they needed to teach.

Nonetheless, since the late 1980s (largely thanks to Sadler, 1989), the distinction between summative and formative transmogrified from one of timing to one of type. Formative assessments began to be only those which were not formal tests but were rather informal interactions in classrooms. This perspective was extended by the UK Assessment Reform Group (2002), which promulgated basic principles of formative assessment around the world. Those classroom assessment practices focused much more on what could be seen as classroom teaching practices (Brown, 2013, 2019, 2020a). Instead of testing, teachers interacted with students on-the-fly, in the moment of the classroom, through questions and feedback that aimed to help students move towards the intended learning outcomes established at the beginning of lessons or courses. Thus, assessment for learning has become a child-friendly approach (Stobart, 2006) to involving learners in their learning and developing rich, meaningful outcomes without the onerous pressure of testing. Much of the power of this approach was that it came as an alternative to the national curriculum of England and Wales that incorporated high-stakes standardised assessment tasks of children at ages 7, 9, 11, and 14 (i.e., Key Stages 1 to 4; Wherrett, 2004).

In line with increasing access to schooling worldwide throughout the 20th century, there has been concern that success on high-consequence, summative tests simply reinforces pre-existing social status and hierarchy (Bourdieu, 1974). This position argues that tests are not neutral but rather tools of elitism (Gipps, 1994). Unfortunately, when assessments have significant consequences, much higher proportions of disadvantaged students (e.g., minority students, new speakers of the language-medium of assessment, special needs students, those with reading difficulties, etc.) do not experience such benefits (Brown, 2008). This was a factor in the development of high-quality formative assessment to accelerate the learning progression of disadvantaged students. Nonetheless, differences in group outcomes do not always mean tests are the problem; group score differences can point out that there is sociocultural bias in the provision of educational resources in the school system (Stobart, 2005). This would be a rationale for system monitoring assessments, such as Hong Kong's Territory-wide System Assessment, 1 the United States' National Assessment of Educational Progress, 2 or Australia's National Assessment Program Literacy and Numeracy. 3 The challenge is how to monitor a system without blaming those who have been let down by it.

Key Stage tests were put in place, not only to evaluate student learning, but also to assure the public that teachers and schools were achieving important goals of education. This use of assessment put focus on accountability, not for the student, but for the school and teacher ( Nichols and Harris, 2016 ). The decision to use tests of student learning to evaluate schools and teachers was mimicked, especially in the United States, in various state accountability tests, the No Child Left Behind legislation, and even such innovative programs of assessment as Race to the Top and PARCC. It should be noted that the use of standardised tests to evaluate teachers and schools is truly a global phenomenon, not restricted to the UK and the USA ( Lingard and Lewis, 2016 ). In this context, testing became a summative evaluation of teachers and school leaders to demonstrate school effectiveness and meet accountability requirements.

The current situation is that assessment is perceived quite differently by experts in different disciplines. Psychometricians tend to define assessment in terms of statistical modelling of test scores. Psychologists use assessments for diagnostic description of client strengths or needs. Within schooling, leaders tend to perceive assessment as jurisdiction or state-mandated school accountability testing, while teachers focus on assessment as interactive, on-the-fly experiences with their students, and parents ( Buckendahl, 2016 ; Harris and Brown, 2016 ) understand assessment as test scores and grades. The world of psychology has become separated from the worlds of classroom teaching, curriculum, psychometrics and statistics, and assessment technologies.

This brief history, bringing us into the early 21st century, shows that educational assessment is informed by multiple disciplines which often fail to talk with or even to each other. Statistical analysis of testing has become separated from psychology and education, psychology is separated from curriculum, teaching is separated from testing, and testing is separated from learning. Hence, we enter the present with many important facets that inform effective use of educational assessment siloed from one another.

Now and next

Currently, the world of educational statistics has become engrossed in the large-scale data available through online testing and online learning behaviours. The world of computational psychometrics seeks to move educational testing statistics into the dynamic analysis of big data with machine learning and artificial intelligence algorithms, potentially creating a black box of sophisticated statistical models (e.g., neural networks) which learners, teachers, administrators, and citizens cannot understand (von Davier et al., 2019). The introduction of computing technologies means that automation of item generation (Gierl and Lai, 2016) and scoring of performances (Shin et al., 2021) is possible, along with customisation of test content according to test-taker performance (Linden and Glas, 2000). The Covid-19 pandemic has rapidly made online and distance testing a commonplace practice, with concerns raised about how technology is used to assure the integrity of student performance (Dawson, 2021).
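The customisation of test content according to test-taker performance mentioned above can be sketched in a deliberately toy form. Everything here is a simplification of my own for illustration: the function names, the closest-difficulty selection rule, and the halving step-size update. Operational adaptive tests instead use item response theory information functions and likelihood-based ability estimation.

```python
# Toy sketch of adaptive item selection (NOT an operational CAT algorithm):
# after each response, pick the unused item whose difficulty is closest to
# the current ability estimate, then nudge the estimate up or down.

def run_adaptive(item_difficulties, answer_fn, n_steps=5):
    """item_difficulties: candidate item difficulties on an ability scale.
    answer_fn(b) -> bool: simulates whether the examinee answers an item
    of difficulty b correctly. Returns (ability estimate, administered log)."""
    ability = 0.0
    step = 1.0
    remaining = list(item_difficulties)
    administered = []
    for _ in range(min(n_steps, len(remaining))):
        item = min(remaining, key=lambda b: abs(b - ability))
        remaining.remove(item)
        correct = answer_fn(item)
        ability += step if correct else -step
        step /= 2  # shrink the adjustment as evidence accumulates
        administered.append((item, correct))
    return ability, administered
```

With items spread across the difficulty scale and a simulated examinee who answers only items below some difficulty threshold correctly, the estimate homes in on that threshold, which is the essential logic of adaptivity.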

The ecology of the classroom is not the same as that of a computerised test. This is especially notable when the consequence of a test (regardless of medium) has little relevance to a student (Wise and Smith, 2016). Performance on international large-scale assessments (e.g., PISA, TIMSS) may matter to government officials (Teltemann and Klieme, 2016), but these tests have little value for individual learners. Nonetheless, governmental responses to PISA or TIMSS results may create policies and initiatives that have a trickle-down effect on schools and students (Zumbo and Forer, 2011). Consequently, depending on the educational and cultural environment, test-taking motivation on tests that have consequences for the state can be similar to that on a test with personal consequences in East Asia (Zhao et al., 2020), but much lower in a western democracy (Zhao et al., 2022). Hence, without surety that learners are giving full effort in any educational test (Thorndike, 1924), the information generated by psychometric analysis is likely to be invalid. Fortunately, under computer testing conditions, it is now possible to monitor reduced or wavering effort during an actual test event and provide support to such a student through a supervising proctor (Wise, 2019), though this feature is not widely prevalent.
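The effort monitoring described by Wise rests on response-time analysis: responses made faster than some item-specific threshold are unlikely to reflect genuine solution behaviour. The sketch below is a minimal illustration of that logic only; the function name, the threshold values, and the summary statistic (proportion of items showing solution behaviour) are my own simplifications, not Wise's published procedure.

```python
# Hedged sketch of response-time effort flagging: responses faster than a
# per-item threshold are treated as likely rapid guesses rather than
# genuine solution behaviour. Thresholds here are assumed, for illustration.

def response_time_effort(times, thresholds):
    """times: seconds spent per item by one examinee.
    thresholds: per-item minimum times for plausible solution behaviour.
    Returns the proportion of items answered with solution behaviour."""
    flags = [t >= th for t, th in zip(times, thresholds)]
    return sum(flags) / len(flags)
```

A proctoring system could watch this proportion fall during a live test session and trigger an intervention, which is the kind of support the paragraph above describes.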

Online or remote teaching, learning, and assessment have become a reality for many teachers and students, especially in light of our educational responses to the Covid-19 pandemic. Clearly, some families appreciate this because their children can progress rapidly, unencumbered by the teacher or classmates. For such families, continuing with digital schooling would be seen as a positive future. However, reliance on a computer interface as the sole means of assessment or teaching may dehumanise the very human experience of learning and teaching. As Asimov (1954) described in his short story of a future world in which children are taught individually by machines, Margie imagined what it must have been like to go to school with other children:

Margie …was thinking about the old schools they had when her grandfather's grandfather was a little boy. All the kids from the whole neighborhood came, laughing and shouting in the schoolyard, sitting together in the schoolroom, going home together at the end of the day. They learned the same things so they could help one another on the homework and talk about it.
And the teachers were people...
The mechanical teacher was flashing on the screen: "When we add the fractions ½ and ¼ -"
Margie was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had.

As Brown (2020b) has argued, the option of a de-schooled society through computer-based teaching, learning, and assessment is deeply unattractive on the grounds that it is likely to be socially unjust. The human experience of schooling matters to the development of humans. We learn through instruction (Bloom, 1976), culturally located experiences (Cole et al., 1971), inter-personal interaction with peers and adults (Vygotsky, 1978; Rogoff, 1991), and biogenetic factors (Inhelder and Piaget, 1958). Schooling gives us access to environments in which these multiple processes contribute to the kinds of citizens we want. Hence, we need confidence in the power of shared schooling to do more than increase the speed by which children acquire knowledge and learning; it helps us be more human.

This dilemma echoes the tension between in vitro and in vivo biological research. Within the controlled environment of a test tube (in vitro), organisms do not necessarily behave the same way as they do when released into the complexity of human biology (Autoimmunity Research Foundation, 2012). This analogy has been applied to educational assessment (Zumbo, 2015), indicating that how students perform in a computer-mediated test may not have validity for how students perform in classroom interactions or in-person environments.

The complexity of human psychology is captured in Hattie's (2004) ROPE model, which posits that the various aspects of human motivation, belief, strategy, and values interact as threads spun into a rope. This means it is hard to analytically separate the various components and identify aspects that individually explain learning outcomes. Indeed, Marsh et al. (2006) showed that, of the many self-concept and control beliefs used to predict performance on the PISA tests, almost all variables have relations to achievement less than r = 0.35. Instead, interactions among motivation, beliefs about learning, intelligence, assessment, the self, and attitudes with and toward others, subjects, and behaviours all matter to performance. Aspects that create growth-oriented pathways (Boekaerts and Niemivirta, 2000) and strategies include, inter alia, mastery goals (Deci and Ryan, 2000), deep learning beliefs (Biggs et al., 2001), malleable intelligence beliefs (Duckworth et al., 2011), improvement-oriented beliefs about assessment (Brown, 2011), internal, controllable attributions (Weiner, 1985), effort (Wise and DeMars, 2005), avoiding dishonesty (Murdock et al., 2016), trusting one's peers (Panadero, 2016), and realism in evaluating one's own work (Brown and Harris, 2014). All these adaptive aspects of learning stand in contrast to deactivating and maladaptive beliefs, strategies, and attitudes that serve to protect the ego and undermine learning. What this tells us is that psychological research matters to understanding the results of assessment and that no single psychological construct is sufficient to explain very much of the variance in student achievement. However, it seems we are as yet unable to identify which specific processes matter most to better performance for all students across the ability spectrum, given that almost all the constructs reported in educational psychology seem to make a positive contribution to better performance. Here is the challenge for educational psychology within an assessment setting: which constructs are most important and effectual before, during, and after any assessment process (McMillan, 2016), and how should they be operationalised?

A current enthusiasm is to use 'big data' from computer-based assessments to examine in more detail how students carry out the process of responding to tasks. Many large-scale computer-based testing programs collect, utilize, and report on test-taker engagement as part of their process data collection (e.g., the United States National Assessment of Educational Progress 4 ). These test systems provide data about what options were clicked on, in what order, what pages were viewed, and the timings of these actions. Several challenges to using big data in educational assessment exist. First, computerised assessments need to capture the processes and products we care about. That means we need a clear theoretical model of the underlying cognitive mechanisms or processes that generate the process data itself (Zumbo et al., in press). Second, we need to be reminded that data do not explain themselves; theory and insight about process are needed to understand data (Pearl and Mackenzie, 2018). Examination of log files can give some insight into effective vs. ineffective strategies, once the data are analysed using theory to create a model of how a problem should be done (Greiff et al., 2015). Access to data logs that show effort and persistence on a difficult task can reveal that, despite failure to successfully resolve a problem, such persistence is related to overall performance (Lundgren and Eklöf, 2020). But data by themselves will not tell us how and why students are successful, nor what instruction might need to do to encourage students to use the scientific method of manipulating one variable at a time or to avoid giving up quickly.

Psychometric analyses of assessments can only statistically model item difficulty, item discrimination, and item chance parameters to estimate person ability ( Embretson and Reise, 2000 ). None of the other psychological features of how learners relate to themselves and their environment are included in score estimation. In real classroom contexts, teachers make their best efforts to account for individual motivation, affect, and cognition to provide appropriate instruction, feedback, support, and questioning. However, the nature of these factors varies across time (cohorts), locations (cultures and societies), policy priorities for schooling and assessment, and family values ( Brown and Harris, 2009 ). This means that what constitutes a useful assessment to inform instruction in a classroom context (i.e., identify to the teacher who needs to be taught what next) needs to constantly evolve and be incredibly sensitive to individual and contextual factors. This is difficult if we keep psychology, curriculum, psychometrics, and technology in separate silos. It seems highly desirable that these different disciplines interact, but it is not guaranteed that the technology for psychometric testing developments will cross-pollinate with classroom contexts where teachers have to relate to and monitor student learning across all important curricular domains.
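The item difficulty, discrimination, and chance parameters just mentioned are those of the three-parameter logistic (3PL) model of item response theory. A minimal sketch of the 3PL response probability follows; the function name and the example values in the usage note are illustrative only.

```python
import math

def p_correct(theta, a, b, c):
    """3PL probability that a person of ability theta answers an item correctly:
    a = discrimination, b = difficulty, c = chance (guessing) parameter.
    The probability rises from c towards 1 as theta exceeds b."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))
```

For example, with a = 1, b = 0, and c = 0.2, a person of average ability (theta = 0) has probability 0.6 of success. Note that only theta enters as a person parameter: motivation, affect, and context appear nowhere in the model, which is precisely the limitation described above.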

It is common to treat what happens in the minds and emotions of students when they are assessed as a kind of 'black box', implying that the processes are opaque or unknowable. This is an approach I have taken previously in examining what students do when asked to self-assess (Yan and Brown, 2017). However, the meaning of a black box is quite different in engineering. In aeronautics, the essential constructs related to flight (e.g., engine power, aileron settings, pitch and yaw positions, etc.) are known very deeply, otherwise flight would not happen. The black box in an airplane records the values of those important variables, and the only thing unknown (i.e., black) is what the values were at the point of interest. If we are to continue to use this metaphor as a way of understanding what happens when students are assessed or assess, then we need to agree on what the essential constructs are that underlie learning and achievement. Our current situation seems to be satisfied with the notion that everything is correlated and everything matters. It may be that data science will help us sort through the chaff for the wheat, provided we design and implement sensors appropriate to the constructs we consider hypothetically most important. It may be that measuring the timing of mouse clicks and eye tracking does connect to important underlying mechanisms, but at this stage data science in testing seems largely a case of crunching the 'easy to get' numbers and hoping that the data mean something.

To address this concern, we need to develop, for education's sake, assessments that have strong alignment with curricular ambitions and values and which have applicability to classroom contexts and processes (Bennett, 2018). This will mean technology that supports what humans must do in schooling rather than replacing them with teaching/testing machines. Fortunately, some examples of assessment technology for learning do exist. One supportive technology is PeerWise (Denny et al., 2008; Hancock et al., 2018), in which students create course-related multiple-choice questions and use them as a self-testing learning strategy. A school-based technology is the e-asTTle computer assessment system, which produces a suite of diagnostic reports to support teachers' planning and teaching in response to what the system indicates students need to be taught (Hattie and Brown, 2008; Brown and Hattie, 2012; Brown et al., 2018). What these technologies do is support rather than supplant the work that teachers and learners need to do to know what they need to study or teach and to monitor their progress. Most importantly, they are well-connected to what students must learn and what teachers are teaching. Other detailed work uses organised learning models or dynamic learning maps to mark out routes for learners and teachers, using cognitive and curriculum insights with psychometric tools for measuring status and progress (Kingston et al., 2022). The work done by Wise (2019) shows that it is possible in a computer-assisted testing environment to monitor student effort based on speed of responding and give prompts that support greater effort and less speed.

Assessment needs to exploit more deeply the insights educational psychology has given us into human behavior, attitudes, inter- and intra-personal relations, emotions, and so on. This was called for some 20 years ago ( National Research Council, 2001 ) but the underlying disciplines that inform this integration seem to have grown away from each other. Nonetheless, the examples given above suggest that the gaps can be closed. But assessments still do not seem to consider and respond to these psychological determinants of achievement. Teachers have the capability of integrating curriculum, testing, psychology, and data at a superficial level but with some considerable margin of error ( Meissel et al., 2017 ). To overcome their own error, teachers need technologies that support them in making useful and accurate interpretations of what students need to be taught next that work with them in the classroom. As Bennett (2018) pointed out more technology will happen, but perhaps not more tests on computers. This is the assessment that will help teachers rather than replace them and give us hope for a better future.

Author contributions

GB wrote this manuscript and is solely responsible for its content.

Funding

Support for the publication of this paper was received from the Publishing and Scholarly Services of the Umeå University Library.

Acknowledgments

A previous version of this paper was presented as a keynote address to the 2019 biennial meeting of the European Association for Research in Learning and Instruction, with the title Products, Processes, Psychology, and Technology: Quo Vadis Educational Assessment?

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. ^ https://www.hkeaa.edu.hk/en/sa_tsa/tsa/

2. ^ https://www.nationsreportcard.gov/

3. ^ https://nap.edu.au/

4. ^ https://www.nationsreportcard.gov/process_data/

Asimov, I. (1954). The fun they had. Fantasy Sci. Fiction 6, 125–127.

Assessment Reform Group (2002). Assessment for Learning: 10 Principles. Research-based Principles to Guide Classroom Practice . Cambridge: Assessment Reform Group.

Autoimmunity Research Foundation. (2012). Differences between in vitro, in vivo, and in silico studies [online]. The Marshall Protocol Knowledge Base. Available at: http://mpkb.org/home/patients/assessing_literature/in_vitro_studies (Accessed November 12, 2015).

Bennett, R. E. (2018). Educational assessment: what to watch in a rapidly changing world. Educ. Meas. Issues Pract. 37, 7–15. doi: 10.1111/emip.12231

Biggs, J., Kember, D., and Leung, D. Y. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. Br. J. Educ. Psychol. 71, 133–149. doi: 10.1348/000709901158433

Bloom, B. S. (1976). Human Characteristics and School Learning . New York: McGraw-Hill.

Bloom, B., Hastings, J., and Madaus, G. (1971). Handbook on Formative and Summative Evaluation of Student Learning . New York: McGraw-Hill.

Boekaerts, M., and Niemivirta, M. (2000). “Self-regulated learning: finding a balance between learning goals and ego-protective goals,” in Handbook of Self-regulation . eds. M. Boekaerts, P. R. Pintrich, and M. Zeidner (San Diego, CA: Academic Press).

Bourdieu, P. (1974). “The school as a conservative force: scholastic and cultural inequalities,” in Contemporary Research in the Sociology of Education . ed. J. Eggleston (London: Methuen).

Brown, G. T. L. (2008). Conceptions of Assessment: Understanding what Assessment Means to Teachers and Students . New York: Nova Science Publishers.

Brown, G. T. L. (2011). Self-regulation of assessment beliefs and attitudes: a review of the Students' conceptions of assessment inventory. Educ. Psychol. 31, 731–748. doi: 10.1080/01443410.2011.599836

Brown, G. T. L. (2013). “Assessing assessment for learning: reconsidering the policy and practice,” in Making a Difference in Education and Social Policy . eds. M. East and S. May (Auckland, NZ: Pearson).

Brown, G. T. L. (2019). Is assessment for learning really assessment? Front. Educ. 4:64. doi: 10.3389/feduc.2019.00064

Brown, G. T. L. (2020a). Responding to assessment for learning: a pedagogical method, not assessment. N. Z. Annu. Rev. Educ. 26, 18–28. doi: 10.26686/nzaroe.v26.6854

Brown, G. T. L. (2020b). Schooling beyond COVID-19: an unevenly distributed future. Front. Educ. 5:82. doi: 10.3389/feduc.2020.00082

Brown, G. T. L., and Harris, L. R. (2009). Unintended consequences of using tests to improve learning: how improvement-oriented resources heighten conceptions of assessment as school accountability. J. MultiDisciplinary Eval. 6, 68–91.

Brown, G. T. L., and Harris, L. R. (2014). The future of self-assessment in classroom practice: reframing self-assessment as a core competency. Frontline Learn. Res. 3, 22–30. doi: 10.14786/flr.v2i1.24

Brown, G. T. L., O'Leary, T. M., and Hattie, J. A. C. (2018). “Effective reporting for formative assessment: the asTTle case example,” in Score Reporting: Research and Applications . ed. D. Zapata-Rivera (New York: Routledge).

Brown, G. T., and Hattie, J. (2012). “The benefits of regular standardized assessment in childhood education: guiding improved instruction and learning,” in Contemporary Educational Debates in Childhood Education and Development . eds. S. Suggate and E. Reese (New York: Routledge).

Buckendahl, C. W. (2016). “Public perceptions about assessment in education,” in Handbook of Human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

China Civilisation Centre (2007). China: Five Thousand Years of History and Civilization . Hong Kong: City University of Hong Kong Press.

Clauser, B. E. (2022). “A history of classical test theory,” in The History of Educational Measurement: Key Advancements in Theory, Policy, and Practice . eds. B. E. Clauser and M. B. Bunch (New York: Routledge).

Cole, M., Gay, J., Glick, J., and Sharp, D. (1971). The Cultural Context of Learning and Thinking: An Exploration in Experimental Anthropology . New York: Basic Books.

Croft, M., and Beard, J. J. (2022). “Development and evolution of the SAT and ACT,” in The History of Educational Measurement: Key Advancements in Theory, Policy, and Practice . eds. B. E. Clauser and M. B. Bunch (New York: Routledge).

Cronbach, L. J. (1954). Report on a psychometric mission to Clinicia. Psychometrika 19, 263–270. doi: 10.1007/BF02289226

Dawson, P. (2021). Defending Assessment Security in a Digital World: Preventing e-cheating and Supporting Academic Integrity in Higher Education . London: Routledge.

Deci, E. L., and Ryan, R. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 55, 68–78.

Denny, P., Hamer, J., Luxton-Reilly, A., and Purchase, H. (2008). PeerWise: students sharing their multiple choice questions. ICER '08: Proceedings of the Fourth International Workshop on Computing Education Research; September 6–7, 2008; Sydney, Australia, 51–58.

Duckworth, A. L., Quinn, P. D., and Tsukayama, E. (2011). What no child left behind leaves behind: the roles of IQ and self-control in predicting standardized achievement test scores and report card grades. J. Educ. Psychol. 104, 439–451. doi: 10.1037/a0026280

Elman, B. A. (2013). Civil Examinations and Meritocracy in Late Imperial China . Cambridge: Harvard University Press.

Embretson, S. E., and Reise, S. P. (2000). Item Response Theory for Psychologists . Mahwah: LEA.

Encyclopedia Britannica (2010a). Europe in the middle ages: the background of early Christian education. Encyclopedia Britannica.

Encyclopedia Britannica (2010b). Western education in the 19th century. Encyclopedia Britannica.

Feng, Y. (1995). From the imperial examination to the national college entrance examination: the dynamics of political centralism in China's educational enterprise. J. Contemp. China 4, 28–56. doi: 10.1080/10670569508724213

Gierl, M. J., and Lai, H. (2016). A process for reviewing and evaluating generated test items. Educ. Meas. Issues Pract. 35, 6–20. doi: 10.1111/emip.12129

Gipps, C. V. (1994). Beyond Testing: Towards a Theory of Educational Assessment . London: Falmer Press.

Greiff, S., Wüstenberg, S., and Avvisati, F. (2015). Computer-generated log-file analyses as a window into students' minds? A showcase study based on the PISA 2012 assessment of problem solving. Comput. Educ. 91, 92–105. doi: 10.1016/j.compedu.2015.10.018

Hancock, D., Hare, N., Denny, P., and Denyer, G. (2018). Improving large class performance and engagement through student-generated question banks. Biochem. Mol. Biol. Educ. 46, 306–317. doi: 10.1002/bmb.21119

Harris, L. R., and Brown, G. T. L. (2016). “Assessment and parents,” in Encyclopedia of Educational Philosophy and Theory . ed. M. A. Peters (Singapore: Springer).

Hattie, J. (2004). Models of self-concept that are neither top-down or bottom-up: the rope model of self-concept. 3rd International Biennial Self Research Conference; July 2004; Berlin, Germany.

Hattie, J. A., and Brown, G. T. L. (2008). Technology for school-based assessment and assessment for learning: development principles from New Zealand. J. Educ. Technol. Syst. 36, 189–201. doi: 10.2190/ET.36.2.g

Inhelder, B., and Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence . New York: Basic Books.

Kingston, N. M., Alonzo, A. C., Long, H., and Swinburne Romine, R. (2022). Editorial: the use of organized learning models in assessment. Front. Educ. 7:1009446. doi: 10.3389/feduc.2022.1009446

Kline, R. B. (2020). “Psychometrics,” in SAGE Research Methods Foundations . eds. P. Atkinson, S. Delamont, A. Cernat, J. W. Sakshaug, and R. A. Williams (London: Sage).

van der Linden, W. J., and Glas, C. A. W. (2000). Computerized Adaptive Testing: Theory and Practice . London: Kluwer Academic Publishers.

Lingard, B., and Lewis, S. (2016). “Globalization of the Anglo-American approach to top-down, test-based educational accountability,” in Handbook of Human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

Lundgren, E., and Eklöf, H. (2020). Within-item response processes as indicators of test-taking effort and motivation. Educ. Res. Eval. 26, 275–301. doi: 10.1080/13803611.2021.1963940

Marsh, H. W., Hau, K.-T., Artelt, C., Baumert, J., and Peschar, J. L. (2006). OECD's brief self-report measure of educational psychology's most useful affective constructs: cross-cultural, psychometric comparisons across 25 countries. Int. J. Test. 6, 311–360. doi: 10.1207/s15327574ijt0604_1

McMillan, J. H. (2016). “Section discussion: student perceptions of assessment,” in Handbook of Human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

Meissel, K., Meyer, F., Yao, E. S., and Rubie-Davies, C. M. (2017). Subjectivity of teacher judgments: exploring student characteristics that influence teacher judgments of student ability. Teach. Teach. Educ. 65, 48–60. doi: 10.1016/j.tate.2017.02.021

Murdock, T. B., Stephens, J. M., and Groteweil, M. M. (2016). “Student dishonesty in the face of assessment: who, why, and what we can do about it,” in Handbook of Human and Social Conditions in assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

National Research Council (2001). Knowing What Students Know: The Science and Design of Educational Assessment . Washington, DC: The National Academies Press.

Nichols, S. L., and Harris, L. R. (2016). “Accountability assessment’s effects on teachers and schools,” in Handbook of human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

Panadero, E. (2016). “Is it safe? Social, interpersonal, and human effects of peer assessment: a review and future directions,” in Handbook of Human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

Pearl, J., and Mackenzie, D. (2018). The Book of why: The New Science of Cause and Effect . New York: Hachette Book Group.

Resnick, L. B., and Resnick, D. P. (1989). Assessing the Thinking Curriculum: New Tools for Educational Reform . Washington, DC: National Commission on Testing and Public Policy.

Rogoff, B. (1991). “The joint socialization of development by young children and adults,” in Learning to Think: Child Development in Social Context 2 . eds. P. Light, S. Sheldon, and M. Woodhead (London: Routledge).

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instr. Sci. 18, 119–144. doi: 10.1007/BF00117714

Scriven, M. (1967). “The methodology of evaluation,” in Perspectives of Curriculum Evaluation . eds. R. W. Tyler, R. M. Gagne, and M. Scriven (Chicago, IL: Rand McNally).

Shin, J., Guo, Q., and Gierl, M. J. (2021). “Automated essay scoring using deep learning algorithms,” in Handbook of Research on Modern Educational Technologies, Applications, and Management . ed. M. Khosrow-Pour (Hershey, PA: IGI Global).

Stobart, G. (2005). Fairness in multicultural assessment systems. Assess. Educ. Principles Policy Pract. 12, 275–287. doi: 10.1080/09695940500337249

Stobart, G. (2006). “The validity of formative assessment,” in Assessment and Learning . ed. J. Gardner (London: Sage).

Teltemann, J., and Klieme, E. (2016). “The impact of international testing projects on policy and practice,” in Handbook of Human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

Thorndike, E. L. (1924). Measurement of intelligence. Psychol. Rev. 31, 219–252. doi: 10.1037/h0073975

Von Davier, A. A., Deonovic, B., Yudelson, M., Polyak, S. T., and Woo, A. (2019). Computational psychometrics approach to holistic learning and assessment systems. Front. Educ. 4:69. doi: 10.3389/feduc.2019.00069

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes . Cambridge, MA: Harvard University Press.

Weiner, B. (1985). An Attributional theory of achievement motivation and emotion. Psychol. Rev. 92, 548–573. doi: 10.1037/0033-295X.92.4.548

Wherrett, S. (2004). The SATS story. The Guardian, 24 August.

Wise, S. L. (2019). Controlling construct-irrelevant factors through computer-based testing: disengagement, anxiety, & cheating. Educ. Inq. 10, 21–33. doi: 10.1080/20004508.2018.1490127

Wise, S. L., and DeMars, C. E. (2005). Low examinee effort in low-stakes assessment: problems and potential solutions. Educ. Assess. 10, 1–17. doi: 10.1207/s15326977ea1001_1

Wise, S. L., and Smith, L. F. (2016). “The validity of assessment when students don’t give good effort,” in Handbook of Human and Social Conditions in Assessment . eds. G. T. L. Brown and L. R. Harris (New York: Routledge).

Yan, Z., and Brown, G. T. L. (2017). A cyclical self-assessment process: towards a model of how students engage in self-assessment. Assess. Eval. High. Educ. 42, 1247–1262. doi: 10.1080/02602938.2016.1260091

Zhao, A., Brown, G. T. L., and Meissel, K. (2020). Manipulating the consequences of tests: how Shanghai teens react to different consequences. Educ. Res. Eval. 26, 221–251. doi: 10.1080/13803611.2021.1963938

Zhao, A., Brown, G. T. L., and Meissel, K. (2022). New Zealand students’ test-taking motivation: an experimental study examining the effects of stakes. Assess. Educ. 29, 1–25. doi: 10.1080/0969594X.2022.2101043

Zumbo, B. D. (2015). Consequences, side effects and the ecology of testing: keys to considering assessment in vivo. Plenary Address to the 2015 Annual Conference of the Association for Educational Assessment—Europe (AEA-E). Glasgow, Scotland.

Zumbo, B. D., and Chan, E. K. H. (2014). Validity and Validation in Social, Behavioral, and Health Sciences . Cham, CH: Springer Press.

Zumbo, B. D., and Forer, B. (2011). “Testing and measurement from a multilevel view: psychometrics and validation,” in High Stakes Testing in Education-Science and Practice in K-12 Settings . eds. J. A. Bovaird, K. F. Geisinger, and C. W. Buckendahl (Washington: American Psychological Association Press).

Zumbo, B. D., Maddox, B., and Care, N. M. (in press). Process and product in computer-based assessments: clearing the ground for a holistic validity framework. Eur. J. Psychol. Assess.

Keywords: assessment, testing, technology, psychometrics, psychology, curriculum, classroom

Citation: Brown GTL (2022) The past, present and future of educational assessment: A transdisciplinary perspective. Front. Educ . 7:1060633. doi: 10.3389/feduc.2022.1060633

Received: 03 October 2022; Accepted: 25 October 2022; Published: 11 November 2022.

Copyright © 2022 Brown. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Gavin T. L. Brown, [email protected] ; [email protected]

This article is part of the Research Topic

Horizons in Education 2022

  • ISSN: 2196-0739 (electronic)
  • Open access
  • Published: 14 May 2024

Protocol for a scoping review study on learning plan use in undergraduate medical education

  • Anna Romanova   ORCID: orcid.org/0000-0003-1118-1604 1 ,
  • Claire Touchie 1 ,
  • Sydney Ruller 2 ,
  • Victoria Cole 3 &
  • Susan Humphrey-Murto 4  

Systematic Reviews volume  13 , Article number:  131 ( 2024 ) Cite this article

120 Accesses

Metrics details

The current paradigm of competency-based medical education and learner-centredness requires learners to take an active role in their training. However, deliberate and planned continual assessment and performance improvement is hindered by the fragmented nature of many medical training programs. Attempts to bridge this continuity gap between supervision and feedback through learner handover have been controversial. Learning plans are an alternate educational tool that helps trainees identify their learning needs and facilitate longitudinal assessment by providing supervisors with a roadmap of their goals. Informed by self-regulated learning theory, learning plans may be the answer to track trainees’ progress along their learning trajectory. The purpose of this study is to summarise the literature regarding learning plan use specifically in undergraduate medical education and explore the student’s role in all stages of learning plan development and implementation.

Following Arksey and O’Malley’s framework, a scoping review will be conducted to explore the use of learning plans in undergraduate medical education. Literature searches will be conducted using multiple databases by a librarian with expertise in scoping reviews. Through an iterative process, inclusion and exclusion criteria will be developed and a data extraction form refined. Data will be analysed using quantitative and qualitative content analyses.

By summarising the literature on learning plan use in undergraduate medical education, this study aims to better understand how to support self-regulated learning in undergraduate medical education. The results from this project will inform future scholarly work in competency-based medical education at the undergraduate level and have implications for improving feedback and supporting learners at all levels of competence.

Scoping review registration:

Open Science Framework osf.io/wvzbx.

Peer Review reports

Competency-based medical education (CBME) has transformed the approach to medical education to focus on demonstration of acquired competencies rather than time-based completion of rotations [ 1 ]. As a result, undergraduate and graduate medical training programs worldwide have adopted outcomes-based assessments in the form of entrustable professional activities (EPAs) comprised of competencies to be met [ 2 ]. These assessments are completed longitudinally by multiple different evaluators to generate an overall impression of a learner’s competency.

In CBME, trainees will progress along their learning trajectory at individual speeds, and some may excel while others struggle to achieve the required knowledge, skills or attitudes. Therefore, deliberate and planned continual assessment and performance improvement are required. However, due to the fragmented nature of many medical training programs, where learners rotate through different rotations and work with many supervisors, longitudinal observation is similarly fragmented. This makes it difficult to determine where trainees are on their learning trajectories and can affect the quality of feedback provided to them, which is a known major influence on academic achievement [ 3 ]. As a result, struggling learners may not be identified until late in their training and the growth of high-performing learners may be stifled [ 4 , 5 , 6 ].

Bridging this continuity gap between supervision and feedback through some form of learner handover or forward feeding has been debated since the 1970s and continues to this day [ 5 , 7 , 8 , 9 , 10 , 11 ]. The goal of learner handover is to improve trainee assessment and feedback by sharing their performance and learning needs between supervisors or across rotations. However, several concerns have been raised about this approach, including that it could inappropriately bias subsequent assessments of the learner’s abilities [ 9 , 11 , 12 ]. A different approach to keeping track of trainees’ learning goals and progress along their learning trajectories is required. Learning plans (LPs) informed by self-regulated learning (SRL) theory may offer such an approach.

SRL has been defined as a cyclical process where learners actively control their thoughts, actions and motivation to achieve their goals [ 13 ]. Several models of SRL exist, but all entail that the trainee is responsible for setting, planning, executing, monitoring and reflecting on their learning goals [ 13 ]. According to Zimmerman’s SRL model, this process occurs in three stages: a forethought phase before an activity, a performance phase during an activity and a self-reflection phase after an activity [ 13 ]. Since each trainee leads their own learning process and has an individual trajectory towards competence, this theory relates well to the CBME paradigm, which is grounded in learner-centredness [ 1 ]. However, we know that medical students and residents have difficulty identifying their own learning goals and therefore need guidance to effectively partake in SRL [ 14 , 15 , 16 , 17 ]. Motivation has also emerged as a key component of SRL, and numerous studies have explored factors that influence student engagement in learning [ 18 , 19 ]. In addition to meeting their basic psychological needs of autonomy, relatedness and competence, perceived learning relevance through meaningful learning activities has been shown to increase trainee engagement in their learning [ 19 ].

LPs are a well-known tool across many educational fields including CBME that can provide trainees with meaningful learning activities since they help them direct their own learning goals in a guided fashion [ 20 ]. Also known as personal learning plans, learning contracts, personal action plans, personal development plans, and learning goals, LPs are documents that outline the learner’s roadmap to achieve their learning goals. They require the learner to self-identify what they need to learn and why, how they are going to do it, how they will know when they are finished, define the timeframe for goal achievement and assess the impact of their learning [ 20 ]. In so doing, LPs give more autonomy to the learner and facilitate objective and targeted feedback from supervisors. This approach has been described as “most congruent with the assumptions we make about adults as learners” [ 21 ].

LP use has been explored across various clinical settings and at all levels of medical education; however, most of the experience lies in postgraduate medical education [ 22 ]. Medical students are a unique learner population with learning needs that appear to be very well suited to LPs for two main reasons. First, their education is often divided between classroom and clinical settings. During clinical training, students need to be more independent in setting learning goals to meet desired competencies, as their education is no longer outlined for them in a detailed fashion by the medical school curriculum [ 23 ]. SRL in the workplace is also different from that in the classroom due to additional complexities of clinical care that can impact students’ ability to self-regulate their learning [ 24 ]. Second, although most medical trainees have difficulty with goal setting, medical students in particular need more guidance than residents due to their relative lack of experience upon which they can build within the SRL framework [ 25 ]. LPs can therefore provide much-needed structure to their learning but should be guided by an experienced tutor to be effective [ 15 , 24 ].

LPs fit well within the learner-centred educational framework of CBME by helping trainees identify their learning needs and facilitating longitudinal assessment by providing supervisors with a roadmap of their goals. In so doing, they can address current issues with learner handover as well as the identification and remediation of struggling learners. Moreover, they have the potential to help trainees develop lifelong skills in continuing professional development after graduation, which is required by many medical licensing bodies.

An initial search of the JBI Database, Cochrane Database, MEDLINE (PubMed) and Google Scholar conducted in July–August 2022 revealed a paucity of research on LP use in undergraduate medical education (UGME). A related systematic review by van Houten-Schat et al. [ 24 ] on SRL in the clinical setting identified three interventions used by medical students and residents in SRL: coaching, LPs and supportive tools. However, only a couple of the included studies looked specifically at medical students’ use of LPs, so this remains an area in need of more exploration. A scoping review would provide an excellent starting point to map the body of literature on this topic.

The objective of this scoping review will therefore be to explore LP use in UGME. In doing so, it will address a gap in knowledge and help determine additional areas for research.

This study will follow Arksey and O’Malley’s [ 26 ] five-step framework for scoping review methodology. It will not include the optional sixth step, which entails stakeholder consultation, as relevant stakeholders will be intentionally included in the research team (a member of UGME leadership, a medical student and a first-year resident).

Step 1—Identifying the research question

The overarching purpose of this study is to “explore the use of LPs in UGME”. More specifically we seek to achieve the following:

Summarise the literature regarding the use of LPs in UGME (including context, students targeted, frameworks used)

Explore the role of the student in all stages of the LP development and implementation

Determine existing research gaps

Step 2—Identifying relevant studies

An experienced health sciences librarian (VC) will conduct all searches and develop the initial search strategy. The preliminary search strategy is shown in Appendix A (see Additional file 2). Articles will be included if they meet the following criteria [ 27 ]:

Participants

Medical students enrolled at a medical school at the undergraduate level.

Concept

Any use of LPs by medical students. LPs are defined as a document, usually presented in a table format, that outlines the learner’s roadmap to achieve their learning goals [ 20 ].

Context

Any stage of UGME in any geographic setting.

Types of evidence sources

We will search existing published and unpublished (grey) literature. This may include research studies, reviews, or expert opinion pieces.

Search strategy

With the assistance of an experienced librarian (VC), a pilot search will be conducted to inform the final search strategy. A search will be conducted in the following electronic databases: MEDLINE, Embase, Education Source, APA PsycInfo and Web of Science. The search terms will be developed in consultation with the research team and librarian. The search strategy will proceed according to the JBI Manual for Evidence Synthesis three-step search strategy for reviews [ 27 ]. First, we will conduct a limited search in two appropriate online databases and analyse text words from the title, abstracts and index terms of relevant papers. Next, we will conduct a second search using all identified key words in all databases. Third, we will review reference lists of all included studies to identify further relevant studies to include in the review. We will also contact the authors of relevant papers for further information if required. This will be an iterative process as the research team becomes more familiar with the literature and will be guided by the librarian. Any modifications to the search strategy as it evolves will be described in the scoping review report. As a measure of rigour, the search strategy will be peer-reviewed by another librarian using the PRESS checklist [ 28 ]. No language or date limits will be applied.
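The block-building logic of the three-step search can be sketched in a few lines: synonyms within one concept are OR-ed together, and the concept blocks are then AND-ed. The term lists below are hypothetical illustrations only; the actual strategy is developed by the librarian and appears in Appendix A (Additional file 2).

```python
# Illustrative sketch of boolean query assembly for a database search.
# Term lists are hypothetical examples, NOT the protocol's actual strategy.
def build_query(concept_blocks):
    """OR synonyms within each concept block, then AND the blocks together."""
    ored = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")" for terms in concept_blocks]
    return " AND ".join(ored)

lp_terms = ["learning plan", "learning contract", "personal development plan"]
ugme_terms = ["undergraduate medical education", "medical student"]

query = build_query([lp_terms, ugme_terms])
print(query)
```

The same block structure lets the team add or drop synonyms iteratively without rebuilding the whole string by hand.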

Step 3—Study selection

The screening process will consist of a two-step approach: screening of titles and abstracts, followed by full-text review of the articles that meet the inclusion criteria. All screening will be done by two members of the research team, and any disagreements will be resolved by an independent third member of the team. Based on preliminary inclusion criteria, the whole research team will first pilot the screening process by reviewing a random sample of 25 titles/abstracts. The search strategy, eligibility criteria and study objectives will be refined in an iterative process. We anticipate several meetings as the topic is not well described in the literature. A flowchart of the review process will be generated. Any modifications to the study selection process will be described in the scoping review report. Papers will be excluded if a full text is not available. The search results will be managed using Covidence software.
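The dual-review decision rule described above reduces to a small function: a record passes or fails when the two reviewers agree, and a third team member's vote decides otherwise. A minimal sketch (the vote labels are illustrative):

```python
# Sketch of the two-reviewer screening rule with third-party tie-breaking.
# Votes are hypothetical labels: "include" or "exclude".
def screen(reviewer1, reviewer2, tiebreaker):
    """Return the screening decision for one record."""
    if reviewer1 == reviewer2:
        return reviewer1          # reviewers agree: decision stands
    return tiebreaker             # disagreement: independent third member decides

decisions = [
    screen("include", "include", None),       # agreement, no tiebreak needed
    screen("include", "exclude", "include"),  # conflict resolved by third member
]
```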

Step 4—Charting the data

A preliminary data extraction tool is shown in Appendix B (see Additional file 3). Data will be extracted into Excel and will include demographic information and specific details about the population, concept, context, study methods and outcomes as they relate to the scoping review objectives. The whole research team will pilot the data extraction tool on ten articles selected for full-text review. Through an iterative process, the final data extraction form will be refined. Subsequently, two members of the team will independently extract data from all articles included for full-text review using this tool. Charting disagreements will be resolved by the principal and senior investigators. Google Translate will be used for any included articles that are not in the English language.
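As an illustration of what one charted row might look like, the population/concept/context fields named above can be modelled as a simple record. The field names and values here are hypothetical; the real form is in Appendix B and will be refined iteratively by the team.

```python
# Hypothetical sketch of one row of the data extraction form.
# Field names follow the population/concept/context/methods/outcomes
# structure described in the protocol; values are invented examples.
from dataclasses import dataclass, asdict

@dataclass
class ExtractionRecord:
    citation: str
    country: str
    population: str   # e.g. which students were targeted
    concept: str      # how the LP was used
    context: str      # classroom vs clinical setting
    methods: str
    outcomes: str

row = ExtractionRecord(
    citation="Author et al., 2020",
    country="Canada",
    population="third-year medical students",
    concept="tutor-guided learning plans",
    context="clinical clerkship",
    methods="mixed methods",
    outcomes="goal attainment",
)
# asdict(row) yields a plain dict that can be written out as a spreadsheet row
# (the protocol specifies extraction into Excel).
```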

Step 5—Collating, summarising and reporting the results

Quantitative and qualitative analyses will be used to summarise the results. Quantitative analysis will capture descriptive statistics with details about the population, concept, context, study methods and outcomes being examined in this scoping review. Qualitative content analysis will enable interpretation of text data through the systematic classification process of coding and identifying themes and patterns [ 29 ]. Several team meetings will be held to review potential themes to ensure an accurate representation of the data. The PRISMA Extension for Scoping Reviews (PRISMA-ScR) will be used to guide the reporting of review findings [ 30 ]. Data will be presented in tables and/or diagrams as applicable. A descriptive summary will explain the presented results and how they relate to the scoping review objectives.
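The quantitative, descriptive-statistics portion of this plan amounts to frequency counts over the extracted fields, which can be sketched as follows (field names and values are hypothetical):

```python
# Sketch of the planned descriptive summary: frequency counts over
# charted fields. Records and field names are invented examples.
from collections import Counter

records = [
    {"context": "clinical", "theme": "goal setting"},
    {"context": "classroom", "theme": "goal setting"},
    {"context": "clinical", "theme": "feedback"},
]

context_counts = Counter(r["context"] for r in records)
theme_counts = Counter(r["theme"] for r in records)

print(context_counts)  # Counter({'clinical': 2, 'classroom': 1})
```

Tallies like these would then feed the tables and diagrams described above, while the qualitative coding of themes remains an interpretive, team-based step.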

By summarising the literature on LP use in UGME, this study will contribute to a better understanding of how to support SRL amongst medical students. The results from this project will also inform future scholarly work in CBME at the undergraduate level and have implications for improving feedback as well as supporting learners at all levels of competence. In doing so, this study may have practical applications by informing learning plan incorporation into CBME-based curricula.

We do not anticipate any practical or operational issues at this time. We assembled a team with the necessary expertise and tools to complete this project.

Availability of data and materials

All data generated or analysed during this study will be included in the published scoping review article.

Abbreviations

  • CBME: Competency-based medical education
  • EPA: Entrustable professional activity
  • LP: Learning plan
  • SRL: Self-regulated learning
  • UGME: Undergraduate medical education

1. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.

2. Shorey S, Lau TC, Lau ST, Ang E. Entrustable professional activities in health care education: a scoping review. Med Educ. 2019;53(8):766–77.

3. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.

4. Dudek NL, Marks MB, Regehr G. Failure to fail: the perspectives of clinical supervisors. Acad Med. 2005;80(10 Suppl):S84–7.

5. Warm EJ, Englander R, Pereira A, Barach P. Improving learner handovers in medical education. Acad Med. 2017;92(7):927–31.

6. Spooner M, Duane C, Uygur J, et al. Self-regulatory learning theory as a lens on how undergraduate and postgraduate learners respond to feedback: a BEME scoping review: BEME Guide No. 66. Med Teach. 2022;44(1):3–18.

7. Frellsen SL, Baker EA, Papp KK, Durning SJ. Medical school policies regarding struggling medical students during the internal medicine clerkships: results of a national survey. Acad Med. 2008;83(9):876–81.

8. Humphrey-Murto S, LeBlanc A, Touchie C, et al. The influence of prior performance information on ratings of current performance and implications for learner handover: a scoping review. Acad Med. 2019;94(7):1050–7.

9. Morgan HK, Mejicano GC, Skochelak S, et al. A responsible educational handover: improving communication to improve learning. Acad Med. 2020;95(2):194–9.

10. Dory V, Danoff D, Plotnick LH, et al. Does educational handover influence subsequent assessment? Acad Med. 2021;96(1):118–25.

11. Humphrey-Murto S, Lingard L, Varpio L, et al. Learner handover: who is it really for? Acad Med. 2021;96(4):592–8.

12. Shaw T, Wood TJ, Touchie T, Pugh D, Humphrey-Murto S. How biased are you? The effect of prior performance information on attending physician ratings and implications for learner handover. Adv Health Sci Educ Theory Pract. 2021;26(1):199–214.

13. Artino AR, Brydges R, Gruppen LD. Chapter 14: Self-regulated learning in health professional education: theoretical perspectives and research methods. In: Cleland J, Durning SJ, editors. Researching medical education. 1st ed. John Wiley & Sons; 2015. p. 155–66.

14. Cleland J, Arnold R, Chesser A. Failing finals is often a surprise for the student but not the teacher: identifying difficulties and supporting students with academic difficulties. Med Teach. 2005;27(6):504–8.

15. Reed S, Lockspeiser TM, Burke A, et al. Practical suggestions for the creation and use of meaningful learning goals in graduate medical education. Acad Pediatr. 2016;16(1):20–4.

16. Wolff M, Stojan J, Cranford J, et al. The impact of informed self-assessment on the development of medical students’ learning goals. Med Teach. 2018;40(3):296–301.

17. Sawatsky AP, Halvorsen AJ, Daniels PR, et al. Characteristics and quality of rotation-specific resident learning goals: a prospective study. Med Educ Online. 2020;25(1):1714198.

18. Pintrich PR. Chapter 14: The role of goal orientation in self-regulated learning. In: Boekaerts M, Pintrich PR, Zeidner M, editors. Handbook of self-regulation. 1st ed. Academic Press; 2000. p. 451–502.

19. Kassab SE, El-Sayed W, Hamdy H. Student engagement in undergraduate medical education: a scoping review. Med Educ. 2022;56(7):703–15.

20. Challis M. AMEE medical education guide No. 19: Personal learning plans. Med Teach. 2000;22(3):225–36.

21. Knowles MS. Using learning contracts. 1st ed. San Francisco: Jossey-Bass; 1986.

22. Parsell G, Bligh J. Contract learning, clinical learning and clinicians. Postgrad Med J. 1996;72(847):284–9.

23. Teunissen PW, Scheele F, Scherpbier AJJA, et al. How residents learn: qualitative evidence for the pivotal role of clinical activities. Med Educ. 2007;41(8):763–70.

24. van Houten-Schat MA, Berkhout JJ, van Dijk N, Endedijk MD, Jaarsma ADC, Diemers AD. Self-regulated learning in the clinical context: a systematic review. Med Educ. 2018;52(10):1008–15.

25. Taylor DCM, Hamdy H. Adult learning theories: implications for learning and teaching in medical education: AMEE Guide No. 83. Med Teach. 2013;35(11):e1561–72.

26. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

27. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping reviews. In: Aromataris E, Munn Z, editors. JBI manual for evidence synthesis. JBI; 2020. https://synthesismanual.jbi.global. Accessed 30 Aug 2022.

28. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

29. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

30. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

31. Venables M, Larocque A, Sikora L, Archibald D, Grudniewicz A. Understanding indigenous health education and exploring indigenous anti-racism approaches in undergraduate medical education: a scoping review protocol. OSF; 2022. https://osf.io/umwgr/. Accessed 26 Oct 2022.


Acknowledgements

Not applicable.

Funding

This study will be supported through grants from the Department of Medicine at the Ottawa Hospital and the University of Ottawa. The funding bodies had no role in the study design and will not have any role in the collection, analysis and interpretation of data or writing of the manuscript.

Author information

Authors and Affiliations

The Ottawa Hospital – General Campus, 501 Smyth Rd, PO Box 209, Ottawa, ON, K1H 8L6, Canada

Anna Romanova & Claire Touchie

The Ottawa Hospital Research Institute, Ottawa, Canada

Sydney Ruller

The University of Ottawa, Ottawa, Canada

Victoria Cole

The Ottawa Hospital – Riverside Campus, Ottawa, Canada

Susan Humphrey-Murto


Contributions

AR designed and drafted the protocol. CT and SH contributed to the refinement of the research question, study methods and editing of the manuscript. VC designed the initial search strategy. All authors reviewed the manuscript for final approval. The review guarantors are CT and SH. The corresponding author is AR.

Authors’ information

AR is a clinician teacher and Assistant Professor with the Division of General Internal Medicine at the University of Ottawa. She is also the Associate Director for the internal medicine clerkship rotation at the General campus of the Ottawa Hospital.

CT is a Professor of Medicine with the Divisions of General Internal Medicine and Infectious Diseases at the University of Ottawa. She is also a member of the UGME Competence Committee at the University of Ottawa and an advisor for the development of a new school of medicine at Toronto Metropolitan University.

SH is an Associate Professor with the Department of Medicine at the University of Ottawa and holds a Tier 2 Research Chair in Medical Education. She is also the Interim Director for the Research Support Unit within the Department of Innovation in Medical Education at the University of Ottawa.

CT and SH have extensive experience with medical education research and have numerous publications in this field.

SR is a Research Assistant with the Division of General Internal Medicine at the Ottawa Hospital Research Institute.

VC is a Health Sciences Research Librarian at the University of Ottawa.

SR and VC have extensive experience in systematic and scoping reviews.

Corresponding author

Correspondence to Anna Romanova .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: PRISMA-P 2015 checklist.

Additional file 2: Appendix A. Preliminary search strategy [ 31 ].

Additional file 3: Appendix B. Preliminary data extraction tool.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Romanova, A., Touchie, C., Ruller, S. et al. Protocol for a scoping review study on learning plan use in undergraduate medical education. Syst Rev 13 , 131 (2024). https://doi.org/10.1186/s13643-024-02553-w


Received : 29 November 2022

Accepted : 03 May 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s13643-024-02553-w


Systematic Reviews

ISSN: 2046-4053


educational assessment journal

educational assessment journal

The Unique Burial of a Child of Early Scythian Time at the Cemetery of Saryg-Bulun (Tuva)

<< Previous page

Pages:  379-406

In 1988, the Tuvan Archaeological Expedition (led by M. E. Kilunovskaya and V. A. Semenov) discovered a unique burial of the early Iron Age at Saryg-Bulun in Central Tuva. There are two burial mounds of the Aldy-Bel culture dated by 7th century BC. Within the barrows, which adjoined one another, forming a figure-of-eight, there were discovered 7 burials, from which a representative collection of artifacts was recovered. Burial 5 was the most unique, it was found in a coffin made of a larch trunk, with a tightly closed lid. Due to the preservative properties of larch and lack of air access, the coffin contained a well-preserved mummy of a child with an accompanying set of grave goods. The interred individual retained the skin on his face and had a leather headdress painted with red pigment and a coat, sewn from jerboa fur. The coat was belted with a leather belt with bronze ornaments and buckles. Besides that, a leather quiver with arrows with the shafts decorated with painted ornaments, fully preserved battle pick and a bow were buried in the coffin. Unexpectedly, the full-genomic analysis, showed that the individual was female. This fact opens a new aspect in the study of the social history of the Scythian society and perhaps brings us back to the myth of the Amazons, discussed by Herodotus. Of course, this discovery is unique in its preservation for the Scythian culture of Tuva and requires careful study and conservation.

Keywords: Tuva, Early Iron Age, early Scythian period, Aldy-Bel culture, barrow, burial in the coffin, mummy, full genome sequencing, aDNA

Information about authors: Marina Kilunovskaya (Saint Petersburg, Russian Federation). Candidate of Historical Sciences. Institute for the History of Material Culture of the Russian Academy of Sciences. Dvortsovaya Emb., 18, Saint Petersburg, 191186, Russian Federation E-mail: [email protected] Vladimir Semenov (Saint Petersburg, Russian Federation). Candidate of Historical Sciences. Institute for the History of Material Culture of the Russian Academy of Sciences. Dvortsovaya Emb., 18, Saint Petersburg, 191186, Russian Federation E-mail: [email protected] Varvara Busova  (Moscow, Russian Federation).  (Saint Petersburg, Russian Federation). Institute for the History of Material Culture of the Russian Academy of Sciences.  Dvortsovaya Emb., 18, Saint Petersburg, 191186, Russian Federation E-mail:  [email protected] Kharis Mustafin  (Moscow, Russian Federation). Candidate of Technical Sciences. Moscow Institute of Physics and Technology.  Institutsky Lane, 9, Dolgoprudny, 141701, Moscow Oblast, Russian Federation E-mail:  [email protected] Irina Alborova  (Moscow, Russian Federation). Candidate of Biological Sciences. Moscow Institute of Physics and Technology.  Institutsky Lane, 9, Dolgoprudny, 141701, Moscow Oblast, Russian Federation E-mail:  [email protected] Alina Matzvai  (Moscow, Russian Federation). Moscow Institute of Physics and Technology.  Institutsky Lane, 9, Dolgoprudny, 141701, Moscow Oblast, Russian Federation E-mail:  [email protected]

Shopping Cart Items: 0 Cart Total: 0,00 € place your order

Price pdf version

student - 2,75 € individual - 3,00 € institutional - 7,00 €

We accept

Copyright В© 1999-2022. Stratum Publishing House

2 Westchester high schools to help NYS pilot new ways to assess what students know

educational assessment journal

Two Westchester high schools will participate in a state pilot program aimed at helping schools shift toward performance-based assessments, which evaluate what students know by having them perform tasks or create something.

The state Education Department announced Hendrick Hudson High School and Port Chester High School as two of 23 New York schools that will take part. The state intends to study how to help schools make the shift away from traditional forms of testing students.

"Pilot schools will develop best practices that go beyond traditional teaching and assessment models and prepare students for success post-graduation," Chancellor Lester Young said in a statement earlier this month.

K-12 education in New York is headed away from its reliance on traditional tests and toward alternative ways of assessing what students know. In November, a commission tasked with reimagining the state's requirements to receive a high school diploma recommended the state allow for more kinds of assessment , including performance-based assessments, capstone projects and experiential learning.

But it's unclear how long such a shift will take. New York schools' assessments of students to determine graduation readiness have largely revolved around its Regents exams, which date back to the 1800s.

Three approaches to assessment that will be piloted

Starting in the fall, pilot schools will begin to shift some of their courses to performance-based approaches to assessing students. During the 2025-26 school year, schools plan to expand that approach to more courses. By the 2026-27 school year, pilot schools will be working toward implementing the approach throughout their schools.

Results and recommendations from the pilot aren't expected until the 2026-27 school year.

The pilot will focus on three approaches:

  • In career and technical education, students will learn through internships and work settings.
  • Some classes will have students learn by asking questions and developing critical thinking. Educators refer to this approach as "inquiry-based learning." The state Education Department gave two examples of such programs: the International Baccalaureate program and Big Picture Learning, both of which develop "learner profiles" for students and incorporate them into their approaches. IB's learner profiles help students develop characteristics like being open-minded, reflective and principled, while Big Picture Learning's learner profiles center around developing individualized learning plans and assessments for students.
  • Some classes will have students learning through projects, with assessments based on those projects, such as a final presentation, product, or even an event.

Port Chester Superintendent Aurelia Henriquez said in a statement the district was eager to collaborate with other schools in the pilot and that the district was particularly interested in project-based learning and performance-based assessment tasks.

Such tasks could include completing a science experiment , writing a research paper in history or designing something using engineering and math.

Hendrick Hudson High School will focus on career and technical education, Lauren Scollins, Hendrick Hudson High School's principal, said in an email.

“As a district, we hope to use this experience as an opportunity to redesign teaching and learning," Scollins said. "We are also excited about the opportunity to collaborate with state-wide professionals who wish to adapt their practice to teach students, individually and collectively, rather than to a test."

To help students explore all their options for after high school, Hendrick Hudson plans on introducing "an expanded database of summer opportunities; lunch career series; alumni engagement; career days and fairs; job readiness learning experiences; and field trips," Scollins said.

Contact Diana Dombrowski at [email protected]. Follow her on Twitter at  @domdomdiana .

Rusmania

  • Yekaterinburg
  • Novosibirsk
  • Vladivostok

educational assessment journal

  • Tours to Russia
  • Practicalities
  • Russia in Lists
Rusmania • Deep into Russia

Out of the Centre

Savvino-storozhevsky monastery and museum.

Savvino-Storozhevsky Monastery and Museum

Zvenigorod's most famous sight is the Savvino-Storozhevsky Monastery, which was founded in 1398 by the monk Savva from the Troitse-Sergieva Lavra, at the invitation and with the support of Prince Yury Dmitrievich of Zvenigorod. Savva was later canonised as St Sabbas (Savva) of Storozhev. The monastery late flourished under the reign of Tsar Alexis, who chose the monastery as his family church and often went on pilgrimage there and made lots of donations to it. Most of the monastery’s buildings date from this time. The monastery is heavily fortified with thick walls and six towers, the most impressive of which is the Krasny Tower which also serves as the eastern entrance. The monastery was closed in 1918 and only reopened in 1995. In 1998 Patriarch Alexius II took part in a service to return the relics of St Sabbas to the monastery. Today the monastery has the status of a stauropegic monastery, which is second in status to a lavra. In addition to being a working monastery, it also holds the Zvenigorod Historical, Architectural and Art Museum.

Belfry and Neighbouring Churches


Located near the main entrance is the monastery's belfry, which is perhaps the calling card of the monastery due to its uniqueness. It was built in the 1650s, and the St Sergius of Radonezh's Church was opened on the middle tier in the mid-17th century, although it was originally dedicated to the Trinity. The belfry's 35-tonne Great Blagovestny Bell fell in 1941 and was only restored and returned in 2003. Attached to the belfry is a large refectory and the Transfiguration Church, both of which were built on the orders of Tsar Alexis in the 1650s.


To the left of the belfry is another, smaller, refectory which is attached to the Trinity Gate-Church, which was also constructed in the 1650s on the orders of Tsar Alexis who made it his own family church. The church is elaborately decorated with colourful trims and underneath the archway is a beautiful 19th century fresco.

Nativity of Virgin Mary Cathedral


The Nativity of Virgin Mary Cathedral is the oldest building in the monastery and among the oldest buildings in the Moscow Region. It was built between 1404 and 1405, during the lifetime of St Sabbas, using the funds of Prince Yury of Zvenigorod. The white-stone cathedral is a standard four-pillar design with a single golden dome. After his death, St Sabbas was interred in the cathedral and a new altar dedicated to him was added.


Under the reign of Tsar Alexis the cathedral was decorated with frescoes by Stepan Ryazanets, some of which remain today. Tsar Alexis also presented the cathedral with a five-tier iconostasis, of which the top row of icons has been preserved.

Tsaritsa's Chambers


The Nativity of Virgin Mary Cathedral is located between the Tsaritsa's Chambers on the left and the Palace of Tsar Alexis on the right. The Tsaritsa's Chambers were built in the mid-17th century for the wife of Tsar Alexis, Tsaritsa Maria Ilinichna Miloslavskaya. The design of the building is influenced by the ancient Russian architectural style. It is prettier than the Tsar's chambers opposite, being red in colour with elaborately decorated window frames and entrance.


At present the Tsaritsa's Chambers houses the Zvenigorod Historical, Architectural and Art Museum. Among its displays is an accurate recreation of the interior of a noble lady's chambers including furniture, decorations and a decorated tiled oven, and an exhibition on the history of Zvenigorod and the monastery.

Palace of Tsar Alexis


The Palace of Tsar Alexis was built in the 1650s and is now one of the best surviving examples of non-religious architecture of that era. It was built especially for Tsar Alexis who often visited the monastery on religious pilgrimages. Its most striking feature is its pretty row of nine chimney spouts which resemble towers.


Spatial Variations of the Activity of 137Cs and the Contents of Heavy Metals and Petroleum Products in the Polluted Soils of the City of Elektrostal

  • DEGRADATION, REHABILITATION, AND CONSERVATION OF SOILS
  • Open access
  • Published: 15 June 2022
  • Volume 55, pages 840–848 (2022)


  • D. N. Lipatov 1 ,
  • V. A. Varachenkov 1 ,
  • D. V. Manakhov 1 ,
  • M. M. Karpukhin 1 &
  • S. V. Mamikhin 1  


The levels of specific activity of 137Cs and the contents of mobile forms (1 M ammonium acetate extraction) of heavy metals (Zn, Cu, Ni, Co, Cr, Pb) and petroleum products were studied in the upper soil horizon of urban landscapes of the city of Elektrostal under conditions of local radioactive and chemical contamination. In the soils within a short radius (0–100 m) around the heavy engineering plant, the specific activity of 137Cs and the contents of mobile forms of Pb, Cu, and Zn were increased. A lognormal distribution law of 137Cs was found in the upper (0–10 cm) soil layer; five years after the radiation accident, the specific activity of 137Cs varied from 6 to 4238 Bq/kg. The coefficients of variation, ranging from 50 to 435%, increased with the degree of soil contamination in the following sequence: Co < Ni < petroleum products < Cr < 137Cs < Zn < Pb < Cu. A statistically significant direct correlation was found between the specific activity of 137Cs and the contents of mobile forms of Pb, Cu, and Zn in the upper horizon of urban soils, indicating the spatial conjugacy of local spots of radioactive and polymetallic contamination in the studied area. It was shown that the specific activity of 137Cs, as well as the contents of heavy metals and petroleum products, is reduced in the upper layer (0–10 cm) of soils disturbed in the course of decontamination, earthwork, and reclamation.
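The coefficient of variation reported in the abstract is the sample standard deviation expressed as a percentage of the mean. A minimal sketch of that statistic, using synthetic lognormally distributed activities (the `mu` and `sigma` parameters are assumptions chosen only to mimic the reported distribution law, not values fitted to the article's data):

```python
import math
import random

def coefficient_of_variation(values):
    """Coefficient of variation in percent: 100 * sample std / mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    return 100.0 * math.sqrt(var) / mean

# Hypothetical 137Cs specific activities (Bq/kg): a lognormal draw, since a
# lognormal law was reported for the upper (0-10 cm) soil layer.
random.seed(42)
activities = [math.exp(random.gauss(4.5, 1.5)) for _ in range(200)]

print(f"min = {min(activities):.0f} Bq/kg, max = {max(activities):.0f} Bq/kg")
print(f"CV = {coefficient_of_variation(activities):.0f}%")
```

For a lognormal sample like this, the CV grows rapidly with `sigma`, which is consistent with the most contaminated toxicants (Pb, Cu) showing the largest coefficients of variation.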



INTRODUCTION

Contaminants migrate and accumulate in urban ecosystems under the impact of both natural and technogenic factors. The processes of technogenic migration of 137Cs are most pronounced in radioactively contaminated territories. Urboecological studies have found that the intensity of sedimentation of aerosol particles containing radionuclides and heavy metals is determined by the types of the surfaces of roofs, walls, roads, lawns, and parks and by their position within the urban wind field [12, 26]. Traffic in cities results in significant transport of dust and associated contaminants and radionuclides [15, 24]. During decontamination measures in the areas of the Chernobyl radioactive trace, not only a decrease in the level of contamination but also the possibility of secondary radioactive contamination, through the transport of contaminated soil particles by wind or water or through the anthropogenic transfer of ground, was observed [5, 6]. Rainstorm runoff and hydrological transport of dissolved and colloidal forms of 137Cs can result in the accumulation of this radionuclide in meso- and microdepressions, where sedimentation takes place [10, 16]. Different spatial distribution patterns of 137Cs in soils of particular urban landscapes were found in the city of Ozersk near the nuclear fuel cycle works [17]. The natural character of 137Cs migration in soils of Moscow forest-parks and a decrease in its specific activity in industrial areas have been revealed [10]. Determination of the mean level and of the parameters of spatial variation of 137Cs in soils is one of the primary tasks of radioecological monitoring of cities, including both unpolluted (background) and contaminated territories.

Emissions and discharges from numerous sources of contamination can cause the accumulation of a wide range of toxicants in urban soils: heavy metals (HMs), oil products (OPs), polycyclic aromatic hydrocarbons (PAHs), and other chemical substances. Contamination of soil by several groups of toxicants is often observed in urban landscapes [20, 23] because of a common contamination source or closely related migration pathways of the different contaminants. A comprehensive analysis of the contamination of urban soils by radionuclides and heavy metals has been performed in several studies [21, 25]. The determination of possible spatial interrelationships between radioactive and chemical contamination in urban soils is an important problem in urban ecology.

A radiation accident took place at the Elektrostal heavy engineering works (EHEW) in April 2013: a high-activity source of 137Cs entered the smelting furnace, and radioactive aerosols were emitted from the aerating duct into the urban environment. The activity of the molten source was estimated at about 1000–7000 Ci [14]. The area of contamination on the territory of the plant reached 7500 m2. However, radioactive aerosols affected a much larger area around the EHEW, including Krasnaya and Pervomaiskaya streets, and reached Lenin Prospect.

A geochemical evaluation of the contamination of the upper soil horizon in the city of Elektrostal was carried out in 1989–1991. This survey revealed anomalous concentrations of tungsten, nickel, molybdenum, chromium, and other heavy metals related to the accumulation of alloying constituents and impurities of non-ferrous metals in the emissions of the steelmaking works [19].

The aim of our work was to determine the levels of specific activity of 137Cs and the concentrations of mobile forms of heavy metals (Zn, Cu, Ni, Co, Cr, and Pb) and oil products in the upper soil horizons in different urban landscapes of the city of Elektrostal under the conditions of local radioactive and chemical contamination.

Author information

Authors and Affiliations

Lomonosov Moscow State University, 119991, Moscow, Russia

D. N. Lipatov, V. A. Varachenkov, D. V. Manakhov, M. M. Karpukhin & S. V. Mamikhin


Corresponding author

Correspondence to D. N. Lipatov .

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

Translated by T. Chicheva

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Lipatov, D.N., Varachenkov, V.A., Manakhov, D.V. et al. Spatial Variations of the Activity of 137Cs and the Contents of Heavy Metals and Petroleum Products in the Polluted Soils of the City of Elektrostal. Eurasian Soil Sc. 55, 840–848 (2022). https://doi.org/10.1134/S1064229322060072


Received: 21 October 2021

Revised: 22 December 2021

Accepted: 30 December 2021

Published: 15 June 2022

Issue Date: June 2022

DOI: https://doi.org/10.1134/S1064229322060072


Keywords

  • urban soils
  • urban ecosystems
  • radiation monitoring
  • decontamination
  • Urban Technosols
