Formative Assessment of Teaching

What is formative assessment of teaching?

How do you know if your teaching is effective? How can you identify areas where your teaching can improve? What does it look like to assess teaching?

Formative Assessment

Formative assessment of teaching consists of different approaches to continuously evaluate your teaching. The insight gained from this assessment can support revising your teaching strategies, leading to better outcomes in student learning and experiences. Formative assessment can be contrasted with summative assessment, which is usually part of an evaluative decision-making process. The table below outlines some of the key differences between formative and summative assessment: 

By participating in formative assessment, instructors connect with recent developments in the space of teaching and learning, as well as incorporate new ideas into their practice. Developments may include changes in the students we serve, changes in our understanding of effective teaching, and changes in expectations of the discipline and of higher education as a whole.

Formative assessment of teaching ultimately should guide instructors towards using more effective teaching practices. What does effectiveness mean in terms of teaching?

Effectiveness in Teaching

Effective teaching can be defined as teaching that leads to the intended outcomes in student learning and experiences. In this sense, there is no single perfect teaching approach. What effective teaching looks like will depend on the stated goals for student learning and experiences. A course that aims to build student confidence in statistical analysis and a course that aims to develop student writing could use very different teaching strategies, and still both be effective at accomplishing their respective goals.

Assessing student learning and experiences is critical to determining whether teaching is truly effective in its context. This assessment can be quite complex, but it is doable. In addition to measuring the impacts of your teaching, you may also consider evaluating how well your teaching aligns with best practices for evidence-based teaching in your disciplinary and course context, or with your intended teaching approach. The table below outlines these three approaches to assessing the effectiveness of your teaching:

What are some strategies that I might try? 

There are multiple ways that instructors might begin to assess their teaching. The list below includes approaches that may be done solo, with colleagues, or with the input of students. Instructors may pursue one or more of these strategies at different points in time. With each possible strategy, we have included several examples of the strategy in practice from a variety of institutions and contexts.

Teaching Portfolios

Teaching portfolios are well-suited for formative assessment of teaching, as the portfolio format lends itself to documenting how your teaching has evolved over time. Instructors can use their teaching portfolios as a reflective practice to review past teaching experiences and consider what worked and what did not.

Teaching portfolios consist of various pieces of evidence about your teaching such as course syllabi, outlines, lesson plans, course evaluations, and more. Instructors curate these pieces of evidence into a collection, giving them the chance to highlight their own growth and focus as educators. While student input may be incorporated as part of the portfolio, instructors can contextualize and respond to student feedback, giving them the chance to tell their own teaching story from a more holistic perspective.

Teaching portfolios encourage self-reflection, especially with guided questions or rubrics to review your work. In addition, an instructor might consider sharing their entire teaching portfolio or selected materials for a single course with colleagues and engaging in a peer review discussion. 

Examples and Resources:

Teaching Portfolio - Career Center

Developing a Statement of Teaching Philosophy and Teaching Portfolio - GSI Teaching & Resource Center

Self Assessment - UCLA Center for Education, Innovation, and Learning in the Sciences

Advancing Inclusion and Anti-Racism in the College Classroom Rubric and Guide

Course Design Equity and Inclusion Rubric

Teaching Demos or Peer Observation

Teaching demonstrations or peer classroom observation provide opportunities to get feedback on your teaching practice, including communication skills or classroom management.

Teaching demonstrations may be arranged as a simulated classroom environment in front of a live audience who take notes and then deliver summarized feedback. Alternatively, demonstrations may involve recording an instructor teaching to an empty room, and this recording can be subjected to later self-review or peer review. Evaluation of teaching demos will often focus on the mechanics of teaching, especially for a lecture-based class, e.g., pacing of speech, organization of topics, and clarity of explanations.

In contrast, instructors may invite a colleague to observe an actual class session to evaluate teaching in an authentic situation. This arrangement gives the observer a better sense of how the instructor interacts with students both individually and in groups, including their approach to answering questions or facilitating participation. The colleague may take general notes on what they observe or evaluate the instructor using a teaching rubric or other structured tool.

Examples and Resources:

Peer Review of Course Instruction

Preparing for a Teaching Demonstration - UC Irvine Center for Educational Effectiveness

Based on Peer Feedback - UCLA Center for Education, Innovation, and Learning in the Sciences

Teaching Practices Equity and Inclusion Rubric

Classroom Observation Protocol for Undergraduate STEM (COPUS)

Student Learning Assessments

Student learning can vary widely across courses or even between academic terms. However, having a clear benchmark for the intended learning objectives and determining whether an instructor’s course as implemented helps students to reach that benchmark can be an invaluable piece of information to guide your teaching. The method for measuring student learning will depend on the stated learning objective, but a well-vetted instrument can provide the most reliable data.

Recommended steps and considerations for using student learning assessments to evaluate your teaching efficacy include:

Identify a small subset of course learning objectives to focus on; it is more useful to evaluate one objective accurately than to evaluate many objectives inaccurately.

Find a well-aligned and well-developed measure for each selected course learning objective, such as vetted exam questions, rubrics, or concept inventories.

If relevant, develop a prompt or assignment that will allow students to demonstrate the learning objective to then be evaluated against the measure.

Plan the timing of data collection to enable useful comparison and interpretation.

Do you want to compare how students perform at the start of your course compared to the same students at the end of your course?

Do you want to compare how the same students perform before and after a specific teaching activity?

Do you want to compare how students in one term perform compared to students in the next term, after changing your teaching approach?

Implement the assignment/prompt and evaluate a subset or all of the student work according to the measure.

Reflect on the results and compare student performance measures.

Are students learning as a result of your teaching activity and course design?

Are students learning to the degree that you intended?

Are students learning more when you change how you teach?

This process can be repeated as many times as needed or the process can be restarted to instead focus on a different course learning objective.
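If you collect pre- and post-course scores on a single learning objective, the comparison step described above can be scripted. The sketch below is a minimal, hypothetical Python example; the file name, column names, and rubric scale are illustrative assumptions, not part of this guide.

```python
# Minimal sketch: compare pre- vs. post-course scores on one learning objective.
# Assumes a CSV with one row per student and hypothetical columns "pre_score" and
# "post_score" measured on the same rubric scale.
import pandas as pd
from scipy import stats

scores = pd.read_csv("objective1_scores.csv").dropna(subset=["pre_score", "post_score"])

mean_gain = (scores["post_score"] - scores["pre_score"]).mean()
t_stat, p_value = stats.ttest_rel(scores["post_score"], scores["pre_score"])

print(f"Students with both scores: {len(scores)}")
print(f"Average gain on the rubric scale: {mean_gain:.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

For small classes or ordinal rubric scores, a nonparametric alternative such as the Wilcoxon signed-rank test may be a better fit; the structure of the comparison stays the same.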

Examples and Resources:

List of Concept Inventories (STEM)

Best Practices for Administering Concept Inventories (Physics)

AAC&U VALUE Rubrics

Rubric Bank | Assessment and Curriculum Support Center - University of Hawaiʻi at Mānoa

Rubrics - World Languages Resource Collection - Kennesaw State University

Student Surveys or Focus Groups

Surveys or focus groups are effective tools to better understand the student experience in your courses, as well as to solicit feedback on how courses can be improved. Hearing student voices is critical as students themselves can attest to how course activities made them feel, e.g. whether they perceive the learning environment to be inclusive, or what topics they find interesting.

Some considerations for using student surveys in your teaching include:

Surveys collect individual and anonymous input from as many students as possible.

Surveys can gather both quantitative and qualitative data.

Surveys that are anonymous avoid privileging certain voices over others.

Surveys can enable students to share about sensitive experiences that they may be reluctant to discuss publicly.

Surveys that are anonymous may be prone to negative response bias.

Survey options at UC Berkeley include customized course evaluation questions or anonymous surveys on bCourses, Google Forms, or Qualtrics. 

Some considerations for using student focus groups in your teaching include:

Focus groups leverage the power of group brainstorming to identify problems and imagine possible solutions.

Focus groups can gather both rich and nuanced qualitative data.

Focus groups with a skilled facilitator tend to have more moderated responses given the visibility of the discussion.

Focus groups take planning, preparation, and dedicated class time.

Focus group options at UC Berkeley include scheduling a Mid-semester Inquiry (MSI) to be facilitated by a CTL staff member.

Examples and Resources:

Instructions for completing question customization for your evaluations as an instructor

Course Evaluations Question Bank

Student-Centered Evaluation Questions for Remote Learning

Based on Student Feedback - UCLA Center for Education, Innovation, and Learning in the Sciences

How Can Instructors Encourage Students to Complete Course Evaluations and Provide Informative Responses?

Student Views/Attitudes/Affective Instruments - ASBMB

Student Skills Inventories - ASBMB

How might I get started?

Self-assess your own course materials using one of the available rubrics listed above.

Schedule a teaching observation with CTL to get a colleague’s feedback on your teaching practices and notes on student engagement.

Schedule an MSI with CTL to gather directed student feedback with the support of a colleague.

Have more questions? Schedule a general consultation with CTL or send us your questions by email ( [email protected] )!

References:

Evaluating Teaching - UCSB Instructional Development

Documenting Teaching - UCSC Center for Innovations in Teaching and Learning

Other Forms of Evaluation - UCLA Center for Education, Innovation, and Learning in the Sciences

Evaluation of Teaching - Committee on Teaching, Academic Senate

Report of the Academic Council Teaching Evaluation Task Force

Teaching Quality Framework Initiative Resources - University of Colorado Boulder

Benchmarks for Teaching Effectiveness - University of Kansas Center for Teaching Excellence

Teaching Practices Instruments - ASBMB



Original Research Article

Formative Assessment in Higher Education: An Exploratory Study Within Programs for Professionals in Education


  • 1 University of Genoa, Genoa, Italy
  • 2 St. Thomas University, Fredericton, NB, Canada

This study explores how prospective professionals in higher education can learn about and apply formative assessment methods relevant to their future educational workplaces. In the academic year 2022–23, 156 pre-service teachers, social workers, and heads of social services took part in a three-stage mixed-method study on university learning experiences involving formative assessment practices. They were exposed to self-, peer-, and group-assessment strategies. Data collected after each stage revealed participants’ perspectives on each method. Findings show that students who engaged in formative assessment comprehended assessment complexity and were motivated to use diverse assessment forms. Formative assessment proves effective for both evaluation and development, supporting higher education students in honing assessment competencies for future professional roles in educational and social sectors.

1 Introduction and rationale

The growing emphasis on formative assessment represents a challenge for universities, schools, and educational institutions within traditionally summative assessment cultures. In 2005, the Organization for Economic Cooperation and Development (OECD) declared that schools could support academic development through formative assessment, mainly for underachieving students ( OECD, 2005 ). In addition, the OECD (2005) underscored that formative assessment improves the retention of learning and the quality of students’ work. Additionally, the United Nations Educational, Scientific and Cultural Organization (UNESCO) emphasized how formative assessment can help students’ learning during emergencies and crises ( Bawane and Sharma, 2020 ). A recent study commissioned by the European Commission underscored that “formative assessment needs to place the learners themselves at the center of the learning and assessment processes, taking a more active and central role both as individual, self-regulated learners, and as critical peers” ( Cefai et al., 2021 , p. 7). These studies indicate the potential of formative assessment strategies to create and build meaningful learning environments in a variety of educational contexts.

In Europe, there has been discussion of the correspondence between the 1999 implementation of the Bologna Process and the increase in learner-centered assessment approaches in educational systems that typically employ top-down, test-based methods ( Pereira et al., 2015 ). Founded on a skills-based learning model, the European Higher Education Area (EHEA) currently guides the operations of European universities, prompting the adoption of new methodologies and assessment systems. Systematic reviews of formative assessment reveal that despite evidence that learner-centred formative assessment practices are effective, such practices have been researched predominantly in school-based contexts, unevenly implemented in European higher education, and dominated by studies from the United Kingdom ( Pereira et al., 2015 ; Morris et al., 2021 ). Moreover, a recent study of Italian higher education assessment practices underscores a culture of assessment that places emphasis on the summative or concluding phase of teaching rather than developmental supports for ongoing learning through formative assessment ( Doria et al., 2023 ). In this study, we answer calls for broader empirical work on formative assessment in higher education; specifically, we focus our attention on the implementation of embedded formative assessment within programs that prepare university students to work as pre-service teachers, social workers and heads of social services in Italy.

This study addresses a gap in the research on how formative assessment can be used to develop and evaluate students who are preparing for professional contexts where formative assessment knowledge is both a beneficial and expected skill. This goal is achieved through the following specific research questions: can formative assessment help students to achieve deep learning of several assessment methods and strategies, support students to understand the complexity of assessment procedures, promote students’ growth from both a personal and professional point of view, and motivate them to use multiple forms of assessment when they become professionals in their respective fields?

The next section provides an overview of how conceptual understandings from the literature on formative assessment provide a rationale for the research and inform the study design. Drawing on the existing literature on formative assessment in higher education contexts, we explore how such concepts and practices might foster pre-service teachers', social workers', and heads of social services' understanding of formative assessment. Thereafter, the study aims are explored through qualitative and quantitative data that support and sustain reasonable and realistic ways to introduce formative assessment strategies and activities in higher education contexts ( Crossouard and Pryor, 2012 ). Finally, the limitations and difficulties of combining and effectively balancing summative and formative assessment methods ( Lui and Andrade, 2022 ) are discussed, and recommendations for future practice are made.

2 Theoretical framework and literature review

This research derives from a theoretical understanding of learning as an ongoing and developmental process that is effective when the learner is engaged in and metacognitive about the assessment process ( Bruner, 1970 ). Like Crossouard and Pryor (2012) , we resist the separation of theory from practice and consider them entangled. This framework underpins both the concepts explored in the next sections of this paper, and how these concepts have influenced the design, analysis and reporting of the study.

When exploring the potential of formative assessment in a higher education context that is largely summative, we considered three main areas of literature to support and situate the work: formative assessment as a developmental process in education, the potential of formative assessment within programs for professionals in education, and the principles of formative assessment that could inform the study design. Throughout this section we demonstrate how, in alignment with the learner-centred nature of formative assessment, this study aims to create and investigate the conditions for building higher education learning environments where students can experience several assessment types and can reflect on and improve their own learning processes and competence development ( Dann, 2014 ; Ibarra-Sáiz et al., 2020 ). We also establish how the student profile that inspires us to conduct this study is represented by learners who are ultimately able to monitor, self-reflect, modify their learning strategies according to different situations, and seek multiple creative solutions suitable for their future professional tasks ( Tillema, 2010 ; Evans et al., 2013 ; Ozan and Kıncal, 2018 ).

2.1 Formative assessment as a developmental process for higher education

Educational theorist Jerome Bruner (1970) stated that “learning depends on knowledge of results, at a time when, and at a place where, the knowledge can be used for correction” (p. 120), indicating that learning processes are not only aimed at remembering content and information but also involve modification and understanding the best ways to learn. Higgins et al. (2010) defined formative assessment as a task to be carried out while students are learning so that they can have several forms of feedback aimed at improving their own learning, whether marked or not. From this basis, and like Petty (2004) , we assert that the main goal of formative assessment is developmental. Essentially, formative assessment is intended to assist students in diagnosing and monitoring their own progress by identifying their strengths and weaknesses and spending their efforts on trying to improve their learning processes ( Petty, 2004 , p. 463). To summarize, the key concept of formative assessment is represented by the fact that the reflexivity of students is not static but it can be enhanced or further developed at any time ( Hadrill, 1995 ).

To be effective in higher education contexts, formative assessment requires certain characteristics. First of all, formative assessment should be continuous ( Brown, 1999 ). An episodic and occasional formative assessment activity does not allow students a meaningful opportunity to self-reflect on their own learning. Instead, “regular formative assessment can be motivational, as continuous feedback is integral to the learning experience, stimulating and challenging students” ( Leach et al., 1998 , p. 204). The effectiveness of formative assessment in higher education contexts is well described by Yorke (2003) , who specifies that, through formative assessment practices, students have the opportunity to understand the meaning of formative comments made either by the teachers or their peers. Additionally, students can further modify their learning approaches based on their own understandings.

2.2 Formative assessment within programs for professionals working in educational contexts

A key question regarding the use of formative assessment within programs for professionals in educational contexts relates to how understanding informs future practice. If students have the opportunity to experience formative assessment strategies while studying at university, will they be more likely to practice formative assessment when they are teachers or social workers? This inquiry was made by Hamodi et al. (2017) , who identified a need for greater understanding in this under-researched area. They found that formative assessment within programs for professionals in educational contexts represents a three-fold opportunity. Firstly, formative assessment can be used as a strategy to improve all students’ learning processes during their university programs. Secondly, students in initial teacher education programs should learn and practice several formative assessment strategies because they will have to create many educational opportunities in their future practice. Lastly, students should be encouraged to use formative assessment when they become professionals in their respective working fields, inside and outside school. While the first opportunity is beneficial for all university students, the last two are particularly evident for the educational professionals who are the participants of our study: pre-service teachers, social workers, and heads of social services.

Formative assessment has cognitive and metacognitive benefits for all university students, and for prospective teachers, social workers and heads of social services, there are additional professional benefits, including the ability to implement formative assessment in educational contexts ( Kealey, 2010 ; Montgomery et al., 2023 ). Specifically, formative assessment represents a vital strategy for professionals working in educational settings because it provides several feedback opportunities to develop self-regulated learning ( Clark, 2012 ; Xiao and Yang, 2019 ).

Feedback and self-regulation are two important dimensions of formative assessment. Regarding the first dimension, feedback, Jensen et al. (2023) emphasized that feedback should be directed toward the development of instrumental and substantive learning goals. Instrumental feedback relates to whether the work has accomplished the task criteria, while substantive learning feedback, which is less common, directs students to “reflect critically on their own assumptions and leads to a new level of understanding or quality of performance” (p. 7). Furthermore, feedback should be based on comments that stimulate reflection ( Dekker et al., 2013 ). According to Suhoyo et al. (2017) , such comments should address: strengths (what students did well); weaknesses (aspects which need improvement); comparison to a standard (similarities and differences with the requested task); correct performance (whether students’ performance approximates optimal performance); and an action plan (indicating future improvements). Ultimately, feedback should be timely, specific, actionable, respectful, and non-judgmental ( Watson and Kenny, 2014 ).

Regarding self-regulation, formative assessment is aimed at setting up activities to allow students to react to feedback so as to enrich and increase the final outcomes ( Ng, 2016 ). In other words, Pintrich and Zusho (2002) suggested that self-regulation is based on an active learning process in which learners are able to monitor and regulate their own cognition (p. 64). To do so, Nicol and Macfarlane-Dick (2006) indicated seven main principles for feedback that facilitates positive self-regulation: clarifying what good performance is (goals, criteria, expected standards); facilitating the development of self-assessment (reflection) in learning; delivering high quality information to students about their learning; encouraging teacher and peer dialogue around learning; encouraging positive motivational beliefs and self-esteem; providing opportunities to close the gap between current and desired performance; providing information to teachers that can be used to help shape teaching (p. 205).

While summative assessment is necessary to indicate the level of student performance, it remains an extrinsic assessment that does not foster students’ deep reflection on their own learning progressions ( Ismail et al., 2022 ). Formative assessment represents an opportunity for prospective workers in education to deeply reflect on both their personal and professional development through cognitive and metacognitive benefits and feedback that facilitates positive self-regulation.

2.3 Formative assessment strategies: self-, peer-, and group-assessment

From the literature, we identified three main formative assessment strategies to be incorporated into our study: self-, peer-, and group-assessment. Panadero et al. (2015) defined self-assessment as an activity through which students can explain and underline the “qualities of their own learning processes and products” (p. 804). As specified by Andrade (2019) , it is always important to clarify the purpose of formative assessment: in this study, self-assessment was not aimed at giving a grade. The activity was intended to support students’ reflection on their learning processes, to improve student learning and, in particular, to develop students’ capacities for giving feedback to themselves or others, as described by Wanner and Palmer (2018) .

Regarding peer-assessment, Biesma et al. (2019) specified that peer activities allow students to analyze the quality of a task completed by other students, highlighting the quality of their learning processes and their professional development. Similarly, van Gennip et al. (2010) defined peer assessment as a learning intervention; consequently, peer assessment can be considered a supplemental strategy in the development of assessment as learning. Moreover, Yin et al. (2022) specified that peer-assessment “is not merely about transferring information from a knowledgeable person to a rookie learner, but an active process where learners are engaged with continuous assessment of knowledge needs and learn to re-construct relevant cognitive understanding in context” (p. 2).

The third main formative assessment strategy is a particular form of peer-assessment carried out in groups. In this case, feedback is provided by a group of students for developmental purposes ( Baker, 2007 ). This form of peer-assessment is particularly useful for professionals in education when they have to present, argue and discuss with peers the description and the design of educational activities ( Homayouni, 2022 ). Essentially, group-assessment is a form of “co-operative or peer-assisted learning that encourages individual students in small groups to coach each other in turn so that the outcome of the process is a more rounded understanding and a more skillful execution of the task in hand than if the student was learning in isolation” ( Asghar, 2010 , p. 403). For all formative assessment strategies, a rubric designed by the teacher to lead the students’ reflection and assessment is fundamental ( Andrade and Boulay, 2003 ).

Morris et al. (2021) highlighted the evidence on university students’ academic performance when instructors use formative assessment strategies on a regular basis. These authors identified the four main points of an effective formative assessment strategy: (a) content, detail and delivery, (b) timing and spacing, (c) peers, and (d) technology. They found that formative activities can support positive educational experiences and enhance student learning and development.

2.4 Critical views of formative assessment in higher education contexts

In addition to the above aspects of formative assessment, it is important to consider criticisms related to the use of formative assessment strategies in higher education and how they influence this work. For instance, Brown (2019) posed the question: is formative assessment really assessment? According to this author, formative assessment has some aspects of assessment but cannot be considered assessment, for several reasons: feedback occurs in ephemeral contexts; it is not possible to recognize the interpretations of teachers, nor to determine whether those interpretations are adequately accurate; and, ultimately, stakeholders cannot know whether the conclusions are valid.

Hamodi et al. (2017) emphasized that formative assessment strategies risk conflicts between students when aimed at giving or affecting grades. However, students also “recognize that the formative assessment they experienced as university students has proved valuable in their professional practice in schools” (p. 186). The social difficulties that can arise from formative assessment are a main concern ( Biesma et al., 2019 ). Additionally, both Koka et al. (2017) and Bond et al. (2020) stressed that formative assessment is not effective if not used regularly, and Crossouard and Pryor (2012) emphasized that unexamined theories related to formative assessment can be implicit in practice and potentially narrow the educational possibilities of an intervention. Similarly, Morris et al. (2021) , following Yorke (2003) , question which assessment approaches are most effective for higher education students, because there is a lack of clarity across higher education.

Ultimately, the critical views of formative assessment are focused on the following questions: is it necessary to link formative assessment with a grading scale? In what ways should the informal interactions raised during the formative assessment activities be connected with instructors’ formal grading? Is there evidence regarding the effectiveness of formative assessment strategies? How might university students be motivated to use formative assessment in their future workplace environments? Lastly, how can summative and formative strategies be effectively integrated throughout the learning activities? Situated within these theories, concepts, critiques, and questions, this study aims to provide views and clarity from the Italian context.

3 Context of the study and research questions

3.1 Context

Before presenting the aims and research questions, it is useful to describe Italian university programs for pre-service teachers, social workers, and heads of social services. Pre-service teachers must have a 5-year degree to teach either kindergarten (pupils aged 3–5) or the first five grades of primary school (pupils aged 6–10). Social workers must have a 3-year bachelor's degree in educational sciences with a focus in one of two main programs. The first program trains professionals to work in educational contexts outside schools: educational services for minors with family difficulties; educational services for migrants; counselling centers; educational centers for unaccompanied foreign minors; anti-violence centers; centers for minors who committed crimes; etc. The second program is dedicated to social workers who will be employed as early childhood educators for pupils aged 0 to 3 in kindergarten. It is important to note that these professionals are called educators for early childhood services, but do not hold teacher status. Lastly, heads of social services must have a 2-year master's degree in educational sciences. These students go on to work in two main fields: as designers of local/national/international educational projects carried out by private and public bodies/centers; or as coordinators of private and public social and educational services focused on different sectors such as early childhood, minors, migrants, etc. As indicated earlier, the Italian context is particularly suitable for this study since it has a higher education system based mainly on summative assessment. Further, given the learner-centred nature of formative assessment, it is also conceptually and methodologically fitting to solicit the voices and feedback of the student participants regarding the effectiveness of formative strategies in this predominantly summative assessment culture.

3.2 Aims and research questions

This exploratory study aimed to investigate the role of formative assessment strategies carried out in university programs for pre-service teachers, social workers, and heads of social services. The overall aim of this study was to explore the benefits and the limitations related to the use of formative assessment methods in higher education contexts. Specifically, the study aimed to examine the characteristics and practices of self-, peer- and group-assessment. Consequently, the overall research question can be expressed as follows: to what extent can formative assessment methods help higher education students reflect on their assessment competences as future professionals in educational and social fields? Further, we identified specific research questions: did the use of formative assessment

(RQ #1) help students to achieve deep learning of several assessment methods and strategies?
(RQ #2) support students to understand the complexity of assessment procedures?
(RQ #3) promote the students’ growth from both a personal and professional point of view?
(RQ #4) motivate the students to use multiple forms of assessment when they become professionals in their respective fields?

In addition, we specified two supplementary research questions: (RQ #5) were there differences and/or specificities in the use of formative assessment in the programs for pre-service teachers, social workers, and heads of social services? (RQ #6) were there differences, in terms of effectiveness, between the formative assessment methods: self-, peer- and group-assessment?

4 Research design

To answer the research questions, a mixed method research design was chosen. Specifically, we followed the indications by Tashakkori and Teddlie (2009) , adopting a mixed method multistrand design since the exploratory study comprised several stages. Additionally, as stated by Creswell and Clark (2011) , the timing of data collection was concurrent since we implemented both the quantitative and qualitative strands during each phase of the study. The interpretation of the results was based on a triangulation of quantitative and qualitative data ( Creswell et al., 2003 ) since there were no specific priorities between the kinds of data. Both data typologies were utilized to gain a deep understanding of the phenomenon. We chose a sequential design because it allowed us to implement the different formative assessment strategies with all students at the same time. In this way, the students could express their ideas on the effectiveness of each strategy. Conceptually, this methodological approach serves to amplify the points from the theoretical framework: we sought student-centred data, collected sequentially over time and through multiple access points, to gain a deeper understanding of this student-centred assessment approach for higher education.

4.1 Participants, procedure, and instruments

4.1.1 Participants

As explained previously, we involved four main groups of students in the study: pre-service teachers, social workers, early childhood educators, and heads of social services. Each of these groups had specific courses included in their programs during the academic year 2022/23. Pre-service teachers (including kindergarten teachers 3-6/primary teachers) were divided into two sub-groups. The first sub-group took a course named “Curriculum development,” scheduled in the second year of the five-year teacher education program. The second sub-group took a course called “Play as educational strategy,” scheduled in the third year of the same program. The social workers also took the “Curriculum development” course, but in the first year of their program. The early childhood educators took the course “Play as educational strategy,” scheduled in the second year of their program. Lastly, the heads of social services took a course called “Designing and assessing learning environments,” scheduled in the first year of the master's degree. All the courses contained common content and topics (learning environments, educational and assessment strategies, etc.) in addition to themes and issues relevant to the specific professional development of the students’ programs.

Table 1 indicates the demographic and educational characteristics of participants. Almost all participants were female, between 19 and 22 years of age. In fact, 70% of participants were in the first two years of the bachelor's degree. The groups of students, divided according to prospective job, were mainly pre-service teachers (36.54%), followed by social workers (28.21%) and early childhood educators (21.15%). More than half of the participants had no prior teaching or work experience before the study. Around 30% of participants already had some teaching or work experience, and 17.31% of participants were experienced teachers or social workers. All students were invited to take part in the study, and out of 106 pre-service teachers in the courses, 57 (53.76%) agreed to be involved. Similarly, 44 out of 105 social workers (41.91%), 33 out of 81 early childhood educators (40.74%), and 22 out of 35 heads of social services (62.86%) participated in the study.


Table 1 . Demographic and educational characteristics of participants.

The principles of research ethics were strictly followed. All students enrolled in the different programs were informed about the aims, activities and procedures for the study. Participation was optional, and those who agreed to participate gave written informed consent.

4.1.2 Procedure

The procedure was split into three main phases carried out throughout the academic year 2022/23. The first phase consisted of a self-assessment activity. After a mid-term written test, composed of open-ended questions and assigned four weeks after the beginning of the course, the students had to self-assess their own exam performance following a rubric designed by the instructor. The second phase consisted of a peer-assessment activity. After another mid-term written test, similar to the first and assigned eight weeks after the beginning of the course, the students had to assess the answers written by another student, following a rubric designed by the instructor.

Table 2 summarizes the procedure. The first and the second mid-term written tests were taken by each student individually and were composed of three open-ended questions focused on the topics covered by the instructor in the respective period. For instance, the first test included the concepts of learning environment and educational space; the second test covered the idea of competence and strategies to design and assess an educational path. Each question required the students to develop two main aspects: content and argumentation. Students had to write responses that both presented topic-related content and argued ideas and connections with a high level of coherence. The first two phases involved the following levels of Bloom’s taxonomy: remembering, understanding, analyzing, and evaluating.


Table 2 . The detailed procedure.

After the first mid-term test, the students had to reflect on their own answers, following the rubric in Table 3 . They had to self-assess their three answers, so they completed the rubric three times. The levels followed those used in the Italian educational system, from A (advanced level) to D (beginning level). The sub-division into two sub-levels was intended to give students more opportunity to capture the nuances of their own learning. In addition, the students could add qualitative comments. Similarly, after the second mid-term test, the students had to peer-assess the answers of a classmate, using the same rubric. The peer-assessment was random and blinded, so students could not identify their peer-reviewer.


Table 3 . The rubric for self- and peer-assessment.

The third phase consisted of a group-assessment activity. In the last mid-term test, the students had to design an educational action plan based on a strategy such as cooperative learning, problem-based learning, or case study. The strategy was randomly assigned to the students by the instructor the day before the test. During the test, the students had to present their action plan to a group of peers (each group was composed of 4–5 students). After the presentation, the group members had to assess the presentation following the rubric shown in Table 4 .


Table 4 . The rubric for group-assessment.

4.1.3 Instrument

The research procedure involved three observational moments. After each mid-term test, the students were asked to fill in an online questionnaire focused on the research questions. The questionnaire was composed of three sections. The first section collected the demographic and educational characteristics of participants (see Table 1 ). The second included three scales dedicated to each typology of formative assessment (self-, peer-, and group-). Each scale was composed of five items aimed at capturing the participants’ views of the different formative assessment techniques. The items were linked to the following factors: this formative assessment activity (self, peer, or group) supported students’ development in (a) learning assessment strategies (Learning); (b) understanding the complexity of assessment procedures (Complexity); (c) growing from a personal point of view, since I am more aware of my capacities and limitations (Personal); (d) growing from a professional point of view, since I am more aware of my competences as a teacher/social worker (Professional); and (e) increasing my motivation to use formative assessment strategies in the future (Motivation).

These items were rated by the students on a four-point Likert scale, from 4 (Yes, this formative assessment activity has been very useful/effective) to 1 (No, this formative assessment activity has not been useful/effective at all). In addition, in the third section, they were able to add free qualitative comments and, ultimately, they were asked to indicate whether the instructor should use this strategy again the following year with new students.
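As a concrete illustration of the data this instrument yields, the sketch below lays out a wide-format table with one row per respondent and one 1–4 rating per factor within each assessment method; all names and values are hypothetical and simply mirror the scales described above.

```python
# Hypothetical wide-format layout for the questionnaire responses described above:
# one row per respondent, one 1-4 Likert rating per factor (Learning, Complexity,
# Personal, Professional, Motivation) within each scale (self-, peer-, group-assessment).
import pandas as pd

methods = ("self", "peer", "group")
factors = ("learning", "complexity", "personal", "professional", "motivation")
columns = [f"{m}_{f}" for m in methods for f in factors]

# Two invented respondents, for illustration only.
responses = pd.DataFrame(
    [
        [3, 4, 3, 2, 4, 4, 4, 3, 3, 4, 4, 3, 3, 4, 4],
        [2, 3, 3, 3, 3, 3, 4, 2, 4, 4, 3, 3, 2, 3, 3],
    ],
    columns=columns,
)

# One scale score per assessment method: the mean of its five items.
for m in methods:
    responses[f"{m}_scale"] = responses[[f"{m}_{f}" for f in factors]].mean(axis=1)

print(responses.round(2))
```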

4.2 The qualitative–quantitative data analysis procedure

The data analyses were performed both from a qualitative and quantitative point of view. The qualitative data were coded with NVivo 14 following the three steps suggested by grounded theory: open coding, axial coding and selective coding ( Charmaz, 2014 ; Corbin and Strauss, 2015 ; De Smet et al., 2019 ). The quantitative data were analyzed with SPSS 29 and focused on reliability analyses, ANOVA for repeated measures, the Friedman test, Exploratory Factor Analysis (EFA), and non-parametric tests to highlight potential statistically significant differences between the demographic and educational characteristics of the participants, considering gender, age, year of attendance, education area, working/teaching experience as variables.

5 Data analysis and findings

The data analysis is structured as follows. On the basis of the study aims indicated in the introduction, both the quantitative and qualitative findings will indicate, firstly, which formative assessment activities are more suitable in higher education contexts; that is, the findings will highlight the techniques favored by the students: self-, peer-, and/or group-assessment. Secondly, the findings will emphasize which factors linked with formative assessment (Learning, Complexity, Personal, Professional, Motivation) were most developed, telling us in which ways formative assessment can become a basic and crucial concept for the students. Finally, the findings will indicate whether the use of formative assessment supports the students’ motivation to use formative assessment in their own future professional fields.

5.1 Quantitative findings

The first quantitative data analysis was focused on the instrument’s reliability, so we used the following coefficients: Cronbach’s Alpha (α); McDonald’s Omega (ω); and average inter-item correlation. Table 5 summarizes the results.


Table 5 . Reliability coefficients.
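As a rough open-source counterpart to these reliability checks (which were run in SPSS), the sketch below computes Cronbach's alpha and the average inter-item correlation for one five-item scale from an item-response matrix; McDonald's omega requires a fitted factor model and is omitted. The data are randomly generated, so the coefficients are not meaningful in themselves.

```python
# Minimal sketch: reliability of one five-item scale from an item matrix
# (rows = respondents, columns = items rated 1-4). Random illustrative data,
# so the resulting coefficients will hover near zero; real ratings go here.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def average_inter_item_correlation(items: np.ndarray) -> float:
    """Mean of the off-diagonal entries of the item correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)
    return corr[~np.eye(corr.shape[0], dtype=bool)].mean()

rng = np.random.default_rng(0)
fake_scale = rng.integers(1, 5, size=(156, 5)).astype(float)  # 156 respondents x 5 items

print(f"Cronbach's alpha: {cronbach_alpha(fake_scale):.2f}")
print(f"Average inter-item correlation: {average_inter_item_correlation(fake_scale):.2f}")
```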

Then, the quantitative analysis concentrated on the potential differences, in terms of effectiveness, between the formative assessment methods: self-, peer- and group-assessment. The Friedman test revealed a statistically significant difference among these three methods (χ² = 9.726, df = 2, p < 0.008). Specifically, Conover’s post hoc comparisons showed no differences between peer- and group-assessment ( t  = 0.174, p  < 0.862), but significant differences between self- and peer-assessment ( t  = 2.786, p  < 0.005) and between self- and group-assessment ( t  = 2.612, p  < 0.009). Scores were higher for peer- and group-assessment.
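For readers working outside SPSS, this kind of omnibus-plus-post-hoc comparison can be sketched with SciPy; here Wilcoxon signed-rank tests with a Bonferroni correction stand in for the Conover post hoc used in the study, and the scale scores are invented.

```python
# Minimal sketch: Friedman omnibus test across the three repeated scale scores,
# then pairwise post-hoc comparisons. Wilcoxon signed-rank tests with a Bonferroni
# correction stand in for the Conover post hoc reported in the paper. Invented data.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scales = {
    "self": rng.uniform(1, 4, size=156),
    "peer": rng.uniform(1, 4, size=156),
    "group": rng.uniform(1, 4, size=156),
}

chi2, p = stats.friedmanchisquare(*scales.values())
print(f"Friedman test: chi2 = {chi2:.3f}, df = 2, p = {p:.3f}")

n_comparisons = 3
for (name_a, a), (name_b, b) in combinations(scales.items(), 2):
    _, p_pair = stats.wilcoxon(a, b)
    print(f"{name_a} vs. {name_b}: Bonferroni-adjusted p = {min(p_pair * n_comparisons, 1.0):.3f}")
```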

Analyzing each scale, we found significant differences between the items related to the research questions. Particularly, within the scales regarding the self- and the peer-assessment methods, we found that the scores for the factors named “Complexity” and “Motivation” were higher than the others.

Similarly, analyzing the potential differences related to each factor between the scales, we found that the scores for the factor “Complexity” were higher (χ² = 11.647, df = 2, p < 0.003) in the self- ( t  = 2.054, p  < 0.041) and peer-assessment ( t  = 3.407, p  < 0.001) than in the group-assessment. In addition, the scores for the factor “Professional” were higher (χ² = 14.630, df = 2, p < 0.001) in the peer- ( t  = 2.692, p  < 0.007) and group-assessment ( t  = 3.718, p  < 0.001) than in the self-assessment.

Ultimately, analyzing which factors received the highest scores across the scales, we found that “Complexity” and “Motivation” were more appreciated by the participants than the other factors (χ² = 73.945, df = 4, p < 0.001).

5.1.1 Differences among participants

Differences related to the demographic and educational characteristics of the participants were analyzed with ANOVA for repeated measures. Tables 6A–C summarize the results. The assessment methods are indicated with SA (self-assessment), PA (peer-assessment) and GA (group-assessment).


Table 6 . Differences between the participants’ demographic and educational characteristics.

Table 6A presents the statistically significant differences between the variable ‘Age’ with its dimensions and each assessment method. Table 6B displays the differences between all variables with related dimensions and each factor comparing the assessment methods. Lastly, Table 6C reveals the differences between all variables with related dimensions and each factor within each assessment method.

Additionally, we found interesting differences regarding the importance of factors for some variables in general. Regarding the variable ‘Age’, the students 19–20 years old appreciated the factor ‘Complexity’ more than the factors ‘Personal’ and ‘Professional’ (respectively, MD = 0.501 p < 0.006 and MD = 0.621 p < 0.000). Again, the students 19–20 years old highly valued the factor ‘Motivation’ compared to the factor ‘Professional’ (MD = 0.436 p < 0.001). Lastly, the students older than 26 appreciated the factor ‘Complexity’ more than ‘Professional’ (MD = 0.685 p < 0.030).

Considering the variable ‘Year of attendance’, the students at the II year of bachelor degree gave higher scores to the factors ‘Complexity’ and ‘Motivation’ compared to ‘Professional’ (respectively, MD = 0.360 p < 0.036 and MD = 0.319 p < 0.014).

Within the variable ‘Education area’, pre-service teachers appreciated the factor ‘Complexity’ more than ‘Personal’ and ‘Professional’ (respectively, MD = 0.454 p < 0.037 and MD = 0.444 p < 0.034). Similarly, social workers appreciated ‘Complexity’ more than ‘Professional’ (MD = 0.501 p < 0.019).

Regarding the variable ‘Working/teaching experience’, students with ‘never’ work experiences, appreciated ‘Complexity’ more than ‘Professional’ (MD = 0.449 p < 0.005). Ultimately, students with ‘full experience’ appreciated ‘Complexity’ and ‘Motivation’ more than ‘Professional’ (respectively, MD = 0.542 p < 0.012 and MD = 0.444 p < 0.010).
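The repeated-measures comparisons in this subsection were computed in SPSS; a rough open-source analogue for the within-subject part is sketched below on invented long-format data. Between-subject variables such as age group or education area would call for a mixed-design ANOVA, which this minimal example leaves out.

```python
# Minimal sketch: repeated-measures ANOVA across the three assessment methods,
# using invented long-format data (one row per respondent per method).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
n_students = 156
long_df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), 3),
    "method": np.tile(["self", "peer", "group"], n_students),
    "score": rng.uniform(1, 4, size=n_students * 3),
})

result = AnovaRM(data=long_df, depvar="score", subject="student", within=["method"]).fit()
print(result)  # F test for the within-subject factor "method"
```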

5.1.2 Exploratory factor analysis (EFA)

We decided to perform an Exploratory Factor Analysis (EFA) because it was interesting to identify potential common factors that might explain the structure of the instrument and the validity of the measured variables ( Watkins, 2018 ). The EFA was completed with varimax rotation and Kaiser normalization, using principal components extraction and retaining factors with eigenvalues > 1, to emphasize the presence of latent factors.

The results indicate that the sample was adequate since the Kaiser-Meyer-Olkin test was 0.844; additionally, Bartlett’s Test of Sphericity revealed a p-value of < 0.000 (χ² = 1041.176; df = 105). Lastly, the goodness of fit test was 81.104 (df = 51; p < 0.005).

The EFA underscored four factors (see Table 7 ). Factors 1, 2 and 3 follow the structure of the instrument indicating, respectively, the group-assessment (38.86% of explained variance), the self-assessment (13.92%) and the peer-assessment (8.89%). The fourth factor indicated the importance of understanding the complexity of assessment procedures (7.22% of explained variance).


Table 7 . EFA factor loadings.
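The same EFA workflow can be sketched in Python with the third-party factor_analyzer package (an assumption on our part; the study itself used SPSS): sampling-adequacy checks followed by a four-factor principal-components extraction with varimax rotation, here on invented item responses.

```python
# Minimal sketch of the EFA workflow described above, on invented item responses:
# KMO and Bartlett sampling-adequacy checks, then a 4-factor extraction with
# principal components and varimax rotation. Assumes the third-party
# "factor_analyzer" package (pip install factor_analyzer).
import numpy as np
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(3)
items = rng.integers(1, 5, size=(156, 15)).astype(float)  # invented 15-item responses

chi2, p = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"Bartlett: chi2 = {chi2:.1f}, p = {p:.3f}; overall KMO = {kmo_total:.3f}")

efa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
efa.fit(items)

loadings = efa.loadings_                       # item-by-factor loading matrix
_, proportional_var, _ = efa.get_factor_variance()
print("Proportion of variance explained per factor:", np.round(proportional_var, 3))
print("Loadings shape:", loadings.shape)
```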

5.1.3 Next year

A specific questionnaire item asked students to indicate whether, in their opinion, the formative activities should be repeated with students in the following year. This item was rated by the students on a four-point Likert scale, from 4 (Yes, absolutely) to 1 (Not at all). On this item, 97.51% of students chose options 4 or 3 for self-assessment, 94.12% for peer-assessment and 88.19% for group-assessment. The Friedman test did not show any statistically significant differences between the assessment methods (χ² = 0.036, df = 2, p < 0.982).

5.2 Qualitative findings

The qualitative data analysis highlighted four common categories for each assessment method: ‘Positive aspects’; ‘Limits’; ‘Assessment issues’ and ‘Organizational issues’. The first category includes codes related to positive characteristics of the assessment method, and the second category (Limits) contains codes that indicate the main weaknesses of the assessment method. The ‘Assessment issues’ category comprises codes focused on the forms/modalities of assessment. Lastly, the ‘Organizational issues’ category focuses on the technical questions related to the assessment method. As shown in Figure 1 , categories with the same name can include codes shared between two or three assessment methods or specific codes that emphasize exclusive features of one assessment method.


Figure 1 . Map of categories and codes.

Tables 8 and 9 show in detail the categories and the codes split across the assessment methods. Additionally, these tables display example sentences which illustrate the codes, the number of participants who wrote sentences related to each code, and the frequency of the codes.


Table 8 . Codes included into ‘Positive aspects’ category.


Table 9 . Codes included into ‘Limits’, ‘Assessment issues’ and ‘Organizational issue’ categories.

In addition to the information contained in Tables 8 and 9 , it is interesting to underline that 59 students considered the self-assessment activity ‘useful,’ whereas 51 students described the peer-assessment activity, and 44 students the group-assessment activity, as ‘very useful.’

6 Discussion

Both the quantitative and qualitative findings of this study provide important insights about the use of formative assessment in higher education contexts and indicate significant implications from a practical point of view.

First of all, comparing the three formative assessment strategies, peer- and group-assessment proved to be more appreciated among the students involved in the study. Self-assessment was also evaluated as a positive activity, but peer- and group-assessment allowed students to reflect more deeply on their own learning processes, generate feedback, and take the opportunity to improve and modify their learning strategies.

Among the five factors that characterized this study (Learning; Complexity; Personal; Professional; Motivation), two stood out as particularly important. The scores for “Complexity” (understanding complexity of assessment procedures) and “Motivation” (increasing my motivation to use formative assessment strategies in the future) were significantly higher than those for the other factors, mainly for peer- and group-assessment activities. Opportunities to understand the complexity of assessment procedures and to increase motivation for future use of formative assessment strategies represented crucial elements of the formative assessment activities. The reasons are explained in the qualitative comments. Firstly, formative assessment methods allowed students to recognize that an assessment procedure is not simple and linear but is composed of many connected components, both summative and formative. Consequently, students must be able to learn and design several assessment methods linked in a coherent way, as indicated by Ismail et al. (2022) . Secondly, the use of formative assessment strategies in higher education increased the probability of using formative assessment techniques in the future, confirming the study carried out by Hamodi et al. (2017) .

Regarding differences among participants, peer- and group-assessment were more appreciated by older students, indicating that these students rely on forms of professional relationships to develop their own professional perspectives, as denoted by Biesma et al. (2019) and Montgomery et al. (2023) . Peer-assessment was, in general, more appreciated than group-assessment for developing “Complexity,” “Motivation” and, also, “Learning,” but group-assessment was more valued than peer- and self-assessment for increasing the “Professional” factor. This observation means that group-assessment represents an effective simulation of a work experience ( Homayouni, 2022 ). The younger students appreciated group-assessment more than peer-assessment for the “Personal,” “Complexity” and “Motivation” factors, showing that younger students needed formative assessment activities for developing awareness of their own skills and capacities. Finally, students with no work experience appreciated the self-assessment activities for the factor “Learning,” revealing that they valued moments for reflecting on themselves.

The last quantitative finding arises from the exploratory factor analysis which confirmed that the factor related to understanding the complexity of assessment procedures is particularly significant for the participants.

The findings highlighted by the qualitative analysis reveal essential results of the study. Regarding the positive aspects, all three strategies helped students in growing their awareness in recognizing their learning processes, effectively connecting theory to practice, and consciously fostering their professional perspectives, as indicated by Tillema (2010) . In particular, peer- and group-assessment boosted students’ capacity to exchange ideas and views with others; consequently, these strategies amplified peer relationships through responsibility to each other and deepened student understanding of the complexity of assessment procedures.

Additionally, the qualitative analysis identified three main limits across two of the formative assessment strategies: lack of feedback in peer-assessment, and low engagement and high confusion in group-assessment. Consequently, these two strategies must be carefully implemented. Even though they were highly appreciated by the students in general, the qualitative sections of the questionnaire indicate that peer- and group-assessment must be carried out in an adequate setting, allowing students to express their ideas and motivations without pressure and confusion.

Lack of feedback, low engagement, and confusion are three risks of formative assessment previously identified in the literature: the lack of feedback was already noted by Brown (2019), and the other two points were indicated by Crossouard and Pryor (2012).

The qualitative analysis also raised issues about assessment quality and pinpointed crucial features of the relationship between summative and formative assessment. Some students felt that group-assessment should be followed by a formal grade, while others indicated that the same activity should be carried out more informally. Regardless, all students felt the lack of an instructor-designed assessment to ensure that activities were well designed (Homayouni, 2022). These are fundamental points because university teachers must decide whether formative assessment should affect or modify the final grade. This remains an open question that may also be influenced by the larger assessment culture (Doria et al., 2023). In this study, we decided that the formative assessment should not change the final grade because the formative strategies focused on metacognitive aspects.

From an organizational point of view, students needed training to carry out both self- and peer-assessment effectively. The organization of group-assessment was a particularly debated aspect: some students desired an open assessment discussion, whilst others wanted a completely blind assessment to avoid any conflict with their peers, as indicated by Hamodi et al. (2017).

7 Limitations of the study

This study presents some limitations. First, the participants are quite heterogeneous: they are all students enrolled in courses focused on educational contexts, but the structure of the courses differs, which can affect how the assessment strategies are applied. The second limitation concerns the different topics covered in the different subjects. Even if the formative strategies can be considered valid for all subjects, it is necessary to reflect on the specificities of each subject to verify potential differences in managing and arranging the formative techniques. Finally, the study procedure lasted the whole academic year, and the elapsed time may have introduced bias for students who faced three formative strategies throughout the course.

8 Conclusions and implications for policy and practice

Instead of assuming that ‘theory’ is solely the domain of expert educational researchers, we reassert our alignment with other researchers who recognize the critical significance of educational practitioners’ comprehension of theory in actualizing specific practices (Crossouard and Pryor, 2012). Our findings indicate that the participants in our study significantly enhanced their capacities to reflect on their assessment competences.

Notably, the higher education students in this study understood the complexity of the assessment procedures (RQ #2) and were motivated to use multiple forms of assessment in their future work (RQ #4). In terms of learner-centred feedback on effectiveness (RQ #6), peer- and group-assessment strategies emerged as the most effective and productive methods from a formative point of view. As for RQ #5, we did not find specific differences in the use of formative assessment in the programs for pre-service teachers, social workers, and heads of social services. Furthermore, despite the different opinions expressed by the participants, the majority declared that it would be important to repeat the formative assessment activities with students in the educational profession programs the following academic year.

These conclusions and findings have implications for future work in this area. The value of integrating formative assessment and learner-identified feedback into instructional practices is evident and will inform future studies and formative assessment designs; we offer several recommendations below.

For future studies, our first recommendation is an emphasis on peer-assessment strategies. From a practical and organizational perspective, strengthening the peer-assessment components, for example by making peer comments mandatory, would help avoid the reported lack of feedback. In the present study, qualitative comments were optional and some students did not write any comments to their peers.

Our second recommendation derives from the students who did not receive peer comments and who emphasized that rubric scores and indicators alone were not enough to convey meaningful feedback. Supporting students in developing effective peer feedback practices would be an important next step in strengthening this component of instruction and in developing student competencies in formative assessment for the education professions.

A third recommendation is related to group-assessment. To avoid low levels of engagement and high levels of confusion during group-assessment, additional measures are suggested. One option is to arrange random groups of students with one instructor present in each group to ensure sufficient guidance during the activity. Following the activity, the instructor should leave the group to promote open discussion and support students’ assessment of the peer presentation, so that students can assess the learning more freely and informally. The instructor then returns to the group to give a formal grade. We anticipate that this approach could resolve the reported lack of teacher assessment and the question of assigning a formal evaluative grade to this activity, thereby effectively balancing summative and formative assessment methods (Lui and Andrade, 2022).

Typically, formative assessment is aimed at fostering developmental functions mainly related to metacognitive aspects of learning, whereas summative assessment is commonly targeted toward evaluative and administrative decisions (grades, reports, etc.) about performance (Baker, 2007). As the findings reveal, the main original contribution of this study is that formative assessment can be an effective combination of developmental and evaluative purposes.

We further recommend that future studies investigate both (a) metacognitive steps to underline the formative values of assessment, and (b) evaluative steps to underscore the summative standards to be reached by a professional in education. This design will align with the recent European Commission report (European Commission, Directorate-General for Education, Youth, Sport and Culture, et al., 2023) and support the creation of a strong combination of formative and summative feedback to deeply assess learners’ competences.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical approval was not required for the studies involving humans because this study strictly followed the principles of ethics. The participants were informed about the aims, the activities and the instruments designed for this study. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

DP: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft. EN: Formal analysis, Methodology, Writing – original draft. EM: Formal analysis, Methodology, Writing – original draft. MI: Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Andrade, H. L. (2019). A critical review of research on student self-assessment. Front. Educ. 4:87. doi: 10.3389/feduc.2019.00087


Andrade, H. G., and Boulay, B. A. (2003). Role of rubric-referenced self-assessment in learning to write. J. Educ. Res. 97, 21–30. doi: 10.1080/00220670309596625

Asghar, A. (2010). Reciprocal peer coaching and its use as a formative assessment strategy for first-year students. Assess. Eval. High. Educ. 35, 403–417. doi: 10.1080/02602930902862834

Baker, D. F. (2007). Peer assessment in small groups: a comparison of methods. J. Manag. Educ. 32, 183–209. doi: 10.1177/1052562907310489

Bawane, J., and Sharma, R. (2020). Formative assessments and the continuity of learning during emergencies and crises. NEQMAP 2020 thematic review. UNESCO.


Biesma, R., Kennedy, M.-C., Pawlikowska, T., Brugha, R., Conroy, R., and Doyle, F. (2019). Peer assessment to improve medical student’s contributions to team-based projects: randomized controlled trial and qualitative follow-up. BMC Med. Educ. 19:371. doi: 10.1186/s12909-019-1783-8


Bond, E., Woolcott, G., and Markopoulos, C. (2020). Why aren’t teachers using formative assessment? What can be done about it? Assess. Matters 14, 112–136. doi: 10.18296/am.0046

Brown, S. (1999). “Institutional strategies for assessment” in Assessment matters in higher education: Choosing and using diverse approaches. eds. S. Brown and A. Glasner (Buckingham: SRHE and Open University Press), 3–13.

Brown, G. T. (2019). Is assessment for learning really assessment? Front. Educ. 4:64. doi: 10.3389/feduc.2019.00064

Bruner, J. S. (1970). “Some theories on instruction” in Readings in educational psychology. ed. E. Stones (Methuen), 112–124.

Cefai, C., Downes, P., and Cavioni, V. (2021). A formative, inclusive, whole school approach to the assessment of social and emotional education in the EU. NESET report. Publications Office of the European Union. doi: 10.2766/905981

Charmaz, K. (2014). Constructing grounded theory. London: Sage.

Clark, I. (2012). Formative assessment: assessment is for self-regulated learning. Educ. Psychol. Rev. 24, 205–249. doi: 10.1007/s10648-011-9191-6

Corbin, J., and Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory. Los Angeles, CA: Sage.

Creswell, J. W., and Clark, V. P. (2011). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

Creswell, J. W., Clark, V. P., Gutmann, M., and Hanson, W. (2003). “Advanced mixed methods research designs” in Handbook of mixed methods in social & behavioral research. eds. A. Tashakkori and C. Teddlie (Thousand Oaks, CA: Sage), 209–240.

Crossouard, B., and Pryor, J. (2012). How theory matters: formative assessment theory and practices and their different relations to education. Stud. Philos. Educ. 31, 251–263. doi: 10.1007/s11217-012-9296-5

Dann, R. (2014). Assessment as learning: blurring the boundaries of assessment and learning for theory, policy and practice. Assess. Educ. Princ. Policy Pract. 21, 149–166. doi: 10.1080/0969594x.2014.898128

De Smet, M. M., Meganck, R., Van Nieuwenhove, K., Truijens, F. L., and Desmet, M. (2019). No change? A grounded theory analysis of depressed patients' perspectives on non-improvement in psychotherapy. Front. Psychol. 10:588. doi: 10.3389/fpsyg.2019.00588

Dekker, H., Schönrock-Adema, J., Snoek, J. W., van der Molen, T., and Cohen-Schotanus, J. (2013). Which characteristics of written feedback are perceived as stimulating students’ reflective competence: an exploratory study. BMC Med. Educ. 13, 1–7. doi: 10.1186/1472-6920-13-94

DeVon, H. A., Block, M. E., Moyle-Wright, P., Ernst, D. M., Hayden, S. J., Lazzara, D. J., et al. (2007). A psychometric toolbox for testing validity and reliability. J. Nurs. Scholarsh. 39, 155–164. doi: 10.1111/j.1547-5069.2007.00161.x

Doria, B., Grion, V., and Paccagnella, O. (2023). Assessment approaches and practices of university lecturers: a nationwide empirical research. Ital. J. Educ. Res. 30, 129–143. doi: 10.7346/sird-012023-p129

European Commission, Directorate-General for Education, Youth, Sport and Culture, Looney, J., and Kelly, G. (2023). Assessing learners’ competences – Policies and practices to support successful and inclusive education – Thematic report. Publications Office of the European Union. Available at: https://data.europa.eu/doi/10.2766/221856

Evans, D. J., Zeun, P., and Stanier, R. A. (2013). Motivating student learning using a formative assessment journey. J. Anat. 224, 296–303. doi: 10.1111/joa.12117

Hadrill, R. (1995). “The NCVQ model of assessment at higher levels” in Assessment for learning in higher education. ed. P. Knight (London: Kogan Page), 167–179.

Hamodi, C., López-Pastor, V. M., and López-Pastor, A. T. (2017). If I experience formative assessment whilst studying at university, will I put it into practice later as a teacher? Formative and shared assessment in initial teacher education (ITE). Eur. J. Teach. Educ. 40, 171–190. doi: 10.1080/02619768.2017.1281909

Higgins, M., Grant, F., and Thompson, P. (2010). Formative assessment: balancing educational effectiveness and resource efficiency. J. Educ. Built Environ. 5, 4–24. doi: 10.11120/jebe.2010.05020004

Homayouni, M. (2022). Peer assessment in group-oriented classroom contexts: on the effectiveness of peer assessment coupled with scaffolding and group work on speaking skills and vocabulary learning. Lang. Test. Asia 12, 1–23. doi: 10.1186/s40468-022-00211-3

Ibarra-Sáiz, M. S., Rodríguez-Gómez, G., and Boud, D. (2020). The quality of assessment tasks as a determinant of learning. Assess. Eval. High. Educ. 46, 943–955. doi: 10.1080/02602938.2020.1828268

Ismail, S. M., Rahul, D. R., Patra, I., and Rezvani, E. (2022). Formative vs. summative assessment: impacts on academic motivation, attitude toward learning, test anxiety, and self-regulation skill. Lang. Test. Asia 12, 1–23. doi: 10.1186/s40468-022-00191-4

Jensen, L. X., Bearman, M., and Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teach. High. Educ., 1–15. doi: 10.1080/13562517.2023.2213168

Kealey, E. (2010). Assessment and evaluation in social work education: formative and summative approaches. J. Teach. Soc. Work. 30, 64–74. doi: 10.1080/08841230903479557

Koka, R., Jurāne-Brēmane, A., and Koķe, T. (2017). Formative assessment in higher education: from theory to practice. Eur. J. Soc. Sci. Educ. Res. 9:28. doi: 10.26417/ejser.v9i1.p28-34

Leach, L., Neutze, G., and Zepke, N. (1998). “Motivation in assessment” in Motivating students. eds. S. Brown, S. Armstrong, and G. Thompson (London: Kogan Page), 201–209.

Lui, A. M., and Andrade, H. L. (2022). The next black box of formative assessment: a model of the internal mechanisms of feedback processing. Front. Educ. 7:751801. doi: 10.3389/feduc.2022.751548

Montgomery, L., MacDonald, M., and Houston, E. S. (2023). Developing and evaluating a social work assessment model based on co-production methods. Br. J. Soc. Work 53, 3665–3684. doi: 10.1093/bjsw/bcad154

Morris, R., Perry, T., and Wardle, L. (2021). Formative assessment and feedback for learning in higher education: a systematic review. Rev. Educ. 9, 1–26. doi: 10.1002/rev3.3292

Ng, E. M. W. (2016). Fostering pre-service teachers’ self-regulated learning through self- and peer assessment of wiki projects. Comput. Educ. 98, 180–191. doi: 10.1016/j.compedu.2016.03.015

Nicol, D. J., and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. doi: 10.1080/03075070600572090

OECD (2005). Formative assessment: improving learning in secondary classrooms. Paris: OECD.

Ozan, C., and Kıncal, R. Y. (2018). The effects of formative assessment on academic achievement, attitudes toward the lesson, and self-regulation skills. Educ. Sci. Theory Pract. 18, 85–118. doi: 10.12738/estp.2018.1.0216

Panadero, E., Brown, G. T., and Strijbos, J.-W. (2015). The future of student self-assessment: a review of known unknowns and potential directions. Educ. Psychol. Rev. 28, 803–830. doi: 10.1007/s10648-015-9350-2

Pereira, D., Flores, M. A., and Niklasson, L. (2015). Assessment revisited: a review of research in assessment and evaluation in higher education. Assess. Eval. High. Educ. 41, 1008–1032. doi: 10.1080/02602938.2015.1055233

Petty, G. (2004). Teaching today. London: Nelson Thornes.

Pintrich, P. R., and Zusho, A. (2002). “The development of academic self-regulation: the role of cognitive and motivational factors” in Development of achievement motivation. eds. A. Wigfield and J. S. Eccles (San Diego: Academic Press), 249–284.

Spiliotopoulou, G. (2009). Reliability reconsidered: Cronbach’s alpha and paediatric assessment in occupational therapy. Aust. Occup. Ther. J. 56, 150–155. doi: 10.1111/j.1440-1630.2009.00785.x

Suhoyo, Y., Van Hell, E. A., Kerdijk, W., Emilia, O., Schönrock-Adema, J., Kuks, J. B., et al. (2017). Influence of feedback characteristics on perceived learning value of feedback in clerkships: does culture matter? BMC Med. Educ. 17, 1–7. doi: 10.1186/s12909-017-0904-5

Tashakkori, A., and Teddlie, C. (2009). Foundations of mixed methods research. London: SAGE Publications.

Tillema, H. (2010). “Formative assessment in teacher education and teacher professional development” in International encyclopedia of education. eds. P. Peterson, E. Baker, and B. McGaw, 3rd edn (Oxford: Elsevier), 563–571.

van Gennip, N. A. E., Segers, M. S. R., and Tillema, H. H. (2010). Peer assessment as a collaborative learning activity: the role of interpersonal variables and conceptions. Learn. Instr. 20, 280–290. doi: 10.1016/j.learninstruc.2009.08.010

Wanner, T., and Palmer, E. (2018). Formative self-and peer assessment for improved student learning: the crucial factors of design, teacher participation and feedback. Assess. Eval. High. Educ. 43, 1032–1047. doi: 10.1080/02602938.2018.1427698

Watkins, M. W. (2018). Exploratory factor analysis: a guide to best practice. J. Black Psychol. 44, 219–246. doi: 10.1177/0095798418771807

Watson, G. P. L., and Kenny, N. (2014). Teaching critical reflection to graduate students. Collected Essays Learn. Teach. 7:56. doi: 10.22329/celt.v7i1.3966

Xiao, Y., and Yang, M. (2019). Formative assessment and self-regulated learning: how formative assessment supports students’ self-regulation in English language learning. System 81, 39–49. doi: 10.1016/j.system.2019.01.004

Yin, S., Chen, F., and Chang, H. (2022). Assessment as learning: how does peer assessment function in students’ learning? Front. Psychol. 13:912568. doi: 10.3389/fpsyg.2022.912568

Yorke, M. (2003). Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice. High. Educ. 45, 477–501. doi: 10.1023/a:1023967026413

Keywords: formative assessment, professionals in education, higher education, teacher education, social workers

Citation: Parmigiani D, Nicchia E, Murgia E and Ingersoll M (2024) Formative assessment in higher education: an exploratory study within programs for professionals in education. Front. Educ. 9:1366215. doi: 10.3389/feduc.2024.1366215

Received: 05 January 2024; Accepted: 14 March 2024; Published: 22 March 2024.


Copyright © 2024 Parmigiani, Nicchia, Murgia and Ingersoll. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Davide Parmigiani, [email protected]

This article is part of the Research Topic: Research on Teaching Strategies and Skills in Different Educational Stages.


Assessment and Feedback in Higher Education

Assessment is an essential part of education. Robust assessment processes are critical for a rigorous evaluation of the level of student learning. Beyond judgement, the modes of assessment we select will shape not just what, but how students learn. Empowering and engaging learners through assessment design and providing opportunities for dialogic feedback is central to learning and the student experience.

Modes of assessment are evolving, encompassing assessment for, of and as learning, allowing the evaluation of a broader range of professional and subject-specific competencies, providing greater student choice and opportunities for students to showcase their talents.


“Designing a diverse diet of authentic, valid and verifiable assessment tasks that excite and engage learners together with constructive and timely feedback are the most influential means we have as teachers to direct and support learning.” Dr Catherine (Kay) Hack (PFHEA), Principal Adviser (Learning and Teaching), Advance HE


Framework for Enhancing Assessment

Assessment and feedback practices are at the core of higher education (HE). Assessment provides judgement so it needs to be valid and reliable. It also supports student learning and feedback, so it should be designed to promote desirable learning behaviours, develop subject-specific and professional competencies, as well as other graduate attributes.

This framework is designed for all stakeholders: educators; those in charge of policy, quality assurance and quality enhancement; and those responsible for leading education at all levels, from pro vice-chancellors to programme leaders.


Impacts of higher education assessment and feedback policy and practice on students: a review of the literature 2016-2021

Conducted by Edd Pitt, PhD SFHEA, and Kathleen M Quinlan, PhD PFHEA, from the University of Kent, the review explores evidence from peer-reviewed journal articles relating to assessment and feedback in higher education published from 2016 to 2021. The review highlights evidence-based assessment and feedback policies or practices that have had a demonstrable impact on key student outcomes including performance, engagement and satisfaction, and prompts a rethinking of traditional views of assessment and feedback.

Advance HE members can access the literature review, a searchable dataset and an infographic exploring the research.


Degree Standards - Professional Development and Calibration for the External Examining System in the UK

The project on degree standards, running from 2016 to 2021, was managed by the Office for Students on behalf of England and the devolved administrations in Northern Ireland and Wales. OfS contracted Advance HE to work across the UK to facilitate a sector-owned development process focusing on the professional development for external examiners. The project has two interrelated parts:

  • Working with a range of higher education providers to design and pilot generic professional development for external examiners
  • Exploring different forms of calibration exercises with subject associations and Professional, Statutory and Regulatory Bodies (PSRBs)

Assessment and Feedback reports, publications and resources

Advance HE has curated a number of resources to support the understanding of the role of Assessment and Feedback in higher education. These cover areas such as learner-focused feedback, student partnerships and guides to our frameworks and are available to download below.

Essential Frameworks For Enhancing Student Success: Transforming Assessment

Our framework for transforming assessment is an integral part of the Advance HE Frameworks. This report provides an updated guide to the framework. Download the guide.

The guide is open to colleagues at Advance HE member organisations only.

On Your Marks: Learner-focused Feedback Practices and Feedback Literacy

The publication contains a diverse range of expertise in assessment and feedback, bringing together experts in a number of disciplines to present ideas on learner-centred feedback.

The main goal of the collection is to showcase learner-focused feedback practices that make an impact on student learning.  Download the publication.

More resources, reports and publications

You will find links to more resources, reports and publications in our information pack on assessment and feedback in higher education.

Ethnicity awarding gaps in UK higher education in 2019/20

Advance HE has published data and findings related to the size of ethnicity awarding gaps (the difference in the proportions of white students and students from other ethnic groups awarded a first/2:1 degree) since 2005 as part of our annual Equality in higher education statistical reports. However, this is the first dedicated report which explores ethnicity awarding gaps in detail across individual and course-level characteristics, specifically for the 2019-20 cohort of qualifiers. Download the report.

Student Partnerships in Assessment (SPiA)

The Student Partnerships in Assessment (SPiA) Connect Member Benefit Series was coordinated by Advance HE through the spring and summer of 2021. This guide is informed by the contributions of a global community engaging with this initiative and features some overarching principles for SPiA, current thinking, and how those principles can be applied in practice. Access the guide.

Assessment and Feedback in Law

This publication arose from an Advance HE Collaboration Project, which took place between 2020 and 2021, during the Covid-19 pandemic. It consists of nine case study papers, each focusing on a particular issue or innovation in Law at UK higher education institutions. Download the publication.


Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice

  • Published: June 2003
  • Volume 45, pages 477–501 (2003)


  • Mantz Yorke


The importance of formative assessment in student learning is generally acknowledged, but it is not well understood across higher education. The identification of some key features of formative assessment opens the way for a discussion of theory. It is argued that there is a need for further theoretical development in respect of formative assessment, which needs to take account of disciplinary epistemology, theories of intellectual and moral development, students' stages of intellectual development, and the psychology of giving and receiving feedback. A sketch is offered of the direction that this development might take. It is noted that formative assessment may be either constructive or inhibitory towards learning. Suggestions are made regarding research into formative assessment, and how research might contribute to the development of pedagogic practice.




Author information

Authors and affiliations.

Centre for Higher Education Development, Liverpool John Moores University, IM Marsh Campus, Barkhill Road, Liverpool, L17 6BD, England

Mantz Yorke



About this article

Yorke, M. Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education 45, 477–501 (2003). https://doi.org/10.1023/A:1023967026413


Keywords: enhancement, formative assessment


7 Smart, Fast Ways to Do Formative Assessment

Within these methods you’ll find close to 40 tools and tricks for finding out what your students know while they’re still learning.

Formative assessment—discovering what students know while they’re still in the process of learning it—can be tricky. Designing just the right assessment can feel high stakes—for teachers, not students—because we’re using it to figure out what comes next. Are we ready to move on? Do our students need a different path into the concepts? Or, more likely, which students are ready to move on and which need a different path?

When it comes to figuring out what our students really know, we have to look at more than one kind of information. A single data point—no matter how well designed the quiz, presentation, or problem behind it—isn’t enough information to help us plan the next step in our instruction.

Add to that the fact that different learning tasks are best measured in different ways, and we can see why we need a variety of formative assessment tools we can deploy quickly, seamlessly, and in a low-stakes way—all while not creating an unmanageable workload. That’s why it’s important to keep it simple: Formative assessments generally just need to be checked, not graded, as the point is to get a basic read on the progress of individuals, or the class as a whole.

7 Approaches to Formative Assessment

1. Entry and exit slips: Those marginal minutes at the beginning and end of class can provide some great opportunities to find out what kids remember. Start the class off with a quick question about the previous day’s work while students are getting settled—you can ask differentiated questions written out on chart paper or projected on the board, for example.

Exit slips can take lots of forms beyond the old-school pencil and scrap paper. Whether you’re assessing at the bottom of Bloom’s taxonomy or the top, you can use tools like Padlet or Poll Everywhere, or measure progress toward attainment or retention of essential content or standards with tools like Google Classroom’s Question tool, Google Forms with Flubaroo, and Edulastic, all of which make seeing what students know a snap.

A quick way to see the big picture if you use paper exit tickets is to sort the papers into three piles: students got the point; they sort of got it; and they didn’t get it. The size of the stacks is your clue about what to do next.

No matter the tool, the key to keeping students engaged in the process of just-walked-in or almost-out-the-door formative assessment is the questions. Ask students to write for one minute on the most meaningful thing they learned. You can try prompts like:

  • What are three things you learned, two things you’re still curious about, and one thing you don’t understand?
  • How would you have done things differently today, if you had the choice?
  • What I found interesting about this work was...
  • Right now I’m feeling...
  • Today was hard because...

Or skip the words completely and have students draw or circle emojis to represent their assessment of their understanding.

2. Low-stakes quizzes and polls: If you want to find out whether your students really know as much as you think they know, polls and quizzes created with Socrative or Quizlet, or in-class games and tools like Quizalize, Kahoot, FlipQuiz, Gimkit, Plickers, and Flippity, can help you get a better sense of how much they really understand. (Grading quizzes but assigning low point values is a great way to make sure students really try: The quizzes matter, but an individual low score can’t kill a student’s grade.) Kids in many classes are always logged in to these tools, so formative assessments can be done very quickly. Teachers can see each kid’s response, and determine both individually and in aggregate how students are doing.

Because you can design the questions yourself, you determine the level of complexity. Ask questions at the bottom of Bloom’s taxonomy and you’ll get insight into what facts, vocabulary terms, or processes kids remember. Ask more complicated questions (“What advice do you think Katniss Everdeen would offer Scout Finch if the two of them were talking at the end of chapter 3?”), and you’ll get more sophisticated insights.

3. Dipsticks: So-called alternative formative assessments are meant to be as easy and quick as checking the oil in your car, so they’re sometimes referred to as dipsticks. These can be things like asking students to:

  • write a letter explaining a key idea to a friend,
  • draw a sketch to visually represent new knowledge, or
  • do a think, pair, share exercise with a partner.

Your own observations of students at work in class can provide valuable data as well, but they can be tricky to keep track of. Taking quick notes on a tablet or smartphone, or using a copy of your roster, is one approach. A focused observation form is more formal and can help you narrow your note-taking focus as you watch students work.

4. Interview assessments: If you want to dig a little deeper into students’ understanding of content, try discussion-based assessment methods. Casual chats with students in the classroom can help them feel at ease even as you get a sense of what they know, and you may find that five-minute interview assessments work really well. Five minutes per student would take quite a bit of time, but you don’t have to talk to every student about every project or lesson.

You can also shift some of this work to students using a peer-feedback process called TAG feedback (Tell your peer something they did well, Ask a thoughtful question, Give a positive suggestion). When you have students share the feedback they have for a peer, you gain insight into both students’ learning.

For more introverted students—or for more private assessments—use Flipgrid, Explain Everything, or Seesaw to have students record their answers to prompts and demonstrate what they can do.

5. Methods that incorporate art: Consider using visual art or photography or videography as an assessment tool. Whether students draw, create a collage, or sculpt, you may find that the assessment helps them synthesize their learning. Or think beyond the visual and have kids act out their understanding of the content. They can create a dance to model cell mitosis or act out stories like Ernest Hemingway’s “Hills Like White Elephants” to explore the subtext.

6. Misconceptions and errors: Sometimes it’s helpful to see if students understand why something is incorrect or why a concept is hard. Ask students to explain the “muddiest point” in the lesson—the place where things got confusing or particularly difficult or where they still lack clarity. Or do a misconception check: Present students with a common misunderstanding and ask them to apply previous knowledge to correct the mistake, or ask them to decide if a statement contains any mistakes at all, and then discuss their answers.

7. Self-assessment: Don’t forget to consult the experts—the kids. Often you can give your rubric to your students and have them spot their strengths and weaknesses.

You can use sticky notes to get a quick insight into what areas your kids think they need to work on. Ask them to pick their own trouble spot from three or four areas where you think the class as a whole needs work, and write those areas in separate columns on a whiteboard. Have your students answer on a sticky note and then put the note in the correct column—you can see the results at a glance.

Several self-assessments let the teacher see what every kid thinks very quickly. For example, you can use colored stacking cups that allow kids to flag that they’re all set (green cup), working through some confusion (yellow), or really confused and in need of help (red).

Similar strategies involve using participation cards for discussions (each student has three cards—“I agree,” “I disagree,” and “I don’t know how to respond”) and thumbs-up responses (instead of raising a hand, students hold a fist at their belly and put their thumb up when they’re ready to contribute). Students can instead use six hand gestures to silently signal that they agree, disagree, have something to add, and more. All of these strategies give teachers an unobtrusive way to see what students are thinking.

No matter which tools you select, make time to do your own reflection to ensure that you’re only assessing the content and not getting lost in the assessment fog. If a tool is too complicated, is not reliable or accessible, or takes up a disproportionate amount of time, it’s OK to put it aside and try something different.


What Is a Formative Assessment? Types, Examples & Strategies

Matthew Tang, eLearning & Instructional Design Expert (Review Board Member)

Matthew Tang is a highly skilled eLearning consultant with over two decades of experience in delivering exceptional learning products. He has taught students in public schools and online, led online education for a Fortune 50 company, partnered with university researchers to pioneer new learning technologies, and delivered expert learning solutions to clients of all sizes. With a genuine passion for helping individuals succeed and reach their academic or business goals, Matthew continually improves and innovates educational technology solutions, making him a trusted authority in eLearning.

Michael Laithangbam, Author & Editor at ProProfs

Michael is a seasoned writer with 12+ years of experience in online learning and training. His work empowers organizations to harness the potential of knowledge in the digital era.


Ever noticed how the most memorable lessons stick with us not because of a final grade, but because of the journey there? 

That’s the magic of formative assessments—they’re not just checkpoints; they’re the secret ingredients that make learning stick. 

This blog post dives into the heart of formative assessments, revealing how they can transform classrooms by turning every lesson into an opportunity for growth and every mistake into a learning moment. 

In this definitive guide, we’ll explore the what, why, and how of formative assessments—from their defining characteristics and purpose to a variety of types and strategies for effective use in the classroom. 

Let’s begin.

What Is a Formative Assessment?

Formative assessment is a strategic approach used by educators to monitor students’ learning progress and adjust teaching methods accordingly. It’s characterized by its real-time application, providing immediate feedback that educators can use to adapt their instruction to meet learners’ current needs. 

Unlike summative assessments that evaluate overall learning at the end of an instructional period, formative assessments are conducted throughout the learning process. 

They can take various forms, including quizzes, interactive discussions, and peer reviews, all aimed at gauging understanding and facilitating continuous improvement.


What Is the Purpose of Formative Assessment?

The purpose of formative assessment is to enhance the learning process by identifying students’ strengths and areas for growth. This ongoing assessment method allows educators to:

  • Modify teaching strategies in real-time to address the immediate needs of their students.
  • Support personalized learning, ensuring that instruction is tailored to individual student progress.
  • Foster an environment of continuous feedback and growth, encouraging students to engage more deeply with their learning and identify their areas for improvement.

By integrating formative assessment into their teaching, educators can create a dynamic and responsive learning environment that supports student success and promotes a deeper understanding of the material.

Types & Examples of Formative Assessment

Formative assessments come in various formats, each designed to gather feedback on student learning in a way that informs instruction and supports student growth. Here are some common formative assessment tools:

  • Quizzes & Mini-Tests: These brief assessments are powerful tools for gauging student knowledge in a focused manner. 

When used regularly, they can highlight trends in student understanding over time, allowing educators to pinpoint specific topics that may require additional instruction or review.


  • Observations & Check-Ins: This approach involves informal yet purposeful monitoring of students during class activities. 

It offers nuanced insights into how students interact with the material and each other, providing a real-time snapshot of engagement and comprehension levels.

  • Interactive Discussions: Encouraging open dialogue about the material not only reinforces students’ understanding but also cultivates critical thinking skills. 

Discussions can unveil diverse interpretations and misconceptions, guiding educators in tailoring subsequent lessons to address these gaps.

  • Peer Reviews: Students engage in a reciprocal learning process by evaluating each other’s work. This method not only diversifies feedback but also encourages students to critically engage with the curriculum, deepening their understanding through the lens of their peers’ perspectives.
  • Exit Tickets: Simple prompts or questions at the end of a lesson offer immediate feedback on the day’s learning outcomes. Analyzing responses helps educators assess the effectiveness of their teaching and plan necessary adjustments for future classes.
  • Learning Journals: Journals that prompt reflection on what was learned and questions that arose during the lesson help students articulate their thoughts and feelings about their learning journey. 

Reviewing these journals gives educators a window into students’ self-perceived progress and areas of difficulty.

Incorporating a mix of these formative assessment types enriches the learning environment and empowers students to take an active role in their education. 

Educators can harness these tools to create a dynamic classroom atmosphere that values growth, encourages engagement, and fosters a deeper connection to the material. 

What Is the Process of a Formative Assessment?

The formative assessment process is a cyclical, interactive approach designed to gauge student understanding, provide feedback, and continuously adapt instruction throughout the learning journey. It’s a dynamic framework that supports teaching and enhances learning. 

Here’s a breakdown of the key steps involved:

Step 1: Identify Learning Objectives 

The first step involves clearly defining what students should learn. These objectives guide the creation of assessment tasks and ensure that the assessment is aligned with instructional goals.

Step 2: Select Appropriate Assessment Methods 

Choose from various assessment methods (e.g., quizzes, discussions, projects) that best suit the learning objectives and the learner’s needs. This diversity allows for a more comprehensive understanding of student learning.

Step 3: Implement the Assessment 

Carry out the chosen formative assessment during the instructional process. This could be through live quizzes, interactive discussions, peer reviews, or individual reflections. The key is to integrate these assessments seamlessly into the learning activities.

Step 4: Analyze Learner Responses 


Review the information gathered from the assessment to identify patterns, strengths, and areas for improvement. This analysis provides insights into each student’s understanding and progress.

Step 5: Provide Feedback 


Offer timely and constructive feedback to students based on their performance. Effective feedback is specific, actionable, and focused on growth, helping students understand what they did well and where they can improve.

Step 6: Adjust Instruction 

Based on the feedback and analysis, adapt your teaching strategies to address the identified learning gaps or challenges. This might involve revisiting specific topics, introducing new resources, or modifying learning activities to suit students’ needs better.

Step 7: Reflect on the Process 

Finally, reflect on the effectiveness of the formative assessment process itself. Consider what worked well and what could be improved in future iterations. This reflection helps refine the assessment process, making it more effective over time.

Throughout this process, the emphasis is on fostering an environment of continuous learning and improvement. By actively engaging in each step, educators can create a responsive classroom atmosphere that supports every student’s growth and achievement.

Strategies for Effective Formative Assessments

To maximize the benefits of formative assessments, educators need to apply strategies that make the feedback loop as effective and seamless as possible. Here’s how to ensure formative assessments contribute positively to both teaching and learning:

  • Embed Assessments in Everyday Learning 

Make formative assessments a natural extension of classroom activities. After a science experiment, for instance, ask students to explain the outcome using the theory they’ve learned. This not only assesses their understanding but also encourages critical thinking.

  • Embrace Technology for Interactive Learning 

Modern tools have revolutionized the way we assess and engage with students. ProProfs Quiz Maker, for example, offers an intuitive platform for creating quizzes that are both fun and educational. 

You can create educational quizzes that provide instant feedback, helping students identify areas of strength and those needing improvement, all within an interactive format that captures their interest.

  • Foster a Culture of Peer Feedback

Implement structured peer review sessions where students can offer constructive feedback on each other’s presentations or essays. This strategy not only diversifies the sources of feedback but also helps students develop a critical eye for their work and that of their peers.

  • Encourage Reflective Practices 

Guide students in reflecting on their learning experiences and outcomes. A reflective journal entry after completing a group project can provide insights into what they learned, the challenges they faced, and how they overcame them, fostering a deeper understanding of the learning process.

  • Connect Learning to Real-world Applications 

Design assessments that require students to apply classroom knowledge to solve real-world problems. For instance, in a geography class, students could analyze the impact of climate change on their local community, encouraging them to connect theory with practical, observable phenomena.

  • Leverage Exit Tickets for Immediate Insights 

At the end of a lesson, a simple question related to the day’s topic can serve as an exit ticket. This strategy offers quick insights into students’ understanding and retention, informing future instructional decisions.
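
When an exit-ticket prompt has a small set of expected answers, a quick tally is usually enough to decide whether to reteach. The sketch below counts responses to one hypothetical prompt; the answers and the 70% rule of thumb are assumptions for illustration.

```python
# Minimal sketch: tallying exit-ticket responses to a single prompt (hypothetical data).
from collections import Counter

exit_tickets = [
    "supply shifts right", "demand shifts left", "supply shifts right",
    "not sure", "supply shifts right", "demand shifts left",
]

counts = Counter(exit_tickets)
total = len(exit_tickets)
for answer, n in counts.most_common():
    print(f"{answer}: {n}/{total} ({n / total:.0%})")

# Rule of thumb: if the intended answer falls below ~70%, plan a brief review next class.
intended = "supply shifts right"
if counts[intended] / total < 0.7:
    print("Schedule a short review of this concept at the start of the next lesson.")
```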

Implementing these strategies can make formative assessments a powerful tool for enhancing student learning, providing educators with the flexibility to meet each student’s needs while fostering a supportive and inclusive classroom environment.

What Are the Benefits of a Formative Assessment?

Formative assessments offer a wealth of benefits that significantly contribute to both teaching efficacy and student learning outcomes. 

By integrating formative assessments into the educational process, educators and students can experience a more engaged, reflective, and practical learning journey. Here are some of the key benefits:

  • Enhanced Learning and Understanding 

Formative assessments help students consolidate their learning by actively engaging with the material. This continuous engagement promotes deeper understanding and retention of the content.

  • Immediate Feedback for Quick Adjustments

The real-time feedback provided through formative assessments allows students to identify their strengths and areas for improvement promptly. This immediacy enables quick corrective actions, fostering a more dynamic and responsive learning environment.

  • Personalized Learning Experiences 

Formative assessments identify individual learning needs, enabling educators to tailor their teaching strategies and resources. This personalization ensures that all students receive the support and challenge they need to progress.

  • Increased Student Motivation and Engagement 

Active involvement in the learning process increases students’ motivation and engagement. Formative assessments encourage students to take ownership of their learning, leading to higher levels of participation and interest.

  • Development of Critical Thinking and Skills 

Through activities like peer reviews and self-assessments, students develop essential skills, including critical thinking, self-reflection, and the ability to receive and apply feedback constructively.

  • Support for a Growth Mindset 

Formative assessments emphasize growth and improvement over grades, helping to cultivate a growth mindset among students. This perspective encourages learners to view challenges as opportunities to learn and grow rather than as failures.

  • Improved Teacher-Student Relationships 

The continuous interaction and feedback loop foster closer relationships between teachers and students. This rapport builds a supportive classroom atmosphere where students feel valued and understood.

  • Data-Driven Instructional Decisions 

Insights from formative assessments give educators a clear view of student understanding, enabling precise, data-driven adjustments to teaching. This targeted approach ensures lessons meet students’ exact needs, optimizing learning outcomes.

  • Reduction of Test Anxiety 

Integrating formative assessments throughout the learning journey shifts the focus from high-stakes evaluation to ongoing improvement, significantly easing test-related stress. This frequent, low-pressure feedback mechanism familiarizes students with the assessment process, building their confidence and diminishing anxiety over time.

  • Preparation for Summative Assessments 

Regular formative assessments prepare students for summative assessments by ensuring they understand the material and can apply their knowledge effectively. This preparation can lead to better performance on final exams and standardized tests.

How to Create a Formative Assessment Quiz

If you’re using an intuitive quiz tool, such as ProProfs Quiz Maker, the process for creating a quiz is quite straightforward. Here’s how to create a formative assessment quiz in five quick and easy steps:

Step 1: Click “Create a Quiz” on your dashboard. 

Step 2: Pick a ready-to-use quiz, create a quiz with AI, or build one from scratch.

Step 3: Add/edit the quiz title, description & cover image.

Step 4: Add/edit questions. 

Employ a variety of question formats to explore diverse knowledge and skill areas, guaranteeing a thorough examination of the topic at hand. 

ProProfs provides an array of question styles, including multiple-choice, fill-in-the-blanks, drag & drop, hotspot, and audio/video responses, facilitating a detailed assessment of learners’ comprehension.

You can add new questions by:

  • importing them from 1,000,000+ ready-to-use questions  
  • using ProProfs AI to generate questions instantly 
  • creating them by yourself

You can add images, videos, audio clips, and docs to your quiz. 

You can also automate the grading of your quizzes to save time and effort, which you can invest in providing individualized support to your learners.   
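
Conceptually, automated grading of objective items comes down to comparing submitted answers against a key. The sketch below shows that idea in generic terms; it is an illustration only, not a description of how ProProfs implements grading, and the question IDs and answers are made up.

```python
# Minimal sketch: auto-scoring single-answer questions against an answer key (hypothetical data).
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
submission = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C"}

score = sum(1 for q, key in answer_key.items() if submission.get(q) == key)
print(f"Score: {score}/{len(answer_key)} ({score / len(answer_key):.0%})")
```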

You also have the option to offer explanations for answers immediately after a question is answered in the quiz. This instant feedback not only supports the learning process but also enables students to recognize areas requiring improvement.

Step 5: Configure settings.

You can implement several security and anti-cheating measures, including:

  • Setting your quiz to be private and secured with a password
  • Randomizing the sequence of questions and/or answer choices
  • Developing a question pool and drawing a random selection of questions for each participant (the general idea is sketched after this list)
  • Overseeing the quiz through screen sharing, webcam, and microphone monitoring
  • Preventing tab switching, printing, copying, downloading, and repeated attempts
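
The question-pool and randomization options above rest on a simple idea: each participant receives a fresh random subset of questions in a random order. The snippet below sketches that idea generically with made-up question IDs; it is not ProProfs’ code.

```python
# Minimal sketch: drawing a per-participant random selection from a question pool.
import random

question_pool = [f"Q{i}" for i in range(1, 21)]  # a pool of 20 question IDs
QUESTIONS_PER_ATTEMPT = 5

def build_attempt(pool, k):
    """Return k questions chosen at random, in a random order, with no repeats."""
    return random.sample(pool, k)

for participant in ("ana", "ben"):
    print(participant, build_attempt(question_pool, QUESTIONS_PER_ATTEMPT))
```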

You can also change the quiz’s appearance by adjusting the background, colors, fonts, and button text. Plus, you can set the quiz to appear in the participant’s native language.

That’s it. Your formative assessment quiz is ready.

Analyzing the Results

After administering a formative assessment, ProProfs Quiz Maker delivers in-depth analytics that paint a complete picture of each student’s learning progress and overall class performance. This data is essential for adjusting instructional strategies to better match students’ learning needs.

Apply this insightful feedback to adjust your teaching plans, focusing on clarifying common misconceptions and bolstering areas where students show weaknesses.
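
One practical way to surface common misconceptions from result data is to check which wrong answer most students chose on each frequently missed question. The sketch below illustrates this with made-up responses and works independently of any particular quiz tool.

```python
# Minimal sketch: finding the most popular wrong answer per question (hypothetical data).
from collections import Counter

answer_key = {"Q1": "B", "Q2": "D"}

# question_id -> all answers submitted by the class
submissions = {
    "Q1": ["B", "B", "C", "B", "B"],
    "Q2": ["A", "A", "D", "A", "B"],
}

for question, answers in submissions.items():
    wrong = [a for a in answers if a != answer_key[question]]
    if not wrong:
        continue
    distractor, n = Counter(wrong).most_common(1)[0]
    print(f"{question}: {n / len(answers):.0%} of the class chose '{distractor}' "
          f"instead of '{answer_key[question]}' - a misconception worth addressing directly.")
```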

Enhance Classroom Dynamics With Formative Assessments

In conclusion, formative assessments are the core of an adaptive and responsive teaching strategy. They offer a clear window into student progress and areas for growth. This approach aligns instruction closely with student needs, significantly enhancing learning outcomes. 

By incorporating tools like ProProfs Quiz Maker, educators can design engaging and insightful assessments that contribute to a tailored learning experience. 

Start elevating your teaching approach by trying out ProProfs Quiz Maker through a free trial or requesting a demonstration today.

Frequently Asked Questions  

What are formative and summative assessments?

Formative assessments are tools teachers use during the learning process to see how students are doing and to adjust their teaching methods. Summative assessments happen at the end of a learning period, like a final exam, to measure what students have learned overall.

Are quizzes summative or formative?

Quizzes can act as both formative and summative assessments. As formative assessments, quizzes are used throughout the learning process to guide both teaching and learning. As summative assessments, quizzes evaluate students’ final understanding at the end of a unit or semester.

Is a worksheet a formative assessment?

Worksheets can serve as formative assessments when used to monitor students’ understanding and inform future teaching strategies. They become practical tools for ongoing learning and adaptation in the classroom, emphasizing feedback over final grades.

