The power of assessment feedback in teaching and learning: a narrative review and synthesis of the literature

  • Original Paper
  • Published: 09 March 2021
  • Volume 1, article number 75 (2021)


  • Michael Agyemang Adarkwah, ORCID: orcid.org/0000-0001-8201-8965


Assessment feedback is heralded as an integral facilitator of teaching and learning. Despite acknowledgement of its crucial role in education, findings on its impact are inconsistent across several dimensions: the categories of feedback, the providers of feedback, the constituents of effective feedback, and the barriers to effective feedback. This narrative synthesis examines these dimensions of assessment feedback and its role in teaching and learning. Evidence from 82 studies was synthesized narratively under themes identified in the literature. From this comprehensive review, the concept of assessment feedback and how it contributes to school effectiveness is discussed in depth. The article presents assessment feedback as a valuable factor for educators and students seeking continuous school improvement. It was found that a blended form of formative and summative feedback can improve teaching and learning. Feedback in any form should be specific, timely, frequent, supportive, and constructive. Negative feedback can distort learning, the affective states of its recipients, and the job performance of employees. Findings from the review can assist researchers, authors, and readers of feedback reviews in conceptualizing the role of assessment feedback in education. The study concludes with pedagogical implications for teaching and learning practice.



Author information

Authors and Affiliations

Southwest University, No. 2 Tianzhu Street, Beibei District, Chongqing, China

Michael Agyemang Adarkwah


Corresponding author

Correspondence to Michael Agyemang Adarkwah .

Ethics declarations

Data availability

All data analysed or generated are included in the paper.

Conflict of interest

The author declares that they have no conflict of interest.


About this article

Adarkwah, M.A. The power of assessment feedback in teaching and learning: a narrative review and synthesis of the literature. SN Soc Sci 1, 75 (2021). https://doi.org/10.1007/s43545-021-00086-w


Received: 09 September 2020

Accepted: 12 February 2021

Published: 09 March 2021

DOI: https://doi.org/10.1007/s43545-021-00086-w

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Formative assessment
  • Summative assessment


Simulation-based summative assessment in healthcare: an overview of key principles for practice

Clément Buléon

1 Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, 6th Floor, Caen, France

2 Medical School, University of Caen Normandy, Caen, France

3 Center for Medical Simulation, Boston, MA USA

Laurent Mattatia

4 Department of Anesthesiology, Intensive Care and Perioperative Medicine, Nîmes University Hospital, Nîmes, France

Rebecca D. Minehart

5 Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA USA

6 Harvard Medical School, Boston, MA USA

Jenny W. Rudolph

Fernande J. Lois

7 Department of Anesthesiology, Intensive Care and Perioperative Medicine, Liège University Hospital, Liège, Belgique

Erwan Guillouet

Anne-Laure Philippon

8 Department of Emergency Medicine, Pitié Salpêtrière University Hospital, APHP, Paris, France

Olivier Brissaud

9 Department of Pediatric Intensive Care, Pellegrin University Hospital, Bordeaux, France

Antoine Lefevre-Scelles

10 Department of Emergency Medicine, Rouen University Hospital, Rouen, France

Dan Benhamou

11 Department of Anesthesiology, Intensive Care and Perioperative Medicine, Kremlin Bicêtre University Hospital, APHP, Paris, France

François Lecomte

12 Department of Emergency Medicine, Cochin University Hospital, APHP, Paris, France

Associated Data

All data generated or analyzed during this study are included in this published article.

Healthcare curricula need summative assessments relevant to and representative of clinical situations to best select and train learners. Simulation provides multiple benefits with a growing literature base proving its utility for training in a formative context. Advancing to the next step, “the use of simulation for summative assessment” requires rigorous and evidence-based development because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start.

First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who for using simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists using a state-of-the-art literature review technique, complemented by a snowball method to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee.

Seven topics were selected by the task force: “What can be assessed in simulation?”, “Assessment tools for SBSA”, “Consequences of undergoing the SBSA process”, “Scenarios for SBSA”, “Debriefing, video, and research for SBSA”, “Trainers for SBSA”, and “Implementation of SBSA in healthcare”. Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and what is unknown and probably needs further investigation. Based on this work, we highlighted the trustworthiness of different summative assessment-related conclusions, the remaining important problems and questions, and their consequences for participants and institutions depending on how SBSA is conducted.

Our results identified among the seven topics one area with robust evidence in the literature (“What can be assessed in simulation?”), three areas with evidence that require guidance by expert opinion (“Assessment tools for SBSA”, “Scenarios for SBSA”, “Implementation of SBSA in healthcare”), and three areas with weak or emerging evidence (“Consequences of undergoing the SBSA process”, “Debriefing for SBSA”, “Trainers for SBSA”). Using SBSA holds much promise, with increasing demand for this application. Due to the important stakes involved, it must be rigorously conducted and supervised. Guidelines for good practice should be formalized to help with conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.

There is a critical need for summative assessment in healthcare education [ 1 ]. Summative assessment is high stakes, both for graduation certification and for recertification in continuing medical education [ 2 – 5 ]. Knowing the consequences, the decision to validate or not validate the competencies must be reliable, based on rigorous processes, and supported by data [ 6 ]. Current methods of summative assessment such as written or oral exams are imperfect and need to be improved to better benefit programs, learners, and ultimately patients [ 7 ]. A good summative assessment should sufficiently reflect clinical practice to provide a meaningful assessment of competencies [ 1 , 8 ]. While some could argue that oral exams are a form of verbal simulation, hands-on simulation can be seen as a solution to complement current summative assessments and enhance their accuracy by bringing these tools closer to assessing the necessary competencies [ 1 , 2 ].

Simulation is now well established in the healthcare curriculum as part of a modern, comprehensive approach to medical education (e.g., competency-based medical education) [ 9 – 11 ]. Rich in various modalities, simulation provides training in a wide range of technical and non-technical skills across all disciplines. Simulation adds value to the educational training process particularly with feedback and formative assessment [ 9 ]. With the widespread use of simulation in the formative setting, the next logical step is using simulation for summative assessment.

The shift from formative to summative assessment using simulation in healthcare must be thoughtful, evidence-based, and rigorous. Program directors and educators may find it challenging to move from formative to summative use of simulation. There are currently limited experiences (e.g., OSCE [ 12 , 13 ]) but no established guidelines on how to proceed. The evidence needed to establish the feasibility, validity, and requirements of simulation-based summative assessment (SBSA) in healthcare education has not yet been formally gathered. With this evidence, we can hope to build a rigorous and fair pathway to SBSA.

The purpose of this work is to review current knowledge on SBSA by clarifying guidance on why, how, what, when, and who. We aim to identify areas (i) with robust evidence in the literature, (ii) with evidence that requires guidance by expert opinion, and (iii) with weak or emerging evidence. This may serve as a basis for future research and guideline development for the safe and effective use of SBSA (Fig. 1).

Fig. 1 Study question and topic level of evidence

First, we performed a modified Nominal Group Technique (NGT) to define the further questions to be explored in order to have the most comprehensive understanding of SBSA. We followed recommendations on NGT for conducting and reporting this research [ 14 ]. Second, we conducted state-of-the-art literature reviews to assess the current knowledge on the questions/topics identified by the modified NGT. This work did not require Institutional Review Board involvement.

A discussion on the use of SBSA was led by executive committee members of the Société Francophone de Simulation en Santé (SoFraSimS) in a plenary session and involved congress participants in May 2018 at the SoFraSimS annual meeting in Strasbourg, France. Key points addressed during this meeting were the growing interest in using SBSA, its informal uses, and its inclusion in some formal healthcare curricula. The discussion identified that these important topics lacked current guidelines. To reduce knowledge gaps, the SoFraSimS executive committee assigned one of its members (FL, one of the authors) to lead and act as an NGT facilitator for a task force on SBSA. The task force’s mission was to map the current landscape of SBSA, including current knowledge and gaps, and potentially to identify expert recommendations.

Task force characteristics

The task force panelists were recruited among volunteer healthcare simulation trainers in French-speaking countries after a call for applications by SoFraSimS in May 2019. Recruiting criteria were a minimum of 5 years of experience in simulation and direct involvement in developing or currently running simulation programs. Thirty-four panelists (12 women and 22 men) from 3 countries (Belgium, France, and Switzerland) were included. Twenty-three were physicians and 11 were nurses, and 12 held academic positions. All were experienced simulation trainers with more than 7 years of experience and were involved in or responsible for initial training or continuing education programs with simulation. The task force leader (FL) was responsible for recruiting panelists, organizing and coordinating the modified NGT, synthesizing responses, and writing the final report. A facilitator (CB) assisted the task force leader with the modified NGT, the synthesis of responses, and the writing of the final report. Both NGT facilitators (FL and CB) had more than 14 years of experience in simulation, had experience in simulation research, and were involved in developing and running simulation programs.

First part: initial question and modified nominal group technique (NGT)

To answer the challenging question of “What do we need to know for a safe and effective SBSA practice?”, following the French Haute Autorité de Santé guidelines [ 15 ], we applied a modified nominal group technique (NGT) approach [ 16 ] between September and October 2019. The goal of our modified NGT was to define the further questions to be explored to gain the most comprehensive understanding of current SBSA (Fig. 2). The modifications to the NGT were that interactions were not in person and that some were asynchronous. These modifications were introduced because of the geographic dispersion of the panelists across multiple countries and the context of the COVID-19 pandemic.

Fig. 2 Study flowchart

The first two steps of the NGT (generation of ideas and round robin), facilitated by the task force leader (FL), were conducted online simultaneously and asynchronously via email exchanges and online surveys over a 6-week period. To initiate the first step (generation of ideas), the task force leader (FL) sent an initial non-exhaustive literature review of 95 articles and proposed the following initial items for reflection: definition of assessment, educational principles of simulation, place of summative assessment and its implementation, assessment of technical and non-technical skills in initial training, continuing education, and interprofessional training. The task force leader (FL) asked the panelists to formulate topics or questions to propose for exploration in Part 2 based on their knowledge and the literature provided. Panelists independently elaborated proposals and sent them back to the task force leader (FL), who regularly synthesized them and sent the status of the questions/topics to the whole task force while preserving the anonymity of the contributors and asking them to check the accuracy of the synthesized elements (second step, as a “round robin”).

The third step of the NGT (clarification) was carried out during a 2-h video conference session. All panelists were able to discuss the proposed ideas, group the ideas into topics, and make the necessary clarifications. As a result of this step, 24 preliminary questions were defined for the fourth step (Supplemental Digital Content 1).

The fourth step of the NGT (vote) consisted of four distinct asynchronous and anonymous online vote rounds that led to a final set of topics with related sub-questions (Supplemental Digital content 2). Panelists were asked to vote to regroup, separate, keep, or discard questions/topics. All vote rounds followed similar validation rules. We [NGT facilitators (FL and CB)] kept items (either questions or topics) with more than 70% approval ratings by panelists. We reworded and resubmitted in the next round all items with 30–70% approval. We discarded items with less than 30% approval. The task force discussed discrepancies and achieved final ratings with a complete agreement for all items. For each round, we sent reminders to reach a minimum participation rate of 80% of the panelists. Then, we split the task force into 7 groups, one for each of the 7 topics defined at the end of the vote (step 4).
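As a purely illustrative aid, the following Python sketch applies the round-validation rules described above (keep above 70% approval, reword between 30% and 70%, discard below 30%) to a set of hypothetical items; the item labels and approval rates are invented and do not come from the study.

```python
def triage_items(approval_by_item, keep_threshold=0.70, discard_threshold=0.30):
    """Classify one voting round's items using the 70%/30% validation rules.

    approval_by_item: mapping of item label -> fraction of panelists approving (0.0-1.0).
    Returns (kept, resubmit, discarded) lists of item labels.
    """
    kept, resubmit, discarded = [], [], []
    for item, approval in approval_by_item.items():
        if approval > keep_threshold:
            kept.append(item)            # > 70% approval: keep as is
        elif approval < discard_threshold:
            discarded.append(item)       # < 30% approval: discard
        else:
            resubmit.append(item)        # 30-70%: reword and resubmit next round
    return kept, resubmit, discarded

# Hypothetical approval rates for three candidate questions in one voting round
round_votes = {
    "What can be assessed in simulation?": 0.85,
    "Who should debrief summative sessions?": 0.55,
    "Should only actors be used as patients?": 0.20,
}
kept, resubmit, discarded = triage_items(round_votes)
print("Keep:", kept)
print("Reword and resubmit:", resubmit)
print("Discard:", discarded)
```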

Second part: literature review

From November 2019 to October 2020, each group identified existing literature containing the current knowledge and potential recommendations on the topic it was assigned to address. This identification was based on a non-systematic review of the existing literature. To identify existing literature, the groups conducted state-of-the-art reviews [ 17 ] and expanded their reviews with a snowballing literature review technique [ 18 ] based on the articles’ references. The literature selected by each group was added to the task force’s common library on SBSA in healthcare as the search was conducted.

For references, we searched electronic databases (MEDLINE), gray literature databases (including digital theses), simulation societies’ and centers’ websites, generic web searches (e.g., Google Scholar), and reference lists from articles. We selected publications related to simulation in healthcare using the keywords “summative assessment” and “summative evaluation,” together with specific keywords related to each topic. The search was iterative, seeking all available data until saturation was achieved. Ninety-five references were initially provided to the task force by the task force leader and NGT facilitator (FL). At the end of the work, the task force common library contained a total of 261 references.

Techniques to enhance trustworthiness from primary reports to the final report

The groups’ primary reports were reviewed and critiqued by other groups. After group cross-reviewing, primary reports were compiled by NGT facilitators (FL and CB) in a single report. This report, responding to the 7 topics, was drafted in December 2020 and submitted as a single report to an external review committee composed of 4 senior experts in education, training, and research from 3 countries (Belgium, Canada, France) with at least 15 years of experience in simulation. NGT facilitators (FL and CB) responded directly to reviewers when possible and sought assistance from the groups when necessary. The final version of the report was approved by the SoFraSimS executive committee in January 2021.

First part: modified nominal group technique (NGT)

The first two steps of the NGT, by their nature (generation of ideas and “round robin”), did not produce results on their own. The third step (clarification phase) identified 24 preliminary questions (Supplemental Digital Content 1) to be submitted to the fourth step (vote). The 4 rounds of voting (step 4) resulted in 7 topics with sub-questions (Supplemental Digital Content 2): (1) “What can be assessed in simulation?”, (2) “Assessment tools for SBSA,” (3) “Consequences of undergoing the SBSA process,” (4) “Simulation scenarios for SBSA,” (5) “Debriefing, video, research and SBSA strategies,” (6) “Trainers for SBSA,” (7) “Implementation of SBSA in healthcare”. These 7 topics and their sub-questions were the starting point for the state-of-the-art literature reviews of each group in the second part.

For each of the 7 topics, the groups highlighted what appears to be validated in the literature, the remaining important problems and questions, and their consequences for participants and institutions of how SBSA is conducted. Results in this section present the major ideas and principles from the literature review, including their nuances where necessary.

What can be assessed in simulation?

Healthcare faculty and institutions must ensure that each graduate is practice ready. Readiness to practice implies mastering certain competencies, which depends on learning them appropriately. The competency approach involves explicit definitions of the acquired core competencies necessary to be a “good professional.” Professional competency could be defined as the ability of a professional to use judgment, knowledge, skills, and attitudes associated with their profession to solve complex problems [ 19 – 21 ]. Competency is a complex “knowing how to act” based on the effective mobilization and combination of a variety of internal and external resources in a range of situations [ 19 ]. Competency is not directly observable; it is the performance in a situation that can be observed [ 19 ]. Performance can vary depending on human factors such as stress and fatigue. During simulation, competencies can be assessed by observing “key” actions using assessment tools [ 22 ]. Simulation’s limitations must be considered when defining the assessable competencies. Not all simulation methods are equally suited to assessing specific competencies [ 22 ].

Most healthcare competencies can be assessed with simulation, throughout a curriculum, if certain conditions are met. First, the competency being assessed summatively must have already been assessed formatively with simulation [ 23 , 24 ]. Second, validated assessment tools must be available to conduct this summative assessment [ 25 , 26 ]. These tools must be reliable, objective, reproducible, acceptable, and practical [ 27 – 30 ]. The small number of currently validated tools limits the use of simulation for competency certification [ 31 ]. Third, it is not necessary or desirable to certify all competencies [ 32 ]. The situations chosen must be sufficiently frequent in the student’s future professional practice (or potentially impactful for the patient) and must be hard or impossible to assess and validate in other circumstances (e.g., clinical internships) [ 2 ]. Fourth, simulation can be used for certification throughout the curriculum [ 33 – 35 ]. Finally, a limitation on the use of simulation throughout the curriculum may be a lack of logistical resources [ 36 ]. Based on our findings in the literature, we have summarized in Table 1 the educational considerations when implementing SBSA.

Table 1 Considerations for implementing a summative assessment with simulation

Assessment tools for simulation-based summative assessment

One of the challenges of assessing competency lies in the quality of the measurement tools [ 31 ]. A tool that allows raters to collect data must also allow them to give meaning to their assessment, while ensuring that it really measures what it aims to [ 25 , 37 ]. A tool must be valid and capable of measuring the assessed competency with fidelity and reliability while providing reproducible data [ 38 ]. Since a competency is not directly measurable, it will be analyzed on the basis of learning expectations, the most “concrete” and observable form of a competency [ 19 ]. Several authors have described definitions of the concept of validity and the steps to achieve it [ 38 – 41 ]. Despite different validation approaches, the objectives are similar: to ensure that the tool is valid, that the scoring items reflect the assessed competency, and that the contents are appropriate for the targeted learners and raters [ 20 , 39 , 42 , 43 ]. A tool should have psychometric characteristics that allow users to be confident of its reproducibility, discriminatory nature, reliability, and external consistency [ 44 ]. One way to ensure that a tool has acceptable validity is to compare it to existing, validated tools that assess the same skills for the same learners. Finally, it is important to consider the consequences of the test to determine whether it best discriminates competent students from others [ 38 , 43 ].

Like a diagnostic score, a relevant assessment tool must be specific [ 30 , 39 , 41 ]. A tool is not inherently good or bad; it becomes valid through a rigorous validation process [ 39 , 41 , 42 ]. This validation process determines whether the tool measures what it is supposed to measure and whether this measurement is reproducible at different times (test–retest) or with 2 observers simultaneously. It also determines whether the tool’s results are correlated with another measure of the same ability or competency and whether the consequences of the tool’s results are related to the learners’ actual competency [ 38 ].
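To make the reproducibility check with two simultaneous observers concrete, the short Python sketch below computes Cohen's kappa, one common chance-corrected agreement statistic for two raters scoring the same performances. It is only a hedged illustration: the rater data are invented, and the authors do not prescribe this particular coefficient.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same performances."""
    assert len(rater_a) == len(rater_b) and rater_a, "ratings must be paired and non-empty"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)  # undefined if expected agreement is 1

# Hypothetical example: two raters scoring 10 performances on one checklist item
rater_1 = ["done", "done", "not", "done", "not", "done", "done", "not", "done", "done"]
rater_2 = ["done", "not", "not", "done", "not", "done", "done", "done", "done", "done"]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.52 for this toy data
```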

Following Messick’s framework, which aimed to gather different sources of validity into one global concept (unified validity), Downing describes five sources of validity that must be assessed during the validation process [ 38 , 45 , 46 ]. Table 2 presents an illustration of the development used in SBSA according to the unified validity framework for a technical task [ 38 , 45 , 46 ]. An alternative framework using three sources of validity for teamwork’s non-technical skills is presented in Table 3.

Table 2 Example of the development of a tool to assess technical skill achievement in a simulated situation, based on work by Oriot et al., Downing, and Messick’s framework [ 38 , 46 , 47 ]

Table 3 Example of the development of an assessment tool for the observation of teamwork in simulation [ 48 ]

A tool is validated in a language. Theoretically, this tool can only be used in that language, given the nuances involved in interpretation [ 49 ]. In certain circumstances, a tool that has been “translated,” but not “translated and validated in a specific language,” can lead to semantic biases that affect the meaning of the content and its representation [ 49 – 55 ]. For each assessment sequence, validity criteria consist of using different tools in different assessment situations and integrating them into a comprehensive program that considers all aspects of competency. The rating made with a validated tool for one situation must be combined with other assessment situations, since there is no “ideal” tool [ 28 , 56 ]. A given tool can be used with different professions, with participants at different levels of expertise, or in different languages if it is validated for these situations [ 57 , 58 ]. In a summative context, a tool must have demonstrated a high level of validity to be used, because of the high stakes for the participants [ 56 ]. Finally, the use or creation of an assessment tool requires trainers to question its various aspects, from how it was created to its reproducibility and the meaning of the results generated [ 59 , 60 ].

Two types of assessment tools should be distinguished: tools that can be adapted to different situations and tools that are specific to a situation [ 61 ]. Thus, technical skills may have a dedicated assessment tool (e.g., intraosseous) [ 47 ] or an assessment grid generated from a list of pre-established and validated items (e.g., TAPAS scale) [ 62 ]. Non-technical skills can be observed using scales that are not situation-specific (e.g., ANTS, NOTECHS) [ 63 , 64 ] or that are situation-specific (e.g., TEAM scale for resuscitation) [ 57 , 65 ]. Assessment tools should be provided to participants and should be included in the scenario framework, at least as a reference [ 66 – 69 ]. In the summative assessment of a procedure, structured assessment tools should probably be used, using a structured objective assessment form for technical skills [ 70 ]. The use of a scale, in the context of the assessment of a technical gesture, seems essential. As with other tools, any scale must be validated beforehand [ 47 , 70 – 72 ].
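As a minimal sketch of the difference between a situation-specific checklist (analytical, item-by-item scoring) and a global rating scale (holistic scoring), consider the hypothetical Python example below; the checklist items and the 1-5 global rating are invented for illustration and are not drawn from validated tools such as TAPAS, ANTS, NOTECHS, or TEAM.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    description: str
    done: bool

def analytical_score(items):
    """Fraction of checklist items performed: an analytical, item-by-item score."""
    return sum(item.done for item in items) / len(items)

# Hypothetical, simplified checklist for a technical skill station
checklist = [
    ChecklistItem("Identifies correct insertion site", True),
    ChecklistItem("Maintains aseptic technique", True),
    ChecklistItem("Confirms placement before use", False),
]

holistic_rating = 4  # hypothetical 1-5 global rating assigned by the rater

print(f"Analytical score: {analytical_score(checklist):.0%}")  # 67% of items done
print(f"Holistic rating: {holistic_rating}/5")
```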

Consequences of undergoing the simulation-based summative assessment process

Summative assessment has two notable consequences for learning strategies. First, it may drive the learner’s behavior during the assessment, whereas it is essential to assess the competencies targeted, not the ability of the participant to adapt to the assessment tool [ 6 ]. Second, the key pedagogical concept of “pedagogical alignment” must be respected [ 23 , 73 ]. This means that assessment methods must be coherent with the pedagogical activities and objectives. For this to happen, participants must have had formative simulation training focusing on the assessed competencies prior to the SBSA [ 24 ].

Participants have commonly been reported as experiencing mild (e.g., appearing slightly upset, distracted, teary-eyed, quiet, or resistant to participating in the debriefing) or moderate (e.g., crying, making loud and frustrated comments) psychological events in simulation [ 74 ]. While voluntary recruitment for formative simulation is commonplace, all students are required to take summative assessments in training. This required participation in high-stakes assessment may have a more consequential psychological impact [ 26 , 75 ]. This impact can be modulated by training and assessment conditions [ 75 ]. First, the repetition of formative simulations reduces the psychological impact of SBSA on participants [ 76 ]. Second, transparency about the objectives and methods of assessment limits detrimental psychological impact [ 77 , 78 ]. Finally, detrimental psychological impacts are increased by abnormally high physiological or emotional stress, such as fatigue or stressful events in the 36 h preceding the assessment, and students with a history of post-traumatic stress disorder or psychological disorder may be strongly and negatively impacted by the simulation [ 76 , 79 – 81 ].

It is necessary to optimize SBSA implementation to limit its negative pedagogical and psychological impacts. Ideally, it has been proposed that the summative assessment take into account the formative assessment that has already been carried out [ 1 , 20 , 21 ]. Similarly, in continuing education, the professional context of the person assessed should be considered. In the event of failure, it will be necessary to ensure sympathetic feedback and to propose a new assessment if necessary [ 21 ].

Scenarios for simulation-based summative assessment

Some authors argue that there are differences between summative and formative assessment scenarios [ 76 , 79 – 81 ]. The development of an SBSA scenario begins with the choice of a theme, which is most often agreed upon by experts at the local level [ 66 ]. Themes are most often chosen based on the participants’ competencies to be assessed and included in the competency requirements for initial [ 82 ] and continuing education [ 35 , 83 ]. A literature review even suggests the need to choose themes covering all the competencies to be assessed [ 41 ]. These choices of themes and objectives also depend on the simulation tools technically available: “The themes were chosen if and only if the simulation tools were capable of reproducing ‘a realistic simulation’ of the case.” [ 84 ].

The main quality criterion for SBSA is that the cases selected and developed are guided by the assessment objectives [ 85 ]. It is necessary to be clear about the assessment objectives of each scenario to select the right assessment tool [ 86 ]. Scenarios should meet four main principles: predictability, programmability, standardizability, and reproducibility [ 25 ]. Scenario writing should include a specific script, cues, timing, and events to practice and assess the targeted competencies [ 87 ]. The implementation of variable scenarios remains a challenge [ 88 ]. Indeed, most authors develop only one scenario per topic and skill to be assessed [ 85 ]. There are no recommendations for setting a predictable duration for a scenario [ 89 ]. Based on our findings, we suggest some key elements for structuring an SBSA scenario in Table 4. For technical skill assessment, scenarios will be short and the assessment based on an analytical score [ 82 , 89 ]. For non-technical skill assessment, scenarios will be longer and the assessment based on analytical and holistic scores [ 82 , 89 ].

Table 4 Key elements for structuring a summative assessment scenario

Debriefing, video, and research for simulation-based summative assessment

Studies have shown that debriefings are essential in formative assessment [ 90 , 91 ]. No such studies are available for summative assessment. Good practice requires debriefing in both formative and summative simulation-based assessments [ 92 , 93 ]. In SBSA, debriefing is often brief feedback given at the end of the simulation session, in groups [ 85 , 94 , 95 ] or individually [ 83 ]. Debriefing can also be done later with a trainer and the help of video, or via written reports [ 96 ]. These debriefings make it possible to assess clinical skills for summative assessment purposes [ 97 ]. Some tools have been developed to facilitate this assessment of clinical reasoning [ 97 ].

Video can be used for four purposes: session preparation, simulation improvement, debriefing, and rating (Table 5) [ 95 , 98 ]. In SBSA sessions, video can be used during the prebriefing to provide participants with standardized and reproducible information [ 99 ]. Video can increase the realism of the situation during the simulation, for example with ultrasound loops and laparoscopy footage. Simulation recordings can be reviewed either for debriefing or for rating purposes [ 34 , 71 , 100 , 101 ]. Video is very useful for training raters (e.g., for calibration and recalibration) [ 102 ]. It enables raters to rate the participants’ performance offline and allows an external review if necessary [ 34 , 71 , 101 ]. Despite the technical difficulties to be considered [ 42 , 103 ], it can be expected that video-based automated scoring assistance will facilitate assessments in the future.

Table 5 Uses of video for simulation-based formative and summative assessment

The constraints associated with the use of video concern the participants’ consent, compliance with local rules, and the requirement that the structure in charge of video-based assessment protect individuals’ rights and data safety, both at the national level and at higher levels (e.g., the European GDPR) [ 104 , 105 ].

In Table 5, we list the main uses of video during simulation sessions found in the literature.

Research in SBSA can focus, as in formative assessment, on the optimization of simulation processes (programs, structures, human resources). Research can also explore the development and validation of summative assessment tools, the automation and assistance of assessment resources, and the pedagogical and clinical consequences of SBSA.

Trainers for simulation-based summative assessment

Trainers for SBSA probably need specific skills because of the high number of potential errors or biases in SBSA, despite the quest for objectivity (Table 6) [ 106 ]. The difficulty in ensuring objectivity is likely the reason why the use of self or peer assessment in the context of SBSA is not well documented and the literature does not yet support it [ 59 , 60 , 107 , 108 ].

Table 6 Potential errors, effects, and bias in simulation-based summative assessment [ 109 , 110 ]

SBSA requires the development of specific scenarios, staged in a reproducible way, and the mastery of assessment tools to avoid assessment bias [ 111 – 114 ]. Fulfilling those requirements calls for specific abilities to fit with the different roles of the trainer. These different roles of trainers would require specific initial and ongoing training tailored to their tasks [ 111 , 113 ]. In the future, concepts of the roles and tasks of these trainers should be integrated into any “training of trainers” in simulation.

Implementation of simulation-based summative assessment in healthcare

The use of SBSA was described by Harden in 1975 with Objective Structured Clinical Examination (OSCE) tests for medical students [ 115 ]. The summative use of simulation has been introduced in different ways depending on the professional field and the country [ 116 ]. There is more literature on certification at the undergraduate and graduate levels than on recertification at the postgraduate level. The use of SBSA in recertification is currently more limited [ 83 , 117 ]. Participation is often mandated, and it does not provide a formal assessment of competency [ 83 ]. Some countries are defining processes for the maintenance of certification in which simulation is likely to play a role (e.g., in the USA [ 118 ] and France [ 116 ]). Recommendations regarding the development of SBSA for OSCEs were issued by the AMEE (Association for Medical Education in Europe) in 2013 [ 12 , 119 ]. Combined with other recommendations that address the organization of examinations using other immersive simulation modalities, in particular full-scale sessions using complex mannequins [ 22 , 85 ], they give us a solid foundation for the implementation of SBSA.

The overall process to ensure a high-quality examination by simulation is therefore defined but particularly demanding. It mobilizes many material and human resources (administrative staff, trainers, standardized patients, and healthcare professionals) and requires a long development time (several months to years depending on the stakes) [ 36 ]. We believe that the steps to be taken during the implementation of SBSA range from setting up a coordination team to supervising the writers, the raters, and the standardized patients, as well as taking into account the logistical and practical pitfalls.

The development of a competency framework valid for an entire curriculum (e.g., medical studies) satisfies a fundamental need [ 7 , 120 ]. This development allows identification of the competencies to be assessed with simulation, those to be assessed by other methods, and those requiring triangulation by several assessment methods. This identification then guides scenario writing and examination development with good content validity. Scenarios and examinations will form a bank of reproducible assessment exercises. The examination quality process, including psychometric analyses, is part of the development process from the beginning [ 85 ].

We have summarized in Table 7 the different steps in the implementation of SBSA.

Table 7 Implementation of simulation-based summative assessment step by step

Recertification

Recertification programs for various healthcare domains are currently being implemented or planned in many countries (e.g., in the USA [ 118 ] and France [ 116 ]). This is a continuation of the movement to promote the maintenance of competencies. Examples can be cited in France, with the creation of an agency for continuing professional development, and in the USA, with the Maintenance Of Certification [ 83 , 126 ]. The certification of healthcare facilities and even teams is also being studied [ 116 ]. Simulation is regularly integrated into these processes (e.g., in the USA [ 118 ] and France [ 116 ]). Although we found some common ground between the certification and recertification processes, there are many differences (Table 8).

Table 8 Commonalities and discrepancies between certification and recertification

Currently, when simulation-based training is mandatory (e.g., within the American Board of Anesthesiology’s “Maintenance Of Certification in Anesthesiology,” or MOCA 2.0®, in the US), it is most often a formative process [ 34 , 83 ]. SBSA has a place in the recertification process, but there are many pitfalls to avoid. In the short term, we believe that it will be easier to incorporate formative sessions as a first step. The current consensus seems to be that there should be no pass/fail recertification simulation without personalized, global professional support that is not limited to a binary aptitude/inaptitude approach [ 21 , 116 ].

Many important issues and questions remain in the field of SBSA. This discussion returns to the seven topics we identified and highlights these points, their implications for the future, and some possible leads for future research and guideline development for the safe and effective use of simulation in summative assessment.

SBSA is currently used mainly in initial training, in uni-professional and individual settings, via standardized patients or task trainers (OSCE) [ 12 , 13 ]. In the future, SBSA will also be used in continuing education for professionals who will be assessed throughout their careers (recertification), as well as in interprofessional settings [ 83 ]. When certifying competencies, it is important to keep in mind the difference between the desired competencies and the observed performances [ 128 ]. Indeed, what constitutes a competency must be specifically defined [ 6 , 19 , 21 ]. Competencies are what we wish to evaluate during the summative assessment to validate or revalidate a professional for his/her practice. Performance is what can be observed during an assessment [ 20 , 21 ]. In this context, we consider three unresolved issues. The first issue is that an assessment only gives access to a performance at a given moment ("performance is a snapshot of a competency"), whereas one would like to assess a competency more generally [ 128 ]. The second issue is how an observed performance, especially in simulation, reveals a real competency in real life [ 129 ]. In other words, does the success or failure of a single SBSA really reflect actual real-life competency? [ 130 ] The third issue is the assessment of team performance and competency [ 131 – 133 ]. Until now, SBSA has come from the academic field and has been an individual assessment (e.g., OSCE). Future SBSA could involve teams, driven by governing bodies, institutions, or insurers [ 134 , 135 ]. The competency of a team is not the sum of the competencies of the individuals who compose it. How can we proceed to assess teams as a specific entity, both composed of individuals and independent of them? To make progress on these three issues, we believe it is probably necessary to accept the approximation between the observed, assessed performance and the underlying competency, but only within an explicitly specified scope of validity. Research in these areas is needed to define that scope and answer these questions.

Research on the consequences of undergoing SBSA has focused on the psychological aspects and has set aside the more usual consequences, such as achieving (or not) the minimum passing score. Future research should embrace the broader field of SBSA consequences, including how reliably SBSA determines whether someone is competent.

Rigor and method in the development and selection of assessment tools are paramount to the quality of the summative assessment [ 136 ]. The literature shows that it is necessary that assessment tools be specific to their intended use, that their intrinsic characteristics be described, and that they be validated [ 38 , 40 , 41 , 137 ]. These specific characteristics must be respected to avoid two common issues [ 1 , 6 ]. The first issue is that of a poorly designed or constructed assessment tool. Such a tool can only give poor assessments because it will be unable to capture performance correctly and therefore cannot approach the skill to be assessed in a satisfactory way [ 56 ]. The second issue is related to poor or incomplete tool evaluation or inadequate tool selection. If the tool is poorly evaluated, its quality is unknown [ 56 ], and the scope of any assessment made with it is limited by that imprecision. If the tool is poorly selected, it will not accurately capture the performance being assessed. Again, the summative assessment will be compromised. It is currently difficult to find tools that meet all the required quality and validation criteria [ 56 ]. On the one hand, this requires complex and rigorous work; on the other hand, the potential number of tools required is large. Thus, the overall volume of work needed to rigorously produce assessment tools is substantial. However, the literature provides the characteristics of validity (content, response process, internal structure, comparison with other variables, and consequences) and the process of developing high-quality, reliable assessment tools [ 38 – 41 , 45 ]. It therefore seems important to systematize the use of these guidelines for the selection, development, and validation of assessment tools [ 137 ]. Work in this area is needed, and network collaboration could be a solution to move forward more quickly toward a bank of valid and validated assessment tools [ 39 ].

We focused our discussion of the consequences of SBSA on aspects other than the determination of competencies and passing rates. Establishing and maintaining psychological safety is mandatory in simulation [ 138 ]. Considering the psychological and physiological consequences of SBSA is fundamental to controlling and limiting negative impacts. Summative assessment has consequences for both the participants and the trainers [ 139 ]. These consequences are often ignored or underestimated, yet they can affect the conduct or results of the summative assessment. The consequences can be positive or negative. The "testing effect" can have a positive impact on long-term memory [ 139 ]. On the other hand, negative psychological (e.g., stress or post-traumatic stress disorder) and physiological (e.g., sleep disturbance) consequences can occur or worsen an already fragile state [ 139 , 140 ]. These negative consequences can lead to questioning the tools used and the assessments made. They must therefore be considered when designing and conducting the SBSA, and we believe that strategies to mitigate their impact must be put in place. Trainers and participants must be aware of these difficulties in order to better anticipate them. This is a real duality for the trainer: he/she has to carry out the assessment in order to determine a mark and, at the same time, guarantee the psychological safety of the participants. It seems fundamental to us that trainers master all aspects of SBSA as well as the concept of the safe container [ 138 ] to maximize the chances of a good experience for all. We believe that ensuring a fluid pedagogical continuum, from training to (re)certification in both initial and continuing education, using modern pedagogical techniques (e.g., mastery learning, rapid cycle deliberate practice) [ 141 – 144 ], could help maximize the psychological and physiological safety of participants.

The structure and use of scenarios in a summative setting are unique and therefore require specific development and skills [ 83 , 88 ]. SBSA scenarios differ from formative assessment scenarios in the educational objectives that guide their development. Summative scenarios are designed to assess a skill through observation of performance, while formative scenarios are designed to help learners progress in mastering that same skill. Although there may be a continuum between the two, they remain distinct. SBSA scenarios must be predictable, programmable, standardizable, and reproducible [ 25 ] to ensure that participants' performances are assessed fairly. This is even more crucial when standardized patients are involved (e.g., OSCE) [ 119 , 145 ]; in this case, a specific script with expectations and training is needed for the standardized patient. The problem is that there are currently many formative scenarios but few summative ones. Developing them is time-consuming and requires rigor, expertise, and expert trainer resources. We believe that a goal should be to homogenize the scenarios, along with preparing the human resources who will implement them (trainers and standardized patients) and their application. One solution would be to develop a methodology for converting formative scenarios into summative ones in order to create a structuring model for summative scenarios. This would reinvest the time and expertise already devoted to developing formative scenarios.

Debriefing for simulation-based summative assessment

The place of debriefing in SBSA is currently undefined and raises important questions that need exploration [ 77 , 90 , 146 – 148 ]. Debriefing for formative assessment promotes knowledge retention and helps to anchor good behaviors while correcting less ideal ones [ 149 – 151 ]. In general, taking an exam promotes learning of the topic [ 139 , 152 ]. Formative assessment without a debriefing has been shown to be detrimental, so it is reasonable to assume that the same is true in summative assessment [ 91 ]. The ideal modalities for debriefing in SBSA are currently unknown [ 77 , 90 , 146 – 148 ]. Integrating debriefing into SBSA raises organizational, pedagogical, cognitive, and ethical issues that need to be clarified. From an organizational perspective, debriefing consumes time and human resources, and the extent of this impact varies according to whether the feedback is automated, standardized, or personalized, and whether it is collective or individual. From an educational perspective, debriefing ensures pedagogical continuity and continued learning. We believe this notion is nuanced, depending on whether the debriefing is integrated into the summative assessment or instead follows the assessment while focusing on formative elements. If the debriefing is part of the SBSA, it is no longer a "teaching moment," and this must be factored into the instructional strategy. How should the trainer prioritize debriefing points between those established in advance for the summative assessment and those that emerge from an individual's performance? From a cognitive perspective, integrating the debriefing into the summative assessment may alter the interactions between the trainer and the participants. We believe that, in this case, the participant will sometimes face the cognitive dilemma of whether to express his/her "true" opinions or instead attempt to provide the expected answers; the trainer then becomes uncertain of what he/she is actually assessing. Finally, from an ethical perspective, in the case of a mediocre or substandard clinical performance, there is the question of how the trainer resolves discrepancies between observed behavior and what the participant reveals during the debriefing. What weight should be given to the simulation and to the debriefing in the final rating? There is probably no single solution to how and when the debriefing should be conducted during a summative assessment; rather, we promote the idea of adapting debriefing approaches (e.g., group or individualized debriefing) to various conditions (e.g., success or failure in the summative assessment). These questions need to be explored to determine how debriefing should ideally be conducted in SBSA. We believe a balance must be found that is ethically and pedagogically satisfactory, does not induce a cognitive dilemma for the trainer, and is practically manageable.

The skills and training of trainers required for SBSA are crucial and must be defined [ 136 , 153 ]. We consider that the skills and training required for SBSA closely mirror those needed for formative assessment in simulation; this continuity is part of the pedagogical alignment. These different steps have common characteristics (e.g., rules in simulation, scenario flow) and specific ones (e.g., using assessment tools, validating competence). To ensure pedagogical continuity, the trainers who supervise these courses must be trained in and have mastered simulation, adhering to pedagogical theories. We believe training for SBSA involves new skills and a potentially greater cognitive load for trainers, and it is necessary to provide solutions to both of these issues. Regarding the new skills, we consider it necessary to adapt or complete the training of trainers by integrating the knowledge and skills needed to properly conduct SBSA: good assessment practices, assessment bias mitigation, rater calibration, mastery of assessment tools, etc. [ 154 ]. To manage the cognitive load induced by the tasks and challenges of SBSA, we suggest dividing the tasks between different trainer roles. We believe that conducting a SBSA therefore requires three types of trainers whose training is adapted to their specific role. First, there are the trainer-designers, who are responsible for designing the assessment situation, selecting the assessment tool(s), training the trainer-rater(s), and supervising the SBSA sessions. Second, there are the trainer-operators, responsible for running the simulation conditions that support the assessment. Third, there are the trainer-raters, who conduct the assessment using the assessment tool(s) selected by the trainer-designer(s) and for which they have been specifically trained. The high-stakes nature of SBSA requires a high level of rigor and professionalism from all three types of trainers, which implies a working definition of the required skills and the training necessary to be up to the task.

Implementing simulation-based summative assessment in healthcare

Implementing SBSA is delicate: it requires rigor and respect for each step, and it must be evidence-based. While OSCEs are simulation-based, simulation is not limited to OSCEs. Summative assessment with OSCEs has been used and studied for many years [ 12 , 13 ]. This literature is therefore a valuable source of lessons about summative assessment applied to simulation as a whole [ 22 , 85 , 155 ]. Knowledge from OSCE summative assessment needs to be supplemented so that simulation more broadly can support summative assessment according to good evidence-based practices. Given the high stakes of SBSA, we believe it is necessary to rigorously and methodically adapt what has already been validated (e.g., scenarios, tools) during implementation and to proceed with caution for what has not yet been validated. As described above, many steps and prerequisites are necessary for optimal implementation, including (but not limited to) identifying objectives; identifying and validating assessment tools; preparing simulation scenarios, trainers, and raters; and planning a global strategy, from integrating the summative assessment into the curriculum to managing the consequences of this assessment. SBSA must be conducted within a strict framework for its own sake and that of the people involved. Poor implementation would be detrimental to participants, trainers, and the practice of SBSA. This risk is greater for recertification than for certification [ 156 ]. While initial training can accommodate SBSA easily because it is familiar (e.g., trainees engage in OSCEs at some point in their education), including SBSA in the recertification of practicing professionals is less obvious and may be context-dependent [ 157 ]. We understand that the consequences of failed recertification are potentially more impactful, both psychologically and for professional practice. We believe that solutions must be developed, tested, and validated that both fill these gaps and protect professionals and patients. Implementation of SBSA must therefore be progressive, rigorous, and evidence-based to be accepted and successful [ 158 ].

Limitations

This work has some limitations that should be emphasized. First, it covers only a limited number of issues related to SBSA. The entire topic may not be covered, and we may not have explored other questions of interest. Nevertheless, the NGT methodology allowed this work to focus on the issues that were most relevant and challenging to the panel. Second, the literature review method (state-of-the-art literature reviews expanded with a snowball technique) does not guarantee exhaustiveness, and publications on the topic may have escaped the screening phase. However, it is likely that we identified the key articles focused on the questions explored; potentially unidentified articles would therefore either not be important to the topic or would address questions not selected by the NGT. Third, this work was done by a French-speaking group, and a Francophone-specific approach to simulation, although not described to our knowledge, cannot be ruled out. This risk is reduced by the fact that the work is based on international literature from different specialties in different countries and that the panelists and reviewers were from different countries. Fourth, the analysis and discussion of the consequences of SBSA focused on psychological consequences; this does not cover the full range of consequences, including the impact on subsequent curricula or career pathways. Data on this subject exist in the literature and probably deserve a specific body of work. Despite these limitations, we believe this work is valuable because it raises questions and offers some leads toward solutions.

Conclusions

The use of SBSA is very promising, with a growing demand for its application. Indeed, SBSA is a logical extension of simulation-based formative assessment and of the development of competency-based medical education. It is probably wise to anticipate and plan approaches to SBSA, as many important moving parts, questions, and consequences are emerging. Clearly identifying these elements and their interactions will aid in developing reliable, accurate, and reproducible models. All this requires a meticulous and rigorous approach to preparation, commensurate with the challenges of certifying or recertifying the skills of healthcare professionals. We have explored the current knowledge on SBSA and have shared an initial mapping of the topic. Among the seven topics investigated, we identified (i) areas with robust evidence (what can be assessed with simulation); (ii) areas with limited evidence that can be assisted by expert opinion and research (assessment tools, scenarios, and implementation); and (iii) areas with weak or emerging evidence requiring guidance by expert opinion and research (consequences, debriefing, and trainers) (Fig. 1). We modestly hope that this work can support reflection on SBSA for future investigations and can drive guideline development for SBSA.

Acknowledgements

The authors thank SoFraSimS Assessment with simulation group members: Anne Bellot, Isabelle Crublé, Guillaume Philippot, Thierry Vanderlinden, Sébastien Batrancourt, Claire Boithias-Guerot, Jean Bréaud, Philine de Vries, Louis Sibert, Thierry Sécheresse, Virginie Boulant, Louis Delamarre, Laurent Grillet, Marianne Jund, Christophe Mathurin, Jacques Berthod, Blaise Debien, and Olivier Gacia who have contributed to this work. The authors thank the external experts committee members: Guillaume Der Sahakian, Sylvain Boet, Denis Oriot and Jean-Michel Chabot; and the SoFraSimS executive Committee for their review and feedback.


Authors' contributions

CB helped with the study conception and design, data contribution, data analysis, data interpretation, writing, visualization, review, and editing. FL helped with the study conception and design, data contribution, data analysis, data interpretation, writing, review, and editing. RDM, JWR, and DB helped with the writing, review, and editing. JWR and DB helped with the data interpretation, writing, review, and editing. LM, FJL, EG, ALP, OB, and ALS helped with the data contribution, data analysis, data interpretation, and review. The authors read and approved the final manuscript.

Funding

This work has been supported by the French Speaking Society for Simulation in Healthcare (SoFraSimS).

This work is part of CB's PhD, which has been supported by grants from the French Society for Anesthesiology and Intensive Care (SFAR), the Arthur Sachs-Harvard Foundation, the University Hospital of Caen, the North-West University Hospitals Group (G4), and the Charles Nicolle Foundation. The funding bodies did not have any role in the design of the study, the collection, analysis, and interpretation of the data, or the writing of the manuscript.

Availability of data and materials

Declarations.

Not applicable.

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Quality indicators of effective teacher-created summative assessment

https://research.usq.edu.au/item/z0z8v/quality-indicators-of-effective-teacher-created-summative-assessment


Write a Literature Review (Sheridan Libraries guide)

Not every source you found should be included in your annotated bibliography or lit review. Only include the most relevant and most important sources.

Get Organized

  • Lit Review Prep: Use this template to help you evaluate your sources, create article summaries for an annotated bibliography, and build a synthesis matrix for your lit review outline.

Summarize your Sources

Summarize each source: Determine the most important and relevant information from each source, such as the findings, methodology, theories, etc. Consider using an article summary or study summary to help you organize and summarize your sources.

Paraphrasing

  • Use your own words, and do not copy and paste the abstract
  • The library's tutorials about plagiarism are excellent and will help you paraphrase correctly

Annotated Bibliographies

Annotated bibliographies can help you clearly see and understand the research before diving into organizing and writing your literature review. Although typically part of the "summarize" step of the literature review, annotations should not merely be summaries of each article; instead, they should be critical evaluations of the source and help determine its usefulness for your lit review.

Definition:

A list of citations on a particular topic, followed by an evaluation of each source's argument and other relevant material, including its intended audience, sources of evidence, and methodology.
  • Explore your topic.
  • Appraise issues or factors associated with your professional practice and research topic.
  • Help you get started with the literature review.
  • Think critically about your topic, and the literature.

Steps to Creating an Annotated Bibliography:

  • Find Your Sources
  • Read Your Sources
  • Identify the Most Relevant Sources
  • Cite your Sources
  • Write Annotations

Annotated Bibliography Resources

  • Purdue Owl Guide
  • Cornell Annotated Bibliography Guide

Summative assessment of clinical practice of student nurses: A review of the literature

Affiliations.

  • 1 Department of Nursing Science, University of Eastern Finland, Kuopio, Finland; Saimaa University of Applied Sciences, Lappeenranta, Finland. Electronic address: [email protected].
  • 2 Department of Nursing Science, University of Eastern Finland, Kuopio, Finland.
  • 3 School of Nursing, Midwifery, Social Work and Social Science, University of Salford, Salford, United Kingdom.
  • PMID: 26522265
  • DOI: 10.1016/j.ijnurstu.2015.09.014

Objectives: To provide an overview of summative assessment of student nurses' practice currently in use.

Design: Narrative review and synthesis of qualitative and quantitative studies.

Data sources: With the support of an information specialist, the data were collected from scientific databases which included CINAHL, PubMed, Medic, ISI Web of Science, Cochrane library and ERIC published from January 2000 to May 2014. Sources used in all of the included studies were also reviewed.

Review methods: 725 articles concerned with student nurse clinical practice assessment were identified. After inclusion and exclusion criteria, 23 articles were selected for critical review.

Results: Findings suggest that the assessment process of student nurses' clinical practice lacks consistency. It is open to the subjective bias of the assessor, and the quality of assessment varies greatly. Student nurses' clinical assessment was divided into 3 themes: acts performed before the final assessment, the actual final assessment situation, and the acts after the final assessment situation. Mentors and students need teachers to provide them with an orientation to the assessment process and the paperwork. Terminology on evaluation forms is sometimes so difficult to grasp that mentors did not understand what it meant. There is no consensus about the ability of written assignments to describe students' skills. Mentors have timing problems in ensuring relevant assessment of student nurses. At the final interview students normally self-assess their performance; the mentor assesses by interview and by written assignments whether the student has achieved the criteria, and the role of the teacher is to support the mentor and the student in appropriate assessment. The variety of patient treatment environments in which student nurses perform their clinical practice periods is also challenging for the assessment of student nurses' expertise.

Conclusions: Mentors want clinical practice to be a positive experience for student nurses, and this might lead them to give higher grades than student nurses in fact deserve. It is very rare that student nurses fail their clinical practice. If a student nurse does not achieve the clinical competencies, they are allowed extra time in clinical areas until they are assessed as competent. Further research needs to be carried out to gain more knowledge about the final assessment at the end of clinical practice. Through further research it will be possible to develop better methods for high-quality assessment processes and feedback to student nurses. Quality in assessment improves patient safety.

Keywords: Assessment; Clinical practice; Nurse education; Review; Student nurse.

Copyright © 2015 Elsevier Ltd. All rights reserved.




COMMENTS

  1. PDF Research on Classroom Summative Assessment

    review the literature on teachers' summative assessment practices to note their influence on teachers and teaching and on students and learning. It begins with an overview of effective summative assessment practices, paying particular attention to the skills and competencies that teachers need to create their own assessments, interpret ...

  2. New Trends in Formative-Summative Evaluations for Adult Education

    This study aimed to identify new trends in adult education formative-summative evaluations. Data were collected from multiple peer-reviewed sources in a comprehensive literature review covering the period from January 2014 to March 2019. A total of 22 peer-reviewed studies were included in this study.

  3. PDF Literature Overview on Summative Quizzes and Continuous Assessment

    Literature Overview on Summative Quizzes and Continuous Assessment Using summative online assignments as part of your assessment ... Russell, Elton et al. (2006) present a critical review of summative assessment neglecting formative aspects: "It is possible for an assessment to have both formative and summative aspects (as in some

  4. A Literature Review of Assessment

    Assessment is frequently described as a four-step process [1, 7]. Learning outcomes or objectives are developed first. Second, assessment tools are created to measure how well the student has learned. The third step is the actual teaching and learning process, during and after which assessment tools can be administered.

  5. Study on the Effectiveness of Formative and Summative Assessment

    2000). b) Formative Assessment. According to Beverley, formative assessment involves the "teacher gathering, interpreting, and acting on information about the students' learning, in order ...

  6. The power of assessment feedback in teaching and learning: a ...

    From the comprehensive review of the literature, the concept of assessment feedback and how it contributes to school effectiveness is thoroughly discussed. ... Summative assessment feedback has the impetus to imbue in learners intrinsic motivation to learn, which can result in empowerment and deep learning (Deeley 2013). Providers of feedback ...

  7. PDF A systematic review of the impact of summative assessment and tests on

    A systematic review of the impact of summative assessment and tests on students' motivation for learning 1 SUMMARY Background The current widespread use of summative assessment and tests is supported by a range of arguments. The points made include that not only do tests indicate standards to be aimed for and enable these standards to be

  8. PDF Exploring Variation in Summative Assessment : Language Teachers ...

    The review of the literature has focused on formative and summative assessment and then the relation between the two types of assessments. Summative Assessment. Summative assessment aims at recording or reporting the students' achievement (Harlen, 2005). In other words, summative assessment is the reflection of what they have learned in the past.

  9. Summative assessment of clinical practice of student nurses: A review

    The purpose of this review was to provide an overview of the approaches to the summative assessment of student nurses' practice that are currently in use. 3. Methods. The argumentative nature of a narrative literature review, to aggregate and summarize evidence, has been considered a strength for the method (Webb and Roe, 2007). The narrative ...

  10. PDF From Through-Course Summative to Adaptive Through-Year Models ...

    Through-Year Assessment Literature Review. 1. Introduction. 1.1. Literature Review Overview. The purpose of this literature review is to study the advantages and limitations of various through-course summative assessment (TCSA) models with the goal of informing the design of the new and innovative adaptive through-year assessment system at ...

  11. Full article: A systematic review on factors influencing teachers

    The wide use of summative assessment impede teachers' implementation of formative assessment when various stakeholders (e.g., ... Expanding views about formative assessment: A review of the literature. In J. H. McMillan (Ed.), Formative classroom assessment: Theory into practice (pp. 43-62). Teachers College Press. Google Scholar

  12. Formative vs. summative assessment: impacts on academic motivation

    Review of the literature. In the field of teaching English as a foreign language, several researchers and experts defined the term "assessment" as a pivotal component of the process of teaching. ... The effect of formative assessment on performance in summative assessment: A study on business English students in a language training center ...

  13. Formative assessment: A systematic review of critical teacher

    This study aims to address this gap, by reviewing the available evidence from the literature about prerequisites for teachers' use of formative assessment. This review sought to address the following research question: What teacher prerequisites need to be in place for using formative assessment in their classroom practice? 2. Method2.1 ...

  14. Simulation-based summative assessment in healthcare: an overview of key

    For the initiation of the first step (generation of ideas), the task force leader (FL) sent an initial non-exhaustive literature review of 95 articles and proposed the initial following items for reflection: definition of assessment, educational principles of simulation, place of summative assessment and its implementation, assessment of ...

  15. Formative assessment and feedback for learning in higher education: A

    An online summative assessment, used as a post-test, was administered immediately after the formative task. Findings indicate no significant difference between the feedback conditions and achievement on the post-test. In a similar study, Gaona et al. also considered the impacts of immediate feedback provided on short-answer online quizzes ...

  16. 'Formative good, summative bad?'

    Abstract. The debate between summative and formative assessment is creating a situation that increasingly calls to mind the famous slogan in George Orwell's (1945) Animal Farm - 'Four legs good, two legs bad'. Formative assessment is increasingly being portrayed in the literature as 'good' assessment, which tutors should strive towards, whereas summative assessment is 'bad ...

  17. PDF Classroom Assessment Techniques: A Literature Review

    and assessment practices" (Armellini & Aiyegbayo, 2010, p. 922). The online activities designed during Carpe Diem were successfully used primarily for learning and formative assessment, with exception to some summative assessments. Web 2.0 tools were employed to enable collaborative online learning and were prominent in the new designs (Armellini

  18. PDF English-language Literature Review

    continuous summative assessment, and to examine the political, social and cultural factors that affect how teachers and students practise formative assessment in different learning and assessment contexts (Ecclestone, 2002, 2004 ... ENGLISH-LANGUAGE LITERATURE REVIEW - „assessment for

  19. Summative Assessment

    Academy for Teaching and Learning. In contrast to formative assessment, summative assessment evaluates a student's knowledge of material at a given point in time in relation to previously determined learning goals.

  20. (PDF) The power of assessment feedback in teaching and learning: a

    The paper contributes to the extant literature on assessment feedback by highlighting the integral role it plays in improving teaching and learning ...

  21. Quality indicators of effective teacher-created summative assessment

    Purpose The current literature on school teacher-created summative assessment lacks a clear consensus regarding its definition and key principles. The purpose of this research was therefore to arrive at a cohesive understanding of what constitutes effective summative assessment. Design/methodology/approach Conducting a systematic literature review of 95 studies, this research adhered to the ...

  22. Summarize

    Annotated Bibliographies. Annotated bibliographies can help you clearly see and understand the research before diving into organizing and writing your literature review. Although typically part of the "summarize" step of the literature review, annotations should not merely be summaries of each article - instead, they should be critical ...

  23. Summative assessment of clinical practice of student nurses: A review

    Objectives: To provide an overview of summative assessment of student nurses' practice currently in use. Design: Narrative review and synthesis of qualitative and quantitative studies. Data sources: With the support of an information specialist, the data were collected from scientific databases which included CINAHL, PubMed, Medic, ISI Web of Science, Cochrane library and ERIC published from ...